Date of Award


Document Type


Degree Name

Master of Science (MS)


Department
Computer Engineering and Sciences

First Advisor

Siddhartha Bhattacharyya

Second Advisor

Chiradeep Sen

Third Advisor

Nasheen Nur

Fourth Advisor

Philip J. Bernhard

Abstract

Artificial Intelligence (AI) often makes critical decisions opaquely, without explaining the reasoning behind them, and decision support systems are frequently built as black boxes. This has generated interest in Explainable AI (XAI), an area of research that explains AI algorithms and provides insight into their internal decision-making processes. Advances in XAI have improved the interpretability, explainability, and transparency of machine learning (ML) models. There has also been speculation about whether the type of dataset used to train a model can be predicted by analyzing its results. In this research, we propose a methodology for determining the linearity or non-linearity of a given dataset. The rule set for differentiating between datasets is expected to be derived from experiments that vary several factors, including the datasets and their types, model parameters, evaluation metrics, and hyperparameter-driven approaches. As part of this research effort, we distinguish between linear and non-linear datasets: we apply the methodology to a collection of datasets of both kinds and observe a pattern that reveals the linearity of the data used by the black-box model.
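The thesis's actual rule set is derived experimentally, but the core idea of testing a dataset for linearity can be sketched with a simple, hypothetical heuristic: fit an ordinary least-squares line and check how much variance it explains. The threshold (0.9) and the synthetic datasets below are illustrative assumptions, not the method used in this work.

```python
import numpy as np

def linear_r2(X, y):
    """R^2 of an ordinary least-squares fit with an intercept term.

    A value near 1 suggests the target is well explained by a linear
    model of the features; a low value suggests non-linear structure.
    """
    A = np.column_stack([X, np.ones(len(X))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def classify_linearity(X, y, threshold=0.9):
    """Illustrative rule: call the dataset 'linear' if OLS R^2 exceeds
    the (assumed) threshold, otherwise 'non-linear'."""
    return "linear" if linear_r2(X, y) > threshold else "non-linear"

# Two synthetic datasets: one genuinely linear, one non-linear.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y_lin = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, 200)   # linear relation
y_non = np.sin(3.0 * X[:, 0]) + rng.normal(0, 0.1, 200)  # non-linear relation

r2_lin = linear_r2(X, y_lin)
r2_non = linear_r2(X, y_non)
print(classify_linearity(X, y_lin), classify_linearity(X, y_non))
```

This single-score heuristic only probes first-order linear fit; the methodology proposed here instead looks for patterns across many datasets, model parameters, and metrics to derive a more robust rule set.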


Copyright held by author