With the widespread use of AI in socially important decision-making processes, it becomes crucial to ensure that AI-generated decisions do not discriminate against certain groups or populations. To address this challenge, our research introduces a theoretical framework based on the spider diagram, a reasoning system rooted in first-order predicate logic that extends Euler and Venn diagrams, to define and verify the fairness of AI algorithms in decision-making. The framework identifies bias in a model by comparing the set of actual outcomes produced by the algorithm with the set of expected outcomes, where the expected outcomes are computed from the similarity scores between individual instances in the dataset. If the set of actual outcomes is a subset of the set of expected outcomes, and every constant spider in the former set has a corresponding foot in the expected outcome set, then the algorithm is free from bias. We further evaluate the performance of the AI model using a spider diagram in place of the conventional confusion matrix. The framework also allows us to define a degree of bias and to evaluate it for specific AI models. Experimental results indicate that this framework surpasses traditional approaches in efficiency, reducing both processing time and the number of function calls.
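For illustration, the following Python sketch approximates the subset-based check described above: each instance's expected-outcome set is taken to be the outcomes of its sufficiently similar peers, and the degree of bias is the fraction of instances whose actual outcome falls outside that set. The similarity measure, the threshold, and the helper names (`similarity`, `expected_outcomes`, `degree_of_bias`) are illustrative assumptions, not the paper's spider-diagram construction.

```python
# Minimal sketch of the subset-based bias check, assuming a simple
# inverse-distance similarity and a fixed similarity threshold. The paper
# reasons over these sets with spider diagrams; all names here are
# hypothetical stand-ins for that construction.
from typing import Sequence


def similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Inverse-distance similarity between two feature vectors."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)


def expected_outcomes(features, actual, threshold=0.8):
    """Expected-outcome set per instance: the outcomes of all *other*
    sufficiently similar instances (falling back to the instance's own
    outcome when it has no similar peers)."""
    expected = []
    for i, fi in enumerate(features):
        peers = {actual[j] for j, fj in enumerate(features)
                 if j != i and similarity(fi, fj) >= threshold}
        expected.append(peers or {actual[i]})
    return expected


def degree_of_bias(features, actual, threshold=0.8):
    """Fraction of instances whose actual outcome lies outside the
    expected-outcome set; 0.0 plays the role of 'the actual outcomes
    are contained in the expected outcomes' from the abstract."""
    expected = expected_outcomes(features, actual, threshold)
    violations = sum(1 for y, exp in zip(actual, expected) if y not in exp)
    return violations / len(actual)


# Two nearly identical applicants with different decisions are flagged.
X = [[0.9, 0.1], [0.91, 0.1], [0.1, 0.9]]
y = [1, 0, 0]
print(degree_of_bias(X, y))  # 0.666..., so the model is flagged as biased
```

In this toy example the first two instances are almost identical yet receive different decisions, so each one's actual outcome falls outside its peer-derived expected set, yielding a nonzero degree of bias.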