The Quantitative and Qualitative AI Ethics Lab (QQAEL, pronounced “quale”) at Morgan State University is a team of researchers from backgrounds including philosophy, computer science, mathematics, and social science. We leverage this interdisciplinary skill set to tackle complex problems in ethical AI development, evaluation, and deployment.
A major focus of the lab is quantitative metrics for ethical values such as fairness, transparency, and individual and collective agency, including questions about the scope and limits both of specific metrics and of quantitative metrics in general. Current projects include critical and constructive work on resolving conflicts between incompatible fairness metrics (“fairness impossibility”); disentangling different concepts and contexts of bias, with guidance on when biases are ethically acceptable or unacceptable; and exploring how and why “proxies” align, or fail to align, with the values they are used to measure.
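As a minimal illustration of the fairness-impossibility problem, the Python sketch below (an illustration for this page, not the lab's code) simulates two groups with different base rates and scores both with one shared classifier: the groups end up with equal error rates but unequal positive predictive value, so equalized error rates and predictive parity cannot both be satisfied, in line with well-known impossibility results (e.g., Chouldechova 2017; Kleinberg et al. 2016).

```python
# A minimal numerical illustration of "fairness impossibility": when two
# groups have different base rates, one classifier cannot in general satisfy
# equalized error rates and predictive parity at the same time.
import numpy as np

rng = np.random.default_rng(0)

def rates(y_true, y_pred):
    """Return (PPV, FPR) for binary label and prediction arrays."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return tp / (tp + fp), fp / (fp + tn)

# Two groups with identical score distributions given the true label,
# but different base rates of the positive label.
for group, base_rate in [("A", 0.5), ("B", 0.2)]:
    y = (rng.random(100_000) < base_rate).astype(int)   # true labels
    score = y * 0.3 + rng.random(y.size) * 0.7          # noisy risk score
    y_hat = (score > 0.5).astype(int)                   # one shared threshold
    ppv, fpr = rates(y, y_hat)
    print(f"group {group}: base rate {base_rate:.1f}  PPV {ppv:.2f}  FPR {fpr:.2f}")

# Both groups get the same FPR (~0.29) but different PPVs (~0.71 vs. ~0.38):
# error-rate parity holds while predictive parity fails.
```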
Publications:
(1) G. Waters, M. Mapp, and P. Honenberger, "Decisional Value Scores: A New Family of Metrics for Ethical AI-ML" (2024), AI & Ethics, https://doi.org/10.1007/s43681-024-00504-8
(2) P. Honenberger, "Fairness Impossibility in AI-ML Systems: An Integrated Ethics Approach" (2024), in V. C. Mueller, A. R. Dewey, L. Dung, and G. Loehr (eds.), Philosophy of Artificial Intelligence: The State of the Art, Synthese Library, Berlin: Springer Nature (forthcoming).
(3) P. Honenberger, O. Ola, W. Mapp, and P. Lee, "Effects of Matching on Evaluations of Accuracy, Fairness, and Fairness Impossibility in AI-ML Systems" (2024), The International FLAIRS Conference Proceedings, 37(1), https://doi.org/10.32473/flairs.37.1.135585
Software:
(1) Worldview Simulator 2.0: Free, interactive software for exploring worldviews with LLMs. Users select and combine specific cosmological or value orientations based on the positions of prominent figures such as Confucius, Nietzsche, St. Paul, and Emma Goldman. After creating a worldview "agent" through these selections, users can interact with it via prompts (as with any other LLM) and compare the agent's responses with the underlying model's unfiltered responses. (Thanks to QQAEL lab members Olusola Olabanjo (web development, programming, co-conceptualization) and Parris Haynes (co-conceptualization) for their essential contributions to this project.)
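To make the interaction pattern concrete, here is a hypothetical Python sketch of how a worldview agent might wrap an LLM. The `complete` stub, the `WORLDVIEWS` prompt texts, and the function names are illustrative assumptions, not the simulator's actual implementation.

```python
# Hypothetical sketch of the worldview-agent pattern described above; this is
# NOT the Worldview Simulator's actual implementation. The prompt texts and
# the `complete` stub are illustrative placeholders.

WORLDVIEWS = {
    "Confucius": "Answer from a broadly Confucian value orientation.",
    "Nietzsche": "Answer from a broadly Nietzschean value orientation.",
}

def complete(prompt: str) -> str:
    """Stand-in for an LLM completion call; swap in a real client here."""
    return f"[model response to: {prompt[:40]}...]"

def worldview_agent(figures: list[str], user_prompt: str) -> str:
    """Answer a prompt through a combined worldview 'agent'."""
    # Combine the selected orientations into a single preamble that
    # conditions the model's answer, then forward the user's prompt.
    preamble = "\n".join(WORLDVIEWS[f] for f in figures)
    return complete(f"{preamble}\n\nUser: {user_prompt}")

def compare(figures: list[str], user_prompt: str) -> tuple[str, str]:
    """Pair the worldview agent's answer with the unfiltered model answer."""
    return worldview_agent(figures, user_prompt), complete(user_prompt)

print(compare(["Confucius", "Nietzsche"], "Is ambition a virtue?"))
```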