In April 2023, my colleague Gabriella Waters and I initiated the Interdisciplinary AI & Machine Learning Working Group, a biweekly meeting to discuss papers on the scientific, sociopolitical, and philosophical implications of contemporary AI/ML technologies. The group is hosted and supported by the Center for Equitable AI and Machine Learning Systems (CEAMLS) at Morgan State University.
Researchers from all fields with an interest in AI/ML technologies are encouraged to join our group. Email jaywilliam.honenberger@morgan.edu to be added to the mailing list.
Interdisciplinary AI & Machine Learning Working Group – Mission Statement
There is currently great interest in AI in public, commercial, and academic forums. There remains a disconnect, however, between discussions of AI that focus on technical details such as specific machine learning approaches, models, and data sets, on the one hand, and those that focus on broader social, scientific, and philosophical implications, on the other. This group aims to bridge that divide by reading and discussing papers that engage with both the technical and the broader dimensions. We welcome researchers from all areas – STEM, the social sciences, and the humanities – with an interest in both the technical aspects of AI and machine learning and their larger context and significance.
Meeting Times (subject to change by group agreement):
1st and 3rd Thursdays of each month, 12:00–1:00pm ET, via Zoom
Hosts / organizers:
Gabriella Waters (CEAMLS) and Phillip Honenberger (CEAMLS; Dept. of Philosophy & Religious Studies, Morgan State University)
Contact for more information / expressions of interest:
Phillip Honenberger (jaywilliam.honenberger@morgan.edu)
Upcoming and previous meetings (updated Oct. 17, 2023):
Thurs., Oct. 19 – “AI and health equity”
Introduced by: Gabriella Waters (CEAMLS)
(1) Hendricks-Sturrup et al., "Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI/ML…" (2023)
(2) Supplementary: “An Expert Panel Discussion Embedding Ethics & Equity in AI/ML,” Big Data, 2023
Thurs., Oct. 5 – “AI and humor”
Introduced by: Daniel Brunson (Morgan State, Dept. of Philosophy & Religious Studies)
(1) Thomas Winters, "Computers Learning Humor is No Joke" (2021): https://hdsr.mitpress.mit.edu/pub/wi9yky5c/release/3
(2) Kramer, "The Philosophy of Humor: What Makes Something Funny?" (2022): https://1000wordphilosophy.com/2022/11/20/the-philosophy-of-humor/
(3) Anjum & Lieberman, "Exploring Humor in Natural Language Processing" (2023)
Thurs., Sept. 21 – “Explainability of AI/ML systems in healthcare contexts”
Introduced by: Phillip Honenberger (CEAMLS)
(1) Keller et al., "Augmenting Decision Competence in Healthcare Using AI-based Cognitive Models" (2020)
(2) Byeon, "Advances in Machine Learning and Explainable Artificial Intelligence for Depression Prediction" (2023)
Thurs., Sept. 7 – “Gun-detection software in Baltimore schools”; “Operational criteria of consciousness in AI”
Introduced by: Gabriella Waters (CEAMLS) and Phillip Honenberger (CEAMLS)
(1) Wintrode, "Baltimore county schools add gun detection software to 7000 security cameras," Baltimore Banner (2023): https://www.thebaltimorebanner.com/education/k-12-schools/baltimore-county-schools-gun-detection-2KQS5MJJSNFO5LRVC74PYTTWWM/
(2) Lenharo, "If AI becomes conscious, here's how researchers will know," Nature (2023): https://www.nature.com/articles/d41586-023-02684-5
(3) Finkel, "If AI becomes conscious, how will we know?" Science (2023): https://www.science.org/content/article/if-ai-becomes-conscious-how-will-we-know
(4) Supplementary: Butlin & Long et al., “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” (2023): https://arxiv.org/abs/2308.08708
Thurs., Aug. 17 – “Big data and health equity”
Introduced by: Odia Kane (JHU)
(1) Doerr & Meeder, "Big Health Data Research and Group Harm: the Scope of IRB Review" (2022)
(2) Tsosie et al., "We Have Gifted Enough: Indigenous Genomic Data Sovereignty in Precision Medicine" (2021)
Thurs., Aug. 3 – “AI and film”
Introduced by: Lara Simmons (CEAMLS)
(1) Tong et al., "The Use of Deep Learning and VR Technology in Film and Television Production...", Frontiers in Psychology (2021): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8080441/
(2) Pontefract, "Can Artificial Intelligence Help the Film Industry?" Forbes (2023): https://www.forbes.com/sites/danpontefract/2023/04/24/can-artificial-intelligence-help-the-film-industry-it-already-is/
(3) Smith, "'Of course it's disturbing': Will AI change the film industry forever?" Guardian (2023): https://www.theguardian.com/film/2023/mar/23/ai-change-hollywood-film-industry-concern
Thurs., July 20 – “AI ethics frameworks (meta-analysis)”; “analogy between biological and artificial neural networks”
Introduced by: Phillip Honenberger (CEAMLS)
(1) Hagendorff, “The Ethics of AI Ethics” (2020)
(2) Macpherson et al., “Natural and Artificial Intelligence” (2021)
Thurs., July 6
Meeting canceled due to schedule conflicts
Thurs., June 15 – “Turing test”; “large language models”; “artificial general intelligence (AGI)”
Introduced by: Pihlwa Lee (CEAMLS)
(1) Terry Sejnowski, "Large Language Models and the Reverse Turing Test" (Neural Computation, 2023)
Thurs., June 1 – “AI in education; human-AI interaction; research methods for exploring human-AI interaction”
Introduced by: Lara Simmons (CEAMLS)
(1) Matt Cronin, “Do advances in AI risk a future of human incompetence?” (The Hill, May 2023)
(2) Heikkilä, “A chatbot that asks questions could help you spot when it makes no sense” (MIT Technology Review, April 2023)
(3) Danry et al., “Don’t Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanation as Questions Improve Human Logical Discernment Accuracy over Causal AI Explanations,” CHI ’23 (Conference on Human Factors in Computing Systems, April 2023)
(4) Jakesch et al., “Co-Writing with Opinionated Language Models Affects Users’ Views,” CHI ’23 (Conference on Human Factors in Computing Systems, April 2023)
Thurs., May 18 – “Prompt engineering; ChatGPT”
Introduced by: Gabriella Waters (CEAMLS) & William Mapp (CEAMLS)
(1) White et al., "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" (arXiv preprint, 2023)
(2) Sorensen, Robinson, Rytting, et al., "An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels" (arXiv preprint, 2022)
Thurs., May 4 – “ethics of predictive AI applications; ethics of AI-informed decision making; ChatGPT; agency”
Introduced by: Phillip Honenberger (CEAMLS; Dept. of Philosophy & Religious Studies, Morgan State)
(1) Desai et al., "Against predictive optimization" (2023): https://predictive-optimization.cs.princeton.edu/
(2) Floridi & Chiriatti, "GPT-3: Its Nature, Scope, Limits, and Consequences" (Minds & Machines, 2020): https://link.springer.com/article/10.1007/s11023-020-09548-1
(3) Floridi, "AI as Agency Without Intelligence" (Philosophy & Technology, 2023): https://link.springer.com/article/10.1007/s13347-023-00621-y
Thurs., April 20 – “gating networks; dynamic mixture-of-experts models; Wisconsin card sorting task; ANN-brain analogy; ChatGPT; AI ‘hallucinations’; political reaction to ChatGPT”
Introduced by: William Mapp (CEAMLS)
Papers read and discussed:
(1) “A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex” (PNAS, 2020): https://www.pnas.org/doi/10.1073/pnas.2009591117
(2) “ChatGPT Invents Sexual Harassment Scandal” (The Sync, 2023): https://thesyncweekly.com/chatgpt-invents-sexual-harassment-scandal/
(3) “Italy bans ChatGPT” (The Sync, 2023): https://thesyncweekly.com/italy-bans-chatgpt/
Thurs., April 6 – “generative adversarial networks (GANs); linking brain activity to visual experience via network models; COMPAS algorithm; fairness and bias in AI applications”
Introduced by: Gabriella Waters (CEAMLS) and Phillip Honenberger (CEAMLS; Dept. of Philosophy & Religious Studies, Morgan State)
Papers read and discussed:
(1) Takagi & Nishimoto, “High-resolution image reconstruction with latent diffusion models from human brain activity” (bioRxiv, 2022)
(2) Larson, Mattu, Kirchner, & Angwin, “How we analyzed the COMPAS recidivism algorithm” (ProPublica, 2016): https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm