
Center for Equitable Artificial Intelligence and Machine Learning Systems


AIM-LIFT: Artificial Intelligence and Machine Learning Interdisciplinary Forum for Theory

What is AIM-LIFT?

AIM-LIFT is a reading and discussion group on AI and its social implications. We meet twice each month to discuss recently published papers.

AIM-LIFT Mission Statement:
Artificial Intelligence is rapidly evolving with still uncharted benefits and costs. Steering a course for AI in society, and society alongside AI, demands a combination of technical competence and theoretical imaginativeness that is still too rare. AIM-LIFT aims to improve AI futures through technically informed and theoretically sophisticated discussion of AI/ML in its full context, drawing on multiple disciplines, skills, and backgrounds.

Agenda:
We meet on the 1st and 3rd Tuesdays of each month, from 12 pm to 1 pm, via Zoom. Someone gives a short introduction to the papers (5-10 minutes), and the group discusses.

Hosts:
AIM-LIFT is co-hosted by Gabriella Waters (gabriella.waters@morgan.edu) and Phillip Honenberger (jaywilliam.honenberger@morgan.edu), both at CEAMLS (the Center for Equitable AI & ML Systems) at Morgan State University. The group is supported by CEAMLS.

To participate in AIM-LIFT, write to jaywilliam.honenberger@morgan.edu to be added to the list and receive the readings and the Zoom link.

Topics and Readings at Past AIM-LIFT Meetings [updated April 2, 2024]

AIM-LIFT was founded in April 2023 by Gabriella Waters and Phillip Honenberger.



Tues., May 6, 2025: The Biology of LLMs / AI as Normal Technology

(1) Lindsey et al., "The biology of a large language model" (2025): https://transformer-circuits.pub/2025/attribution-graphs/biology.html

(2) Narayanan & Kapoor, "AI as normal technology" (2025): https://knightcolumbia.org/content/ai-as-normal-technology

April 2025: No meeting [N-SEA 2025]

Tues., March 4, 2025: Existential Risk

(1) Gebru & Torres, "The TESCREAL bundle: Eugenics and the promise of Utopia through artificial intelligence," First Monday (2024)

https://firstmonday.org/ojs/index.php/fm/article/view/13636/11606

(2) Swoboda et al., "Examining popular arguments against AI existential risk: a philosophical analysis." ArXiv (2025). https://arxiv.org/abs/2501.04064

Tues., Feb. 4, 2025: The DeepSeek Disruption

Introduced by: William Mapp (CEAMLS)

(1) Carl Franzen, "Why Everyone in AI is Freaking Out about DeepSeek," VentureBeat (2025): https://venturebeat.com/ai/why-everyone-in-ai-is-freaking-out-about-deepseek/

(2) DeepSeek-AI, "DeepSeek-R1" (2025) (https://arxiv.org/pdf/2501.12948)

(3) AI Papers Academy, "DeepSeek R1 Paper Explained": https://aipapersacademy.com/deepseek-r1/

Supplementary / Optional readings:

(4) Shao et al., "DeepSeekMath" (2024), https://arxiv.org/abs/2402.03300

(5) DeepSeek-AI, "DeepSeek-V3" (2024), https://arxiv.org/abs/2412.19437

(6) Chen et al., "Janus-Pro," (2025) https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf  

Tues., Jan. 7, 2025: Neurodiversity and AI

Introduced by: Gabriella Waters (CEAMLS)

  1. Wu et al., "Finding My Voice over Zoom: An Autoethnography of Videoconferencing Experience for a Person Who Stutters" (2024) https://dl.acm.org/doi/full/10.1145/3613904.3642746
  2. Liebel et al., "Challenges, Strengths, and Strategies of Software Engineers with ADHD: A Case Study" (2024) https://dl.acm.org/doi/pdf/10.1145/3639475.3640107

Tues., Dec. 3, 2024: Marxist perspectives on AI

Introduced by: Kyle Stine (Johns Hopkins University)

  1. Caffentzis, "Why Machines Cannot Create Value" (2013) 
  2. Pasquinelli, The Eye of the Master (2023)

Tues., Nov. 12, 2024: Reasoning and inference in LLMs

Introduced by: Phillip Honenberger (CEAMLS)

  1. Hase et al., “Fundamental Problems with Model Editing: How Should Rational Belief Revision Work in LLMs?” (2024) https://arxiv.org/abs/2406.19354
  2. Betz, "Probabilistic coherence, logical consistency, and Bayesian learning: Neural language models as epistemic agents" (2023) https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0281372

Tues., Oct. 15, 2024: Generative AI, Labor, and Economy

Introduced by: Larry Liu (CEAMLS)

  1. Drozd et al., “Generative AI: A Turning Point for Labor’s Share?” http://www.lukasz-drozd.com/uploads/4/3/1/8/43183209/eiq124-generative-ai.pdf
  2. Wilmers, “Generative AI and the Future of Inequality” https://mit-genai.pubpub.org/pub/24gsgdjx/release/1
  3. Occhipinti et al., “The Recessionary Pressures of Generative AI: A Threat to Wellbeing” https://arxiv.org/abs/2403.17405

Tues., Oct. 1, 2024: Covertly Racist LLMs

  1. Hofmann et al., “AI Generates Covertly Racist Decisions about People Based on Their Dialect,” Nature (2024)
  2. Vaswani et al., “Attention is All You Need”, ArXiv (2017)

Tues., Sept. 17, 2024: Generative AI and Scientific Integrity

  1. Blau et al., “Protecting Scientific Integrity in an Age of Generative AI” (2024), Proceedings of the National Academy of Sciences, https://www.pnas.org/doi/10.1073/pnas.2407886121
  2. Conroy, “How ChatGPT and other tools could disrupt scientific publishing” (2024), Nature
  3. Pu et al., “ChatGPT and generative AI are revolutionizing the scientific community: A Janus-faced conundrum” (2024), iMeta

Tues., April 2, 2024: Introduction to Bias Mitigation Techniques

(1) Feldman & Peake, "End-to-End Bias Mitigation: Removing Gender Bias in Deep Learning" (2021) 

(2) Wang & Russakovsky, "Overwriting Pretrained Bias with Finetuning Data" (2023)

Tues., March 5, 2024: Diffusion Models and Image Generative AI

(1) Robertson, “Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis” (2024), The Verge (https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical)

(2) Isaacs-Thomas, “How AI turns text into images” (2023), PBS News Hour (https://www.pbs.org/newshour/science/how-ai-makes-images-based-on-a-few-words)

(3) Bushwick et al., “See how AI generates images from text” (2023), Scientific American (https://www.scientificamerican.com/article/see-how-ai-generates-images-from-text/)

(4) OPTIONAL (for the “deep dive”): Yang et al., “Diffusion Models: A Comprehensive Survey of Methods and Applications” (2023), ACM Computing Surveys (https://www.researchgate.net/publication/374330983_Diffusion_Models_A_Comprehensive_Survey_of_Methods_and_Applications)

Tues., Feb. 20, 2024: Women in AI / Diversity in AI Workforce

Introduced by: Saata Senii (Morgan State)

(1) Kassova, “Where are all the ‘godmothers’ of AI?” (2023) https://www.theguardian.com/global-development/2023/nov/25/where-are-godmothers-of-ai-womens-voices-not-heard-in-tech-sam-altman-openai

(2) McKinsey Report on AI, 2022: The state of AI in 2022 – and a half-decade in review

(3) Bemba, “NYT Missed These 12 Trailblazers: Meet the Women Transforming AI”: https://medium.com/womenintechnology/ny-times-missed-these-12-trailblazers-meet-the-women-transforming-ai-ae522f52a8b7

Tues., Feb. 6, 2024: Student data, privacy, and AI in higher ed.

Introduced by: Daniel Brunson (Morgan State, Dept. of Philosophy & Religious Studies)

(1) Mathewson, “He Wanted Privacy. His College Gave Him None.” (2023): https://themarkup.org/machine-learning/2023/11/30/he-wanted-privacy-his-college-gave-him-none

(2) Jones et al., “A Matter of Trust: Higher Education Institutions as Information Fiduciaries…” Journal of the Association for Information Science & Technology (2019)



Thurs., Dec. 7, 2023: Generative AI’s Effects on Labor Markets

Introduced by: Larry Liu

(1) Hui et al., "Short Term Effects of Generative AI on Employment" (2023)

(2) Zarifhonarvar, "Economics of ChatGPT"

Thurs., Nov 16, 2023: Indigenous Perspectives and AI

Introduced by: Lara Simmons

(1) https://www.japantimes.co.jp/news/2023/04/10/world/indigenous-language-ai-colonization-worries/

(2) https://www.scientificamerican.com/article/ai-can-help-indigenous-people-protect-biodiversit

(3) https://techpolicy.press/an-indigenous-perspective-on-generative-ai/

Thurs., Nov. 2, 2023: Political Economy of AI

Introduced by: Daniel Brunson (Morgan State, Dept. of Philosophy & Religious Studies)

(1) Meredith Whittaker, "Origin Stories: Plantations, Computers, and Industrial Control"

https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

(2) Widder, West, and Whittaker, "Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI" (2023)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807

Thurs., Oct. 19, 2023: AI and health equity

Introduced by: Gabriella Waters (CEAMLS)

(1)  Hendricks-Sturrup et al., "Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI/ML…" (2023) 

(2) Supplementary: “An Expert Panel Discussion Embedding Ethics & Equity in AI/ML,” Big Data (2023)

Thurs., Oct. 5, 2023: AI and humor

Introduced by: Daniel Brunson (Morgan State, Dept. of Philosophy & Religious Studies)

(1) Thomas Winters, "Computers Learning Humor is No Joke" (2021): https://hdsr.mitpress.mit.edu/pub/wi9yky5c/release/3

(2) Kramer, "The Philosophy of Humor: What Makes Something Funny?" (2022): https://1000wordphilosophy.com/2022/11/20/the-philosophy-of-humor/

(3) Anjum & Lieberman, "Exploring Humor in Natural Language Processing" (2023)

Thurs., Sept. 21, 2023: Explainability of AI/ML systems in healthcare contexts

Introduced by: Phillip Honenberger (CEAMLS)

(1) Keller et al., "Augmenting Decision Competence in Healthcare Using AI-based Cognitive Models" (2020)

(2) Byeon, "Advances in Machine Learning and Explainable Artificial Intelligence for Depression Prediction" (2023)

Thurs., Sept. 7, 2023: Gun-detection software in Baltimore schools; Operational criteria of consciousness in AI

Introduced by: Gabriella Waters (CEAMLS) and Phillip Honenberger (CEAMLS)

(1) Wintrode, "Baltimore County schools add gun detection software to 7,000 security cameras," Baltimore Banner (2023):

https://www.thebaltimorebanner.com/education/k-12-schools/baltimore-county-schools-gun-detection-2KQS5MJJSNFO5LRVC74PYTTWWM/

(2) Lenharo, "If AI becomes conscious, here's how researchers will know," Nature (2023): https://www.nature.com/articles/d41586-023-02684-5

(3) Finkel, "If AI becomes conscious, how will we know?" Science (2023): https://www.science.org/content/article/if-ai-becomes-conscious-how-will-we-know

(4) Supplementary: Butlin & Long et al., “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” (2023): https://arxiv.org/abs/2308.08708

Thurs., Aug. 17, 2023: Big data and health equity

Introduced by: Odia Kane (JHU)

(1) Doerr & Meeder, "Big Health Data Research and Group Harm: the Scope of IRB Review" (2022) 

(2) Tsosie et al., "We Have Gifted Enough: Indigenous Genomic Data Sovereignty in Precision Medicine" (2021) 

Thurs., Aug. 3, 2023: AI and film

Introduced by: Lara Simmons (CEAMLS)

(1) Tong et al., "The Use of Deep Learning and VR Technology in Film and Television Production...", Frontiers in Psychology (2021): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8080441/

(2) Pontefract, "Can Artificial Intelligence Help the Film Industry?" Forbes (2023): https://www.forbes.com/sites/danpontefract/2023/04/24/can-artificial-intelligence-help-the-film-industry-it-already-is/

(3) Smith, "'Of course it's disturbing': Will AI change the film industry forever?" Guardian (2023): https://www.theguardian.com/film/2023/mar/23/ai-change-hollywood-film-industry-concern



Thurs., July 20, 2023: AI ethics frameworks (meta-analysis); analogy between biological and artificial neural networks

Introduced by: Phillip Honenberger (CEAMLS)

(1) Hagendorff, “The Ethics of AI Ethics” (2020)

(2) Macpherson et al., “Natural and Artificial Intelligence” (2021)

Thurs., July 6, 2023

Meeting canceled due to schedule conflicts

Thurs., June 15, 2023: Turing test; large language models; artificial general intelligence (AGI)

Introduced by: Pihlwa Lee (CEAMLS)

(1)  Terry Sejnowski, "Large Language Models and the Reverse Turing Test" (Neural Computation, 2023)



Thurs., June 1, 2023: AI in education; human-AI interaction; research methods for exploring human-AI interaction

Introduced by: Lara Simmons (CEAMLS)

(1) Matt Cronin, “Do advances in AI risk a future of human incompetence?” (The Hill, May 2023)

(2) Heikkilä, “A chatbot that asks questions could help you spot when it makes no sense” (MIT Technology Review, April 2023)

(3) Danry et al., “Don’t Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanation as Questions Improve Human Logical Discernment Accuracy over Causal AI Explanations.” CHI ’23 (Conference on Human Factors in Computing Systems, April 2023)

(4) Jakesch et al., “Co-Writing with Opinionated Language Models Affects Users’ Views.” CHI ’23 (Conference on Human Factors in Computing Systems, April 2023)

Thurs., May 18, 2023: Prompt engineering; ChatGPT

Introduced by: Gabriella Waters (CEAMLS) & William Mapp (CEAMLS)

(1) White et al., "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT" (arXiv preprint, 2023)

(2) Sorensen, Robinson, Rytting, et al., "An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels" (arXiv preprint, 2022)

Thurs., May 4, 2023: ethics of predictive AI applications; ethics of AI-informed decision making; ChatGPT; agency

Introduced by: Phillip Honenberger (CEAMLS; Dept. of Philosophy & Religious Studies, Morgan State)

(1) Desai et al., "Against predictive optimization" (2023): https://predictive-optimization.cs.princeton.edu/

(2) Floridi & Chiriatti, "GPT-3: Its Nature, Scope, Limits, and Consequences" (Minds & Machines, 2020):  https://link.springer.com/article/10.1007/s11023-020-09548-1

(3) Floridi, "AI as Agency Without Intelligence" (Philosophy & Technology, 2023): https://link.springer.com/article/10.1007/s13347-023-00621-y

Thurs., April 20, 2023: gating networks; dynamic mixture of experts models; Wisconsin card sorting task; ANN-brain analogy; ChatGPT; AI “hallucinations”; political reaction to ChatGPT

Introduced by: William Mapp (CEAMLS)

(1) “A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex” (PNAS, 2020): https://www.pnas.org/doi/10.1073/pnas.2009591117

(2) “ChatGPT Invents Sexual Harassment Scandal” (The Sync, 2023): https://thesyncweekly.com/chatgpt-invents-sexual-harassment-scandal/ 

(3) “Italy bans ChatGPT” (The Sync, 2023):  https://thesyncweekly.com/italy-bans-chatgpt/

Thurs., April 6, 2023: generative adversarial networks (GANs); linking brain activity to visual experience via network models; COMPAS algorithm; fairness and bias in AI applications

Introduced by: Gabriella Waters (CEAMLS) and Phillip Honenberger (CEAMLS; Dept. of Philosophy & Religious Studies, Morgan State)

(1) Takagi, Yu, and Shinji Nishimoto, “High-resolution image reconstruction with latent diffusion models from human brain activity” (bioRxiv, 2022)

(2) Larson, Jeff, Surya Mattu, Lauren Kirchner, and Julia Angwin, “How we analyzed the COMPAS recidivism algorithm” (ProPublica, 2016): https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm