
Center for Equitable Artificial Intelligence and Machine Learning Systems


Affiliated Research

Research - Current

AI Assistive Comprehension Assessor (AACA)

PI:
Benjamin Hall

Co-PI(s):
Kofi Nyarko

Dept. or Schools:
Dept. of Electrical and Computer Engineering

Affiliation:
Core Research

Project Period:
N/A

Project Description:
The AI Assistive Comprehension Assessor (AACA) is an innovative web-based tool designed to help educators detect and manage the use of large language models (LLMs) by students in completing essay assignments. This tool is essential in promoting academic integrity by distinguishing between students who use AI for legitimate learning purposes and those who may rely on it for dishonest practices.

Functionality and Features:
Essay Submission and Analysis:
Teachers can upload individual student essays or bulk upload an entire class's essays into the AACA platform.
The software analyzes the content of the essays and generates an XML file.

Canvas Integration:
The XML file produced by AACA can be seamlessly integrated into the Canvas learning management system.
Once uploaded, Canvas automatically creates a quiz based on each student's essay. Each quiz comprises up to five questions directly derived from the essay content.

Automated Quiz Creation and Grading:
Students are required to complete the quiz, which assesses their understanding and engagement with the essay topic.
Canvas provides automatic grading of the quiz, ensuring a streamlined and efficient evaluation process for educators.
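The workflow above (essay in, machine-readable quiz out) can be sketched in a few lines. The AACA's actual XML schema is not published, so the element names, the `build_quiz_xml` helper, and the sample question below are illustrative assumptions only:

```python
import xml.etree.ElementTree as ET

def build_quiz_xml(student_id, questions):
    """Build a minimal quiz XML document from essay-derived questions.

    `questions` is a list of (prompt, answer) pairs; only the first
    five are kept, matching the five-question cap described above.
    """
    quiz = ET.Element("quiz", attrib={"student": student_id})
    for i, (prompt, answer) in enumerate(questions[:5], start=1):
        q = ET.SubElement(quiz, "question", attrib={"id": str(i)})
        ET.SubElement(q, "prompt").text = prompt
        ET.SubElement(q, "answer").text = answer
    return ET.tostring(quiz, encoding="unicode")

# One essay-derived question, ready for upload to the LMS.
xml_doc = build_quiz_xml("s001", [("What is the essay's central claim?", "...")])
```

A real integration would target Canvas's quiz import format rather than this ad hoc layout, but the shape of the pipeline is the same.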

Benefits for Educators:
Efficiency: The ability to upload essays in bulk saves considerable time, allowing professors to manage and assess large volumes of work effortlessly.

Academic Integrity: By creating quizzes based on the submitted essays, the AACA helps in identifying students who may be using AI tools to complete their assignments dishonestly.

Support for Honest Learning: The tool distinguishes between students who genuinely use AI for educational enhancement and those who misuse it, fostering a fair learning environment.

Conclusion:
The AACA offers educators a powerful AI-driven resource to ensure academic honesty and integrity. It provides a balanced approach by not only catching dishonest practices but also supporting students who use AI as a legitimate learning aid. By leveraging the capabilities of the AACA, educators can uphold the standards of academic excellence and fairness in their classrooms.

AI-Driven Autonomous Control Systems for Equitable Decentralized Disaster-Resilient Communication Networks

PI:
Peter Taiwo

Dept. or Schools:
School of Engineering

Affiliation:
Affiliated Faculty

Project Period:
June 2024 - June 2025

Project Description:
Overview
This initiative seeks to harness the power of artificial intelligence to fortify and optimize the resilience and fairness of decentralized communication networks during disasters. By integrating AI for real-time data analysis and multi-agent systems for scenario simulation, this approach enables networks to autonomously perform essential functions such as self-configuration, self-healing, and dynamic rerouting. This technology ensures robust communication capabilities essential for effective emergency management while maintaining interoperability with existing communication infrastructures to facilitate seamless integration and operation.
 
Objectives and Significance
The project aims to develop a scalable, adaptable network infrastructure that can autonomously respond to various disaster scenarios. It focuses on employing machine learning algorithms to predict and mitigate network disruptions before they occur, maintaining continuous connectivity. Rigorous testing under simulated disaster conditions validates the effectiveness of AI-driven interventions, emphasizing real-world applicability and reliability. The project also commits to maintaining high data security and privacy standards, ensuring that sensitive information managed by the systems adheres to stringent privacy regulations.
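Self-healing and dynamic rerouting of the kind described above can be illustrated with a minimal sketch: recompute shortest paths over the surviving topology when a node fails. The `shortest_path` function and the sample mesh are hypothetical stand-ins for the project's AI-driven controller, not its actual design:

```python
from heapq import heappush, heappop

def shortest_path(graph, src, dst, down=frozenset()):
    """Dijkstra over an adjacency dict {node: {neighbor: cost}},
    skipping any failed nodes listed in `down`."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heappop(heap)
        if u == dst:  # reconstruct the route back to the source
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if v in down:
                continue  # route around failed relays
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heappush(heap, (nd, v))
    return None  # destination unreachable

# Toy mesh: if relay "B" fails, traffic from "A" to "D" reroutes via "C".
net = {"A": {"B": 1, "C": 3}, "B": {"A": 1, "D": 1},
       "C": {"A": 3, "D": 1}, "D": {"B": 1, "C": 1}}
```

In a disaster scenario, re-running this computation as nodes drop out is the essence of dynamic rerouting; the AI layer described above would additionally predict which nodes are likely to fail.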
 
Impacts
The anticipated impacts of this project include creating more resilient communication infrastructures capable of sustaining operational efficiency under extreme conditions. By implementing AI and multi-agent systems, the networks will not only reduce response times but also enable more coordinated disaster recovery efforts. This project is expected to influence policy changes and set new standards for disaster resilience, driving technological innovation that facilitates better preparedness and recovery strategies globally. The flexibility and scalability of the technology ensure it can be adapted to various sizes and types of networks, with the goal of enhancing its utility across diverse geographical and demographic contexts.


BiasWatch

PI:
Chukwuemeka Duru

Co-PI(s):
Kofi Nyarko 

Dept. or Schools:
Dept. of Electrical and Computer Engineering

Affiliation:
Core Research

Project Period:
N/A

Project Description:
Developing an online LLM Bias Cataloging system and analytics tool, the project aims to create a comprehensive platform for bias reporting and analysis in large language models (LLMs). BiasWatch is designed to enable users to document bias they experience while interacting with or utilizing LLMs, allowing for the filing of bias reports and the provision of corresponding bias ratings. To substantiate these reports, users are prompted to submit evidence, which may include textual responses or images.

Furthermore, novel metrics have been devised for computing the average bias of large language models (LLMs), considering factors such as the recency of associated reports and the number of users who found the reports useful. These metrics play a crucial role in offering a nuanced understanding of bias trends within LLMs. BiasWatch enables users to sift through numerous reports linked to specific LLMs, utilizing various tags, rating ranges, or complex filtering options (including temporal or spatial information) available on the platform. The results of these filters are presented to users in tabular and graphical formats, providing immediate insights into the performance of different LLMs based on user-provided criteria.
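As a rough illustration of such a metric, the sketch below computes a recency- and usefulness-weighted mean bias rating. The exponential half-life decay and the `upvotes + 1` smoothing are assumed choices for illustration, not BiasWatch's published formula:

```python
def average_bias(reports, half_life_days=30.0):
    """Recency- and usefulness-weighted mean bias rating for one LLM.

    Each report is a (rating, age_days, upvotes) tuple. Newer reports
    and reports more users found useful carry more weight; a report's
    weight halves every `half_life_days`.
    """
    num = den = 0.0
    for rating, age_days, upvotes in reports:
        w = (upvotes + 1) * 0.5 ** (age_days / half_life_days)
        num += w * rating
        den += w
    return num / den if den else 0.0
```

With this weighting, a ten-month-old report contributes almost nothing next to a fresh one, which matches the stated goal of tracking bias trends over time.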

In addition to browsing reports, registered users can engage with the platform by upvoting reports deemed valuable or by adding relevant tags for enhanced categorization. BiasWatch is poised to evolve into the foremost repository of bias reports on popular LLMs, facilitating continuous monitoring of their impact on daily life and critical infrastructure. Through ongoing development and expansion, BiasWatch aims to remain at the forefront of bias awareness and mitigation efforts in the realm of LLMs.

A Data-Driven Exploration of Socioeconomic Influences on Urban Mobility: Enhancing Gender Equity in Maryland's Transportation Systems

PI:
Mehdi Shokouhian

Co-PI(s):
Zeinab Bandpey

Dept. or Schools:
Dept. of Civil & Environmental Engineering

Affiliation:
Affiliated Faculty

Project Period:
May 2024 - May 2025

Project Description:
Transportation networks are the backbone of urban centers, serving as the vital connection for all forms of activities, whether economic or social, within a bustling city. They play a dual role, influencing and being influenced by urban expansion. By facilitating access to healthcare, education, and employment opportunities, transport systems contribute significantly to the quality of life and personal well-being. This accessibility enhances productivity and drives economic development [1]. The design of cities inherently reflects gender biases, affecting daily mobility in ways that differ markedly between men and women [2]. Women's and girls' movements are often restricted by the fear of physical or sexual violence in public spaces and while using public transit, underscoring a significant barrier to their freedom of movement [3]. Factors such as the placement of bus stops and the adequacy of street lighting can have a substantial impact on the safety and mobility of women. Furthermore, women and girls typically undertake trips that involve multiple stops for household errands and other activities that reflect gender-specific roles [4, 5]. Urban areas frequently have a higher proportion of households led by women, who are more likely to be employed in low-wage or informal sectors than men and often have less access to transportation benefits [6]. Existing research has delved into the intricate relationship between gender and mobility, particularly focusing on how transport mode preferences vary by gender. A notable contribution by Robin Law in 1999 provides a comprehensive review of studies on gender and mobility that date back to the early 1970s, covering a wide array of fields from social sciences to geography and environmental studies [7]. 
Over the last thirty years, feminist researchers have amassed substantial evidence to support the notion that mobility patterns—such as destinations, speed, and frequency—are significantly influenced by gender [8, 9, 10, 11, 12, 13, 14]. 

This research employs Machine Learning (ML) techniques to analyze data from the American Community Survey (ACS) and Replica, aiming to reveal gender inequalities in commuting patterns in Maryland. It extends its inquiry to a wide array of variables, including demographics such as age, gender, race, ethnic groups, and family status; income classes; transportation factors such as modes of transportation, travel time, miles traveled, and distinctions across urban, suburban, and rural neighborhoods; and considerations of ability, including people with disabilities. The primary goal of this research is to develop a data-driven predictive model that analyzes gender-based transportation equity in Maryland. This objective will be achieved through the comprehensive collection and analysis of census and Replica data tailored to the region. Once collected, the data will be segmented by gender (male, female, and non-binary categories) to identify and quantify any existing inequalities in commuting patterns. The Gini coefficient will be used to calculate equity indices from the gathered data, evaluating equality in the distribution of transportation access and costs among the different gender groups. Lower values on these indices will serve as a quantifiable indicator of a more equitable transportation system. In the next step, the study will examine how these inequalities impact access to crucial services such as employment, healthcare, and education.
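The Gini-based equity index mentioned above can be computed with a short stdlib-only sketch; the per-group values fed in (for example, average commute times by gender group) are hypothetical:

```python
def gini(values):
    """Gini coefficient of a list of non-negative values.

    0 means a perfectly equal distribution; values near 1 indicate
    that access (or cost burden) is concentrated in a few groups.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard sorted-values form of the Gini index.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n
```

For instance, four groups with identical access yield an index of 0, while access concentrated entirely in one group yields 0.75, so driving the index toward zero is the quantifiable target the project describes.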

This research makes a significant impact by providing a comprehensive analysis of gender-based transportation equity in Maryland, utilizing advanced Machine Learning (ML) techniques. Its effectiveness pivots on its dual ability to examine current inequities in commuting and forecast future shifts, laying a solid, data-driven groundwork for actionable improvements in policy and infrastructure. Specifically, the ML approach allows for an examination of complex commuting data, uncovering patterns not easily detected through traditional methods. This capability is crucial for predicting how policy alterations might influence commuting behaviors, offering targeted insights for urban planners and policymakers aimed at addressing gender inequalities.

References:  
1. Kunieda, M., & Gauthier, A. (2007). Gender and urban transport: fashionable and affordable. Sustainable transport: A sourcebook for policy-makers in developing cities. Eschborn: GTZ.
2. Gauvin, L., Tizzoni, M., Piaggesi, S., et al. (2020). Gender gaps in urban mobility. Humanities and Social Sciences Communications, 7(11). https://doi.org/10.1057/s41599-020-0500-x
3. Loukaitou-Sideris, A. (2014). Fear and safety in transit environments from the women’s perspective. Security Journal, 27(2), 242–256.
4. Brown, D., McGranahan, G., & Dodman, D. (2014). Urban informality and building a more inclusive, resilient, and green economy. International Institute for Environment and Development (IIED), London.
5. Ng, W.-S., & Acker, A. (2018). Understanding urban travel behaviour by gender for efficient and equitable transport policies. International Transport Forum Discussion Paper.
6. Tacoli, C. (2012). Urbanization, gender and urban poverty: Paid work and unpaid carework in the city. International Institute for Environment and Development (IIED).
7. Law, R. (1999). Beyond ‘women and transport’: towards new geographies of gender and daily mobility. Progress in Human Geography, 23(4), 567-588.
8. Cresswell, T., & Uteng, T. P. (2016). Gendered mobilities: Towards an holistic understanding. In Gendered Mobilities (pp. 15–26). Routledge.
9. Lucas, K., Stokes, G., Bastiaanssen, J., & Burkinshaw, J. (2019). Inequalities in mobility and access in the UK transport system. Future of Mobility: Evidence Review, Government Office for Science.
10. Bassolas, A., Barbosa-Filho, H., Dickinson, B., Dotiwalla, X., Eastham, P., Gallotti, R., Ghoshal, G., Gipson, B., Hazarie, S. A., Kautz, H., Kucuktunc, O., Lieber, A., Sadilek, A., & Ramasco, J. J. (2019). Hierarchical organization of urban mobility and its connection with city livability. Nature Communications, 10(1), 4817. https://doi.org/10.1038/s41467-019-12809-y
11. Blondel, V. D., Decuyper, A., & Krings, G. (2015). A survey of results on mobile phone datasets analysis. EPJ Data Science, 4(1), 10.
12. Alexander, L., Jiang, S., Murga, M., & González, M. C. (2015). Origin–destination trips by purpose and time of day inferred from mobile phone data. Transportation Research Part C: Emerging Technologies, 58, 240-250.
13. Schneider, C. M., Belik, V., Couronné, T., Smoreda, Z., & González, M. C. (2013). Unravelling daily human mobility motifs. Journal of The Royal Society Interface, 10(84), 20130246.
14. Loukaitou-Sideris, A. (2020). A gendered view of mobility and transport. Engendering cities: designing sustainable urban spaces for all, 2. 
 

Empowering Blockchain Applications with Adaptive Reasoning and AI: Towards Advanced Capabilities, Security and Decision-Making

PI:
Peter Taiwo

Co-PI(s):

Dept. or Schools:
Dept. of Electrical and Computer Engineering

Affiliation:
Affiliated Faculty

Project Period:
June 2024 - June 2025

Project Description: 
Overview
The integration of blockchain technology is significantly improving data management reliability and transparency in sectors such as healthcare, finance, energy, and logistics. However, blockchain also introduces vulnerabilities that could undermine the integrity of critical infrastructures and transactions, exacerbated by the rise of deepfakes which threaten digital authenticity. Addressing these concerns, recent developments in AI-enabled decentralized frameworks underscore the importance of employing blockchain's smart contracts for robust data handling to improve reliability. Our research focuses on strengthening these digital transaction tools by incorporating adaptive reasoning for dynamic decision-making and addressing security issues like reentrancy attacks and unauthorized access. By developing a scalable AI-driven security model applicable across various sectors, this initiative aims to fortify foundational technologies and shape AI policies for more secure digital environments, thereby broadening blockchain’s utility.

Objectives and Significance
We aim to develop robust AI-driven models that leverage blockchain technology to support decentralized operations and secure digital transactions. These models are designed to improve anomaly detection and risk mitigation, facilitating dynamic decision-making across various digital environments. This includes implementing machine learning techniques and multi-criteria decision-making frameworks to boost security and operational transparency in financial services and public electric power systems.
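As a deliberately simple stand-in for the ML-based anomaly detection described above, the sketch below flags transactions whose amounts deviate sharply from the mean. The z-score rule and threshold are illustrative assumptions, not the project's models:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the indices of transactions whose amount lies more than
    `threshold` sample standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]
```

In practice such a detector would sit alongside smart-contract guards (for example, reentrancy checks), with the learned models replacing this fixed statistical rule.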

Impacts
By improving the security and functionality of smart contracts, particularly within public electric power systems and financial services, we expect to drive technological adoption and influence policy standards. The anticipated benefits include more resilient public infrastructure systems, reduced financial transaction risk, and a sustainable energy use framework. Our work will establish new benchmarks for secure and efficient blockchain applications, ultimately fostering trust and broadening the technology's utility across industries.

Excellence in Research: Building an Equitable and Sustainable Logistics System in Rural Areas with Drone

PI:
Ziping Wang, Information Systems

Co-PI(s):
Kofi Nyarko, Electrical Engineering
Xiazheng He

Dept. or Schools:
INNS
Electrical & Computer Engineering

Total Funding:
$423,102

External Funding Agency:
NSF

Affiliation:
Affiliated Faculty

Project Period:
September 2022 - August 2025

Exploring Algorithmic Bias in Conversational AI

PI:
Naja Mack

Co-PI(s):

Dept. or Schools:
Dept. of Computer Science

Affiliation:
Affiliated Faculty

Project Period:
November 2022 - November 2025

Investigating Algorithmic Bias in Virtual Reality

PI:
Naja Mack

Co-PI(s):

Dept. or Schools:
Dept. of Computer Science

Affiliation:
Affiliated Faculty

Project Period:
August 2023 - July 2024 


Long-Term, High-Resolution Urban Aerosol Database for Research, Education and Outreach

PI:
Xiaowen Li

Co-PI(s):
Kofi Nyarko

Dept. or Schools:
School of Computer, Math & Natural Sciences

Affiliation:
Core Research

Project Period:
April 2023 - March 2026

Project Description:
The project combines the AI/ML expertise and aerosol science research capability at Morgan State University (the lead institution) with collaboration and support from two local universities and subject matter experts at NASA/GSFC to produce a targeted, no-gap, long-term, high-resolution, open-access, and user-friendly database of both Aerosol Optical Depths and PM2.5 surface concentrations, focusing on the Baltimore-Washington area. We will use innovative data analysis algorithms, including artificial intelligence and machine learning methods, to stitch different data sources together through retrieval algorithms with quantitative error analysis. In addition to enhancing aerosol scientific research, the database will also be used in classroom teaching and scientific outreach, accompanied by online tools and educational materials.
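Stitching overlapping data sources with quantitative error analysis can be illustrated by an inverse-variance weighted merge, a standard technique for combining measurements with known uncertainties. The data layout and `merge_observations` helper here are assumptions; the project's actual retrieval algorithms are far more involved:

```python
def merge_observations(obs_a, obs_b):
    """Merge two observation series keyed by timestamp.

    Each series maps time -> (value, sigma). Where both sources report,
    the inverse-variance weighted mean is taken and the combined
    uncertainty propagated; otherwise the available source fills the gap.
    """
    merged = {}
    for t in sorted(set(obs_a) | set(obs_b)):
        pairs = [s[t] for s in (obs_a, obs_b) if t in s]
        weights = [1.0 / sigma ** 2 for _, sigma in pairs]
        value = sum(w * v for w, (v, _) in zip(weights, pairs)) / sum(weights)
        sigma = (1.0 / sum(weights)) ** 0.5  # combined standard error
        merged[t] = (value, sigma)
    return merged
```

The weighting means a low-uncertainty satellite retrieval dominates a noisy ground estimate at the same timestamp, while either source alone still closes a gap, which is how a "no-gap" record is assembled.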

A Methodology for the Development of Cognitive Twins to Predict Behaviors & Bias

PI:
Gabriella Waters

Co-PI(s):
Justin Bonny
Kofi Nyarko

Dept. or Schools:
Dept. of Psychology

Affiliation:
Core Research

Project Period:
November 2022 - November 2025

Project Description:
Introduction
Predicting behaviors and bias is an area of great importance as we continue to observe negative health outcomes for certain populations, lethal interactions with law enforcement, inequitable punishments in educational settings, the performance of soldiers, and more. The ability to predict and potentially mitigate the effects of bias and other behaviors has far-reaching impacts on the way our society functions, and the overall safety of every citizen. This research seeks to develop machine learning models that will be trained on datasets from various industries to mimic the choices and predict the behavior and/or potential bias of individuals.

To be successful, this research requires collaboration between two departments at the university: Computer Science and Psychometrics. Cognitive science/psychometrics and computer science/engineering are the vital components necessary to create the appropriate behavioral assessment tools, develop the correct ML algorithms, and deploy and test the cognitive twins created from the datasets.

Problem Statement
Industries as a whole have agreed that bias is not desirable and that the ability to predict behaviors is valuable. Many organizations employ personality tests and other types of behavioral assessments to determine how a team member will perform, or to make assumptions on what to expect when interacting with them. These assessments are typically completed with responses that participants may feel are appropriate and are not necessarily accurate representations of a person’s actual beliefs.

How can we reveal the most authentic behavior profile of a person?
How can we target bias and predict it?
What are the limitations of current technology and behavioral assessments?

Objective
The long-term goal of this research is to develop cognitive twins that predict individual behaviors and biases. These cognitive twins will be trained to think like and respond as the individuals they were trained on. The inference process of ML models will be visualized for explainability, debugging/improvements, comparison & selection, and teaching concepts. Model architecture, learned parameters, and model metrics are the main areas of focus for visualization both during and after training. The resultant twins will be tested in various scenarios and their inferences and performance will be analyzed. Model training and model inference will guide refinements as appropriate.

NRT AI: Artificial intelligence for Changing Climate and Environmental SuStainability (ACCESS)

PI:
Samendra Sherchan

Co-PI(s):
Chunlei Fan
Kofi Nyarko
Md Mahmudur Rahman
Donghee Kang

Dept. or Schools:
School of Computer, Mathematical and Natural Sciences

Total Funding:
$2,995,210

Funding Agency:
NSF NRT (NSF Research Trainee Program)

Affiliation:
Affiliated Faculty

Project Period:
March 2023 - June 2028

Project Description:
Climate change is a pressing issue with major implications for societal well-being, particularly for disadvantaged communities. Climate-related extreme events such as hurricanes, flooding, drought, heatwaves, and wildfires are escalating around the world. Machine-learning (ML) based Artificial-Intelligence (AI) approaches have shown great promise in mitigating and responding to climate change. However, climate change research is fragmented across diverse disciplines (computer science, data science, AI, geosciences, environmental science, and engineering). This fragmentation delays progress toward a better understanding of the impacts of climate change and of ML/AI solutions. This National Science Foundation Research Traineeship award to Morgan State University will provide substantive, hands-on research experience for students from underrepresented minority populations who can tackle grand environmental challenges using interdisciplinary methods. The traineeship will prepare 50 PhD students, including 25 funded trainees, from diverse fields (bio-environmental science, computer science, civil engineering, mathematics, and electrical and computer engineering) with the technical, interdisciplinary, and professional skills to responsibly solve grand climate change challenges.

The traineeship program places a strong emphasis on a convergence research approach to climate change, a pressing 21st-century challenge. Major research efforts will focus on three areas: a) AI for water reuse; b) emerging contaminants and ML/AI prediction; and c) disease ecology, climate change, and AI. The program consists of an interdisciplinary team of environmental chemists, environmental scientists, computer scientists, and engineers focused on research, educational, and career development activities. Over a five-year period, the career and scientific activities will include: a) mentored research theses; b) advanced experimental courses; c) a series of professional workshops on leadership, scientific ethics, and science communication skills; d) domestic internship opportunities; e) summer workshops on changing climate and solutions; and f) international research internships. Trainees will work in teams to solve real-world environmental challenges, and a seminar series with invited distinguished speakers, together with professional development activities, will help foster a friendly, collaborative, and inclusive learning environment. The outcome will be a cohort of students from groups historically underrepresented in STEM fields with a strong multidisciplinary background and an understanding of how AI can provide solutions for a changing climate, environmental pollution, and water quality management. The goal is to provide a rewarding opportunity for all trainees to conduct novel, hands-on citizen science research at Morgan State under the guidance of diverse faculty and postdocs. A second major goal is to increase the participation of students from minority groups and stimulate their interest in pursuing future careers in STEM fields.




The Path to Absolution - Neuroscience and Recidivism

PI:
Micah Brown

Co-PI(s):

Dept. or Schools:
External

Affiliation:
External collaborator

Project Period:
N/A


Physics-informed neural networks based on fixed-stress splitting iterative method for solving poroelastic model

PI:
Mingchao Cai

Co-PIs:

Dept. or Schools:
School of Engineering

Affiliation:
Affiliated Faculty

Project Period:
March 2023 - 

Deliverables Completed:
Publications:
Physics-Informed Neural Networks and Fixed-Stress Splitting For Biot's Model Solution 

Using Conversational AI to Assist Technical Interview Preparation

PI:
Edward Dillon

Co-PI(s):

Dept. or Schools:
Dept. of Computer Science

Affiliation:
Affiliated Faculty

Project Period:
January 2024 - January 2025 


Research - Completed 2024

AI/ML Student and Teacher Enrichment Program (STEP): A Secondary and Undergraduate STEAM Project

PI:
Cecilia Wright Brown

Co-PI(s):
Kevin Peters

Dept. or Schools:
Civil Engineering and Center for Excellence in Mathematics and Science Education (CEMSE)

Affiliation:
Affiliated Faculty 

Project Period:
March 2023 - February 2024


Ethical Considerations of AI-Driven Decision-Making Systems on Marginalized Communities

PI:
Dawn Thurman

Co-PI(s):
Rhonda Wells-Wilbon

Dept. or Schools:
School of Social Work

Affiliation:
Affiliated Faculty

Project Period:
March 2023 - February 2024

Exploring the Ethical Implications of Artificial Intelligence in Social Work Education

PI:
Dawn Thurman

Co-PI(s):
Rhonda Wells-Wilbon

Dept. or Schools:
School of Social Work

Affiliation:
Affiliated Faculty

Project Period:
March 2023 - February 2024

Fairness in and beyond algorithms: a science studies approach to fair ML

PI:
Phillip Honenberger

Co-PI(s):

Dept. or Schools:
Dept. of Philosophy & Religious Studies

Affiliation:
Affiliated Faculty

Project Description:
Project Motivation:
“AI Ethics” is now a major research area, linking STEM, humanities, and social science researchers, as well as (in aspiration at least) policy makers and the wider public. Despite some commonality of concern, however, approaches to AI ethics often exhibit substantial differences along disciplinary lines. The topic of algorithmic justice – equity, fairness, bias, and the like – is an instructive case. Some researchers pursue algorithmic methods for reducing bias and improving fairness (recounted in Barocas et al. 2019), or wrestle with quantitatively-expressible trade-offs between different criteria of fairness (Kleinberg 2016) or fairness in short-term versus long-term applications (Liu et al. 2018). Others, however, argue that efforts to achieve fairness primarily through adjustments to algorithms themselves can ignore, obfuscate, or even perpetuate social factors responsible for unfairness (Fazelpour & Lipton 2020). Relatedly, some call for a decentering from emphasis on algorithms within AI ethics in favor of a richer social and experiential contextualization of AI/ML and its applications (Birhane et al. 2022).
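The quantitative fairness criteria at issue in this debate are easy to state in code. The sketch below computes two of them per group: the positive-prediction rate (as in demographic parity) and the false-positive rate (as in equalized odds), criteria that Kleinberg et al. (2016) show cannot in general be equalized simultaneously. The `group_rates` helper and the data layout are illustrative assumptions:

```python
def group_rates(labels, preds, groups):
    """Per-group positive-prediction rate and false-positive rate.

    `labels` and `preds` are 0/1 lists; `groups` assigns each example
    to a demographic group. Comparing the rates across groups exposes
    disparities under two distinct fairness criteria.
    """
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        pos_rate = sum(preds[i] for i in idx) / len(idx)
        neg = [i for i in idx if labels[i] == 0]  # true negatives in group g
        fpr = sum(preds[i] for i in neg) / len(neg) if neg else 0.0
        out[g] = {"positive_rate": pos_rate, "false_positive_rate": fpr}
    return out
```

The point of the internalist-and-externalist debate is precisely that equalizing numbers like these is necessary but not sufficient: the social processes generating `labels` and `groups` lie outside the algorithm.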

Within this framework of debate, the methodological resources of science studies – including under that heading the traditions of history and philosophy of science (HPS) and science, technology, and society (STS), among others – offer distinct advantages. Since its inception with figures like G. Sarton, A. Koyré, R. Merton, L. Fleck and others, science studies researchers have adopted a dual perspective: (1) internalist: close and sympathetic study of technical details of the sciences they seek to understand and illuminate; and (2) externalist: a variety of philosophical, historical, sociological, and interpretive lenses through which such details can be seen in their larger significance, including their connections (both as effect and cause) with social factors, events, and processes. Science studies researchers often (if controversially) take up the challenge of integrating these potentially opposed perspectives.

In this project, the PI, an experienced science-studies researcher with a background in computer-assisted data analysis, will engage the topic of fairness and bias in ML algorithms from just such a dual internalist-and-externalist perspective (captured in the catchphrase “in and beyond algorithms”), seeking an original and integrated interpretive standpoint.

If the discussion of bias and fairness in AI ethics is to avoid a methodological sundering, such that social scientists, humanists, and ML researchers cease to speak the same language or seek answers to their questions in a way that can provide consistent policy guidance, then some such common standpoint or methodological framework as that offered by science studies must be cultivated. This project establishes and builds on this integrative standpoint through three mutually-reinforcing components: publication, teaching, and research community (described further in next section). 

Deliverables Projected:
(1) A research paper on bias and fairness in ML that showcases an “in-and-beyond algorithms” approach to the topic. The central anticipated feature of the resulting paper is engagement with both the technical aspects of ML design processes and specific algorithms, on the one hand, and the personal and social factors that have informed these and are affected by them, on the other (as, for instance, in Ensmenger 2012).

(2) Development (in Fall 2023) and pilot teaching (in Spring 2024) of a new lower-division undergraduate course at Morgan State that focuses on AI, Machine Learning, and Data Science literacy – that is, that provides an introduction and overview of these topics for non-specialists, including critical and informed discussion of their more controversial aspects. The focus of the course will be twofold: (i) understanding the contemporary capabilities and functioning of AI/ML/data science at a technical level (albeit rudimentary and without prerequisites) through learning the history of these technologies; and (ii) critically engaging with the ethical and political ramifications and risks of these technologies, including issues of bias, fairness, responsibility, safety, privacy, transparency, and accountability.

(3) A working group on “humanities and social science of AI” that will meet twice monthly, in hybrid format (Zoom and in-person), to discuss recent papers on the ethics, epistemology, sociology, and politics of AI/ML. A special emphasis will be placed on simultaneously engaging technical details of specific algorithms or processes, and the broader social, political, and interpretive contextualization available from humanities and social science methods. A major aim of the group will be to draw participants from STEM as well as humanities and social science departments, facilitating cross-disciplinary learning. 

PI Biography:
Phillip Honenberger holds a PhD in Philosophy from Temple University. His work has appeared in Studies in History and Philosophy of Science, Biology & Philosophy, and Synthese, among other forums. His research has been funded by the National Science Foundation and the Consortium for History of Science, Technology, and Medicine, among others. He has also worked as MySQL database designer for several data analysis projects.

Works cited:
Barocas, S., M. Hardt, & A. Narayanan. 2019. Fairness and Machine Learning. https://fairmlbook.org/
Birhane, A. et al., 2022. “The Forgotten Margins of AI Ethics.” 2022 ACM Conference on Fairness, Accountability and Transparency (FAccT ’22).
Chouldechova, A. 2017. “Fair prediction with disparate impact.”
Ensmenger, N. 2012. “Is chess the drosophila of AI?: A social history of an algorithm.” Social Studies of Science 42 (1): 5-30.
Fazelpour, S. and Lipton, Z. 2020. “Algorithmic fairness from a non-ideal perspective.” AIES 2020.
Kleinberg, J., S. Mullainathan, M. Raghavan. 2016. “Inherent Trade-offs in the Fair Determination of Risk Scores.”
Liu, L. et al. 2018. “Delayed impact of fair machine learning.” Proceedings of the 25th International Conference on Machine Learning.

Deliverables Completed:
(1) Honenberger, Ola, Mapp, and Lee, "Effects of Matching on Evaluation of Accuracy, Fairness, and Fairness Impossibility in AI-ML Systems," Proceedings of FLAIRS-37 (2024)
(2) Waters, Mapp, and Honenberger, "Decisional Value Scores: A New Family of Metrics for AI-ML" (AI & Ethics, 2024)
(3) Honenberger, "Fairness Impossibility in AI-ML Systems: An Integrative Ethics Approach," forthcoming in V.C. Mueller et al. (eds.), Philosophy of AI: the State of the Art (Springer, 2024)
(4) AIM-LIFT Reading Group

Testing Claims using Mechanisms for Trustworthy AI Development

PI:
Mulugeta Dugda

Co-PI(s):

Dept. or Schools:
Dept of Electrical and Computer Engineering

Affiliation:
Affiliated Faculty 

Project Period:
March 2023 - February 2024


Research - Completed 2023

AI and Physics-informed Machine Learning Applications in Communication Systems

PI:
Arlene Cole-Rhodes, Electrical Engineering

Co-PI(s):

Dept. or Schools:
School of Engineering

Affiliation:
Affiliated Faculty

Project Period:
November 2022 - November 2023


AI for Personalized Medicine

PI:
Fahmi Khalifa

Co-PI(s):

Dept. or Schools:
Dept. of Electrical & Computer Engineering, School of Engineering

Affiliation:
Affiliated Faculty

Project Period:
November 2022 - November 2023

Project Description:
Recent advances in artificial intelligence (AI) have significantly impacted various fields, especially medical image analysis for patient care. State-of-the-art (SOTA) AI tools analyze, fuse, and integrate medical images, data, and biomarkers to assess organ function. The need for physicians to provide patients with meaningful information about AI-rendered decisions has led to the development of explainable AI. This approach aims to improve understanding, justify decisions, introduce trust, and reduce bias. Trustworthy and explainable AI is emerging as a promising field for high-quality healthcare, offering human-comprehensible solutions for disease diagnosis, predictions, and recommended actions.

The primary goal of this project is to establish a multidisciplinary research program that integrates AI and Big Data in medicine to enhance research and workforce capabilities. Objectives include improving trust and reducing analysis bias, stimulating system design discussions, evaluating novel explainable AI for better disease diagnostics and prognostics, and enhancing research capacity at Morgan State University (MSU). The program also aims to encourage underrepresented students in STEM to engage in biomedical research addressing health disparities and minority health, and to prepare the next generation of minority researchers for AI/ML research.

The project seeks to translate SOTA AI/ML into practice through various applications, including AI Big Data for personalized medicine (PM). Personalized treatments based on individual medical data can reveal appropriate intervention targets and strategies, improving wellness and reducing healthcare costs. For example, AI can enhance the grading of age-related macular degeneration (AMD), a major cause of blindness in older adults. An interpretable diagnostic AI system could improve early AMD diagnosis, assess disease progression, and predict AMD advancement, enabling early intervention and better disease management. Another application is breast cancer diagnosis: identifying women at high risk, particularly those in underrepresented populations, and guiding personalized screening, thereby increasing overall accuracy and helping to address health disparities and mitigate bias in breast cancer biopsy recommendations and outcomes.

Algorithmic bias detection and fairness benchmarking for cloud-based AI and Machine Learning systems

PI:
Peter Taiwo, Electrical & Computer Engineering

Co-PI(s):

Dept. or Schools:
School of Engineering

Affiliation:
Affiliated Faculty

Project Description:
Overview:
Algorithmic bias in cloud-based AI applications is not just a theoretical concern but a significant challenge across diverse applications, including facial recognition, financial decision-making, public resource allocation, and medical diagnostics. This project is dedicated to developing methods to detect, characterize, and address these biases. By employing diverse analytical tools, we aim to assess and mitigate disparities within these AI systems, thereby ensuring equitable and transparent decisions. Our research also combines comprehensive fairness benchmarks with innovative techniques such as model reprogramming to enhance AI functionality where direct model access is limited, making this work a crucial step in the fight against algorithmic bias.

Objectives and Significance:
Our primary objective is to standardize and refine methods for detecting algorithmic biases and identifying their sources across various AI systems, promoting consistency in fairness evaluations. By first detecting biased models and then analyzing their specific characteristics, we aim to explore model reprogramming as a method to remotely modify these models and enhance their fairness. This strategic approach strengthens the integrity of AI systems and meets emerging regulatory demands for transparency and accountability in automated decision-making.

Impacts:
Establishing industry standards for detecting and addressing algorithmic biases will fundamentally advance ethical AI practice. These efforts will support the development of AI systems that operate equitably across a wide range of applications, reducing bias and enhancing societal trust in technology. They are also expected to influence policy, set standards, and provide educational resources and tools that data scientists and developers can use to build more equitable AI systems.
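As a minimal illustration of the kind of fairness benchmarking described above, the sketch below computes two widely used group-fairness metrics, the demographic parity difference and the true-positive-rate (equal opportunity) gap, on synthetic data. The function names and data are illustrative assumptions for this sketch, not part of the project's actual tooling.

```python
# Two common group-fairness benchmarks, computed over binary predictions.
# All data here is synthetic and for illustration only.

def demographic_parity_diff(preds, groups):
    """Difference in positive-prediction rates between group 1 and group 0."""
    def rate(g):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(group_preds) / len(group_preds)
    return rate(1) - rate(0)

def tpr_gap(preds, labels, groups):
    """Difference in true-positive rates (equal opportunity) between groups."""
    def tpr(g):
        # Predictions for members of group g whose true label is positive.
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return tpr(1) - tpr(0)

# Synthetic predictions, ground-truth labels, and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(preds, groups))  # -0.5: group 1 receives fewer positive predictions
print(tpr_gap(preds, labels, groups))          # -0.5: group 1 has a lower true-positive rate
```

In a full benchmark, metrics like these would be computed across many models and demographic slices; values near zero indicate parity, while large gaps flag a model for closer inspection.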

Building for Health Equity through Artificial Intelligence and Machine Learning at Morgan State University

PI:
Kim Sydnor

Co-PI(s):
Kofi Nyarko

Dept. or Schools:
School of Community Health & Policy
Dept. of Electrical & Computer Engineering

Total Funding:
$204,830

Agency:
NIH

Affiliation:
Affiliated Faculty

Project Period:
August 2022 - September 2023

Characterization of health disparities in African ancestry and reduction of algorithmic bias

PI:
Pilhwa Lee, Mathematics

Co-PI(s):
Daniel Brunson, Philosophy & Religious Studies
Kofi Nyarko, Electrical Engineering

Dept. or Schools:
School of Computer, Mathematical & Natural Sciences

Total Funding:
$300,939

Agency:
NIH

Affiliation:
Affiliated Faculty

Project Period:
July 2022 - June 2023


Identification of Data and Algorithmic Bias in ML

PI:
Onyema Osuagwu

Co-PI(s):

Dept. or Schools:
Dept. of Electrical and Computer Engineering

Affiliation:
Affiliated Faculty

Project Period:
November 2022 - November 2023