SoCJ’s Li to Study Impact of AI-Generated Science and Health Misinformation on Racial Minorities

Misinformation can harm people’s health as they find and act on information designed to trick and mislead them online. Worse still, misinformation generated by artificial intelligence (AI) is becoming more prevalent and harder to detect, exacerbating the negative effects of human-generated misinformation.

And racial groups whose health is already vulnerable, including Black and Hispanic populations, are most likely to be harmed, as AI-generated misinformation is often crafted specifically to engage — and mislead — them.

All of this seems clear from a growing body of research. What is less clear is what can be done about it.

Wenbo Li, an assistant professor of science communication at the Stony Brook School of Communication and Journalism (SoCJ), aims to do just that, thanks to a seed grant from the university’s Office of the Vice President for Research.

“Vulnerable and minority populations too often face disparate and negative health outcomes, and artificial intelligence and misinformation are making things worse,” said Laura Lindenfeld, dean of the SoCJ and executive director of the Alda Center for Communicating Science. “Studies like Wenbo’s will help us to better understand how these individuals and groups understand and interact with online information about health and science, and, as important, to develop strategies and tools that reach and support them. As science communication researchers, we must take an active role in empowering people to make choices that support their wants, needs and goals.”

Li’s study will consist of two phases.

First, he will conduct a survey to better understand how social and individual factors shape respondents’ reactions to AI-generated misinformation, as well as their general understanding of scientific and medical issues.

Second, he will devise and test several interventions in which people reflect in different ways on the impact of misinformation. From this phase, Li hopes to identify strategies that are more effective in helping people recognize potential misinformation and learn to seek accurate information on their own.

“AI-generated misinformation is here to stay, and we already see its potential to adversely impact the health of vulnerable populations,” said Li. “This seed grant will help us begin to understand how we can create and share effective tools that empower people to combat misinformation, and it will set the stage for further research into more effective and inclusive science and health communication and outreach.”

The new study builds on Li’s growing body of research that explores the social impacts of artificial intelligence and social and online media.