AI Research Focuses on Detecting Propaganda and Fallacies in Digital Media

Stony Brook researchers win a seed grant to help artificial intelligence better detect false arguments and subtle persuasion techniques in digital media.


Ritwik Banerjee and Ruobing Li

Stony Brook, NY, Apr 28, 2025 — Misinformation spreads rapidly on the internet, often fueled by emotional appeals and flawed reasoning. We’ve all encountered content that provokes anger, leads to heated debates, or appears too sensational to be true. These posts and comments usually rely on persuasive techniques, like emotional manipulation or logical fallacies, including “whataboutism” (deflecting criticism by accusing others of the same behavior) or the “appeal to the majority” (suggesting something is true simply because it is widely believed). Such tactics complicate public discourse, making it more challenging to distinguish fact from fiction.

A team of researchers at Stony Brook, led by Research Assistant Professor Ritwik Banerjee of the Department of Computer Science, is exploring how AI can be used to detect these manipulative techniques. While artificial intelligence has made significant strides in tasks such as language generation and fact-checking, it still struggles to identify these complex patterns of persuasion and flawed reasoning. For instance, current systems find it difficult to recognize when an argument distracts from the main issue or relies on emotional appeals rather than logical reasoning.


The team is addressing this gap with a multi-step method that digs into how AI, specifically large language models (LLMs), functions. Banerjee said, “We’ll begin by analyzing the existing models, studying how AI learns language, and identifying the parts it tends to overlook.” By examining the attention mechanisms inside these systems, the team will pinpoint which parts of a text the AI ignores when trying to detect persuasion techniques. This detailed analysis will allow them to build more effective AI models capable of spotting subtle propaganda and reasoning flaws.
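The announcement does not describe the team’s tooling, so the sketch below is only a rough illustration of what inspecting attention can look like in practice. It assumes the Hugging Face transformers library and a generic pretrained encoder (bert-base-uncased), neither of which is confirmed by the researchers, and simply estimates how much attention each token in a short “whataboutism”-style sentence receives.

```python
# Illustrative sketch only: the model, library, and analysis are assumptions,
# not the Stony Brook team's actual method.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

text = "Why criticize us? Everyone else does exactly the same thing."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, heads, seq_len, seq_len).
# Average over layers and heads, then sum over query positions to estimate
# how much attention each token *receives* from the rest of the sentence.
attn = torch.stack(outputs.attentions).mean(dim=(0, 2))  # (batch, seq, seq)
received = attn[0].sum(dim=0)                            # (seq_len,)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for tok, score in sorted(zip(tokens, received.tolist()), key=lambda x: x[1]):
    print(f"{tok:>12s}  {score:.3f}")
```

Tokens that consistently receive little attention in examples like this are candidates for the kind of “overlooked” regions the researchers describe.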

A major challenge in this work is that misinformation often relies on context, such as cultural or social cues, that current models fail to fully understand. To overcome this, Banerjee’s team is expanding the scope of the data used to train AI models. Instead of focusing on a single piece of text in isolation, they will integrate broader cross-document context that captures multiple perspectives and richer social signals. This approach aims to improve the model’s ability to grasp the nuances of persuasive communication across different platforms and cultures.
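As another non-authoritative illustration, one simple way to give a model cross-document context is to present a post together with related coverage before classifying it. The example below assumes an off-the-shelf zero-shot classifier from Hugging Face and a hand-picked label set; both are hypothetical choices, not the team’s pipeline.

```python
# Illustrative sketch only: the zero-shot model and the fallacy labels are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = "Sure, our policy missed its targets, but look at what the other side did last year!"
related = [
    "Independent report: the policy fell short of its stated goals.",
    "Opposition statement shifts the discussion to unrelated past decisions.",
]

# Pack the target post together with related documents so the classifier sees
# the surrounding discussion rather than an isolated sentence.
context = "\n".join(f"- {doc}" for doc in related)
combined = f"Post: {post}\nRelated coverage:\n{context}"

labels = ["whataboutism", "appeal to the majority", "no fallacy"]
result = classifier(combined, candidate_labels=labels)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```

In practice, richer designs would retrieve related documents automatically and encode them jointly, but even this simple framing shows how added context changes what the model has available to reason about.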

In addition to improving the AI model itself, the researchers are committed to raising awareness about these issues through seminars and educational initiatives focused on media literacy. Associate Professor Ruobing Li, School of Communication and Journalism, said, “This research is about more than just advancing AI technology; it’s about creating tools that empower individuals to navigate the complex landscape of digital media.” Their goal is to enhance the ability of AI to detect harmful content, ultimately contributing to a healthier and more informed online environment. By improving how AI identifies misinformation, the team hopes to facilitate more meaningful public discourse, grounded in truth and understanding.

Alongside refining AI’s detection capabilities, Banerjee and Li are also focused on integrating their findings into educational frameworks. They plan to use their insights to help students, journalists, and the public develop a more critical understanding of misinformation. Incorporating these lessons into curricula and outreach programs will equip individuals to use AI tools to spot misinformation, whether it appears as a social media post, a viral meme, or a sensational headline.

Interim Director Steve Skiena, AI Innovation Institute, added, “The potential impact of Banerjee’s research is already evident. It could significantly improve the way we approach misinformation detection and media literacy.” By advancing AI’s ability to identify misleading content, the team at Stony Brook University is working to ensure that these technologies can be used to combat the spread of harmful content and contribute to a more informed society.

News Author
Ankita Nagpal, Communications Assistant