Student report: Benefits and risks of using AI for advocacy

Artificial intelligence (AI) tools like ChatGPT and Google Gemini are transforming how content is created, raising new problems around privacy and truth. At the same time, these tools can be incredibly useful for compiling information and drafting messages.

This spring, a group of Stony Brook students took a close look at the potential benefits and risks of using AI tools, particularly how they could enhance or hinder advocacy for social causes. The students created a guide to help advocacy groups consider how, when, or whether to use AI in their work.

“AI tools can help advocates in a variety of ways, whether editing their writing, generating images for social media posts, serving as a partner for brainstorming or a coach for public speaking, and even as an analyst of huge datasets about pressing societal issues,” said Matthew Salzano, course instructor and Inclusion, Equity, Diversity and Access (IDEA) Fellow in Ethical AI at the School of Communication and Journalism and the College of Arts and Sciences Program in Writing and Rhetoric. “But, as with introducing any new technology, these implementations can bring risks like shifting user priorities, breaking trust with audiences, or even contradicting advocates’ values.”

The report, “AI and Advocacy: Maximizing potential, minimizing risk,” was created by students in Salzano’s Writing 302 seminar, a course that explores different historical, political, or social topics each semester. The course is an elective in the writing and rhetoric minor and is open to students from across the university.

In the nearly 40-page report, students discussed some of the possible benefits of using AI tools for advocacy work, including expanding outreach to different populations, facilitating information gathering and analysis, and improving the training of AI tools themselves. They also cautioned about the risks of damaging credibility and trust in individuals and organizations, of violating privacy and copyright laws, and of the larger social costs of artificial intelligence, including potential job losses and the environmental cost of AI supercomputers’ vast energy consumption.

“Generative AI tools like ChatGPT are already changing the information ecosystem, and content creators of all kinds need to think carefully about how and when to use them,” said Laura Lindenfeld, dean of the SoCJ and executive director of the Alda Center for Communicating Science. “This technology is only going to become better and more pervasive, and I applaud Matthew and his students for creating such a thoughtful examination of the benefits and risks they pose.”

“The Program in Writing and Rhetoric is thrilled to have Matthew as an IDEA Fellow focusing on the latest in AI communications,” said Peter Khost, director of the Program in Writing and Rhetoric. “In this WRT 302 class, particularly, Matthew has demystified the wide spectrum of uses and abuses that AI presents, placing our students at the forefront of navigating the future in AI.”

Read more on SBU News.