The Reality of AI

SBU faculty embrace it as a collaborative tool to enhance learning

Artificial intelligence (AI) — the ability of a computer program or machine to learn, react and interact meaningfully — has been grabbing headlines for some time now, and for good reason. It is all around us, visible in ways we never imagined. With AI, computer programs can write stories, create art and music, and converse with humans. What they can't do is think for themselves; they can only work from information supplied by humans.

Despite that limitation, AI can seem smarter than it is, "listening" in on our lives in surprising ways, such as through targeted ads on social media feeds and by predicting which words we'll type next. Although AI helps us complete sentences and can home in on our individual likes and dislikes, there's no disputing that these same remarkable attributes can make some users wary of its abilities, especially given recent breakthroughs such as apps that create lifelike videos in mere minutes.

It can leave one wondering: Just what is AI capable of?

Linda O’Keeffe, Chair of the Department of Art, sees AI as a tool artists can partner with.

At Stony Brook, faculty across the entire campus have embraced AI while also girding for a paradigm shift that brings with it both hope and concerns.

“I think people worry about the wrong things,” said Steven Skiena, distinguished teaching professor in the Department of Computer Science and SUNY Empire Innovation professor and director, AI Institute at Stony Brook. “I’m not particularly worried about the machine becoming sentient and taking over the world and wanting to destroy people. That doesn’t seem to be a component of these models.”

Skiena said the modern approach to AI builds on a history of computer science dating back to the early 1950s. What we're seeing now is a moment of critical mass that caught many people, Skiena included, unawares, as AI models rapidly made the transition to the next level — and it became clear just how much they could suddenly do.

Nothing to Fear

“There’s always a fear of the unknown when it comes to AI or any technology,” said Susan Brennan, distinguished professor, cognitive science, Department of Psychology. “I remember one of my colleagues doing research on how people reacted to calculators initially, and how no one was going to know how to do math anymore because we’re all just going to use calculators. Now they’re an innocuous part of our everyday life. In the last year or so AI has gotten so into the mainstream that people are saying, ‘Oh my God, this just happened,’ but I’ve been working with AI since the mid-’80s.”

Distinguished Professor Susan Brennan said she and her PhD students will benefit from AI’s powerful statistical tools.

Still, Brennan acknowledges the strong opinions and fear surrounding AI.

“That’s because people don’t understand it,” she said. “What we’re doing here at Stony Brook is not only training the leaders of tomorrow on how to address the problems that AI tools and programs can perpetuate, but also communicating to the public that we can mitigate those fears. I think that the crux of what we do here is that education and training will help address those fears in a positive way.”

Both Skiena and Brennan attribute the enormous and ever-speeding progress we’ve seen in AI to a massive increase in the scale of data and computation.

“Instead of trying to build a system to understand language based on knowing the language you learned in grammar school, for example, it’s essentially given all the text in the world to figure out how language works, and these machine-learning systems have gotten amazingly good,” said Skiena. “That’s what you’re seeing now with ChatGPT.”

ChatGPT is a free AI system based on a large language model. It enables users to refine and steer a conversation toward a desired end, including length, format, style and content, with successive prompts and replies taken into account at each stage of the conversation. ChatGPT’s impressive and far-reaching capabilities, in addition to its ability to “create,” have visibly reopened the age-old discussion of the possibility of human obsolescence; in this case, will writers and artists — or indeed all thinkers — be replaced by intuitive technology? However, some say “not so fast.”

“AI is great at telling us what’s already been written,” said Celia Marshik, dean of the Graduate School and vice provost for graduate and professional education. “What’s the consensus about how we should understand the ending of James Joyce’s Ulysses? Or what’s the consensus about whether you should use the Oxford comma? AI can tell you those things. What AI can’t do is create new knowledge.”

Enhancement or Cheating Device?

Along with ChatGPT comes another age-old question: how to fend off cheating and plagiarism.

“That story has always been part of academia,” said Marshik, who is also a professor of English. “How can the fringe student cheat the system? And when you get into something like writing, it’s a prime target. As a poet, I find it harder to imagine how to use AI. Poetry is so much about knowing the idea you’re thinking about and the emotion and feeling you’re trying to convey. AI can’t give that to you.”

But although Marshik said she doesn’t see AI as a tool for her as an artist, she envisions it as an education tool.

“I feel like I know the things AI would tell me,” she said. “Where I could see it being useful is for people entering the field. I work on early 20th century writers and some of them have had so many books and articles published about them that it’s really difficult for a student to even enter a critical conversation because there might be 8,000 articles on an author. If you’re writing a five-page paper, you just need to know [who the two or three critics are that] have really thought about this subject. AI can help there.”

Skiena said the significant — and improving — capabilities of AI have made people in all disciplines pay closer attention.

Distinguished Teaching Professor Steven Skiena has been working with AI since the ’80s. While AI is not new, its development has now reached a critical mass.

“We’ve been watching these models get better and better,” he added. “But the concern that suddenly AI is going to be writing documents and what my students might turn in as homework…you don’t really see that. But when you’re submitting papers to conferences these days, you have to submit ethics statements. So, there is certainly an increasing awareness of what AI can potentially do.”

Linda O’Keeffe, chair and a professor in the Department of Art, said that part of the fear of AI comes from technology encroaching into creative fields where it’s already hard to make a living.

“As in many fields, when another industry threatens to take away the little bit of work and the little bit of earnings we have, that can immediately put the walls up,” she said. “The resources are already limited!”

But O’Keeffe added that artists generally embrace new tools that can potentially enhance their art.

“We fundamentally love new stuff, no matter what medium or field we work in,” said O’Keeffe, who is also an artist. “If someone is a painter and there’s a whole new way to think about color, it’s just a new process. There’ll be artists thinking of how AI could potentially transform their practice, and how it can shape their way of thinking. I don’t think artists are resistant to things that can enhance or excite their practice. And I think this AI technology — like a lot of other technologies — has that potential.”

O’Keeffe spoke about artists who integrate AI into their palette.

“I have friends on Instagram who are educators sharing works that their students are doing in collaboration with AI technology,” she said. “They’re sort of teaching students to engage with the technology as a collaborator, to think about how they can work together to create new ways of seeing the world, new ways of presenting concepts to the world. And I see that there are people making comments like, ‘these are amazing images!’”

A New Way to Think

But are these images produced by students — or images produced by AI?

Celia Marshik, dean of the Graduate School, sees AI as a great education tool but not something that would help writers convey emotion.

“It’s both,” said O’Keeffe. “The students are engaging with this technology. It’s a constant conversation that feeds the program information. It’s not like, ‘this is what I want you to produce’ and it produces it. It doesn’t work like that. It’s ‘how about we think about it this new way? How about you bring this kind of image into it? How about you change the light?’ AI is a tool. It’s a collaboration. It’s just on a different level from the tools and collaborations we’re used to.”

The bigger problem, she said, is not what AI can do, but what it can’t.

“I look at it like I’m educating ChatGPT, not the other way around,” said O’Keeffe. “I’m a feminist theorist, and I explore many areas of my practice and papers and books. I’m well aware of the discourse emerging about how AI technology is shaped by what we put into it. And the information it has access to is creating knowledge that is explicitly shaped by racism or sexism or classism because it’s accessing knowledge resources that are already filled with that stuff, basically the internet. So, I feel it’s my responsibility to introduce it to different forms of knowledge and how to think about that.”

O’Keeffe doesn’t consider AI to be true intelligence.

“It’s not its own independent intelligence; it’s not making art all by itself and going into galleries,” she said. “And I think if it comes to that time, if we ever reach that period where we have true artificial intelligence, it’s not going to be interested in making art or doing anything for us. Why would it? Why would we be an audience a free-thinking AI will be interested in? But for now, we can partner with it. We can collaborate with it, and we can see where those collaborations take us.”

Doing so means acknowledging that AI will be a critical tool for students — and for the rest of us — from this point forward.

“For me, the most exciting thing right now is AI and education and how to prepare students who are in higher education,” said Brennan. “We mainly work with PhD students, but this affects a lot of students at different levels. Our PhD students especially need to be prepared and incorporate AI into their toolkits. We’re all data scientists that are making decisions and discovering things and testing hypotheses based on evidence. And the way we process that evidence is our statistical toolkit. Now we can take advantage of more powerful tools based on deep learning technology.”

AI is already integrated campuswide, Skiena said. “Stony Brook’s Computer Vision Program is among the top 10 in the country. It has about 40 graduate students and is at the cutting edge of all technologies involving vision. This includes medical technologies where we’re looking at tissue samples, for example, and trying to understand and diagnose.” He added, “We also have a large group working in natural language processing, which is in Computer Science and Linguistics. Computational Linguistics — the study of language as a computational problem — is big here right now. One specialty involves psychology and trying to understand and measure mental health.”

Brennan and members of Stony Brook’s BIAS-NRT Program — which uses data science and AI to overcome perpetual societal biases that give some people an advantage and others a disadvantage — are working on something called the Post-Conviction Project, which finds patterns in data with the goal of getting innocent people out of prison.

“We did a more transparent version of machine learning that we hoped to develop into a tool to help organizations that work with poor incarcerated people,” said Brennan. “The fundamental question is: ‘With limited resources, can we help this person or not?’”

To help answer this, the group is using a database of exonerated people to try to provide tools to help them predict success. Brennan said other projects are using AI to research issues such as the racial wealth gap, climate justice and the negative impacts of facial recognition.

Yongjun ‘Josh’ Zhang, an assistant professor in the Department of Sociology and Institute for Advanced Computational Science, has, since the COVID-19 pandemic, been leveraging AI for a project dedicated to detecting and monitoring anti-Asian hate speech online. With grant funding, he was able to collect and work with a large-scale data set from X, formerly known as Twitter. Zhang said AI has not only transformed computational science, but also computational psychology.

Josh Zhang in the Department of Sociology is using AI to monitor anti-Asian hate speech online.

“We’re using AI to try to detect emotions and sentiments on social media,” he said. “With the large-scale data capabilities we have now, we’re talking about millions and millions of tweets. We’re looking at more than 100 million tweets right now. With these new tools, we can retrain larger models and fine-tune our modules to detect these emotions and sentiments. Moving forward, this work will facilitate the development of computational sociology.”

This is, of course, only the beginning of how AI will help enhance research outcomes. One thing is certain, however: It’s growing up, fast. “AI has a parallel to having kids,” said Skiena. “They get a little bit bigger, they get a little bit smarter and suddenly one day they can do something visible. That’s what we’re seeing with the recent progress of models like ChatGPT. The process has built up over at least a decade, and suddenly it crosses a threshold and does tasks that are really kind of mind-blowing.”

A drawing of a branch by Linda O’Keeffe. At right, an enhanced version of the branch that O’Keeffe created in conversation with the AI feature in Adobe Photoshop.

Rob Emproto