Professors Xiaojun Bi and IV Ramakrishnan from the Department of Computer Science received a $1.3M R01 award from the National Institutes of Health's National Eye Institute (NEI) for the project "Intelligent Text Input and Editing Methods on Smartphones for Blind Users."
How often do you wonder whether the photo you just took on your phone is safe? Are you sure no one else will see it? Or that it won't be used by Snapchat, Meta, or Midjourney to train their AI models? The latest social media trends populating our feeds are images and videos edited by these AI tools, and when those tools ask users to sign over their rights, they create a disturbing sense of mistrust, raising the question: have we given up on privacy in the name of progress?
How often do you wonder where your day went, whether it was as productive as you wanted it to be, and if not, what disrupted your schedule? For over a decade, people have been using wearable devices such as smartwatches to count their steps, measure their heart rate, and track their sleep patterns. But what if you could also check how much time you spent cooking dinner, ironing your clothes, or playing the guitar?
A Stony Brook University study suggests that on average, U.S. adults have gained confidence in the capabilities of AI and grown increasingly opposed to extending human rights to advanced AI systems.
Part ten of our AI Researcher Profile series invites Dimitris Samaras, SUNY Empire Innovation Professor and Director of the Computer Vision Lab in the Department of Computer Science at Stony Brook University, to discuss his research interests and his insights into human behavior and its role in artificial intelligence.
Conversational AI has come a long way from the basic chatbots that offered scripted responses, caused more frustration than help, and ultimately handed you off to a live agent. The technology has since evolved tremendously, owing to the advent of neural networks, the transformer model, and OpenAI's GPT-4. These complex systems, now in wide use across industries including education, healthcare, finance, and voice assistant technologies, are becoming increasingly helpful in everyday life.
Misinformation can harm people’s health as they find and act on information designed to trick and mislead them online. Worse still, misinformation generated by artificial intelligence (AI) is becoming more prevalent and harder to detect, exacerbating the negative effects of human-generated misinformation.
And racial groups whose health is already vulnerable, including Black and Hispanic populations, are most likely to be harmed as AI-generated misinformation is created particularly to engage — and mislead — them.
All of this seems clear from a growing body of research. What is less clear is what can be done about it.
Stony Brook researchers collaborated with academic centers and AI labs around the world to advance machine learning, robotics, and computer vision. Their latest research will be presented at the 37th Conference on Neural Information Processing Systems (NeurIPS), the most cited AI conference in the world.
The conference, held in New Orleans from Sunday, Dec 10 through Saturday, Dec 16, is a multi-track, interdisciplinary annual meeting featuring invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers, all in a setting that encourages a less formal exchange of ideas.