Abstract: As intelligent systems become more integrated into human environments, fostering trustworthy human-AI collaboration presents a pressing challenge. In this talk, I examine the interplay between an agent's performance and social dynamics in shaping trust in human-AI interactions. My approach combines testbed development, behavioral prototyping, and user study design to create controlled experimental setups that capture real-world interaction complexities, such as ambiguity, multi-agent dynamics, and conflicting goals.

I illustrate this with a recent VR study on multi-user interaction with an autonomous vehicle (AV). Moving beyond dyadic interactions, the study probes human perspectives from the roles of a pedestrian, driver, and AV passenger, all interacting with the AV simultaneously at an ambiguous all-way stop sign intersection. We compare interactions with efficient and prosocial AV behavior strategies, revealing diverging trust perceptions and preferences across user roles. These insights inform a broader research trajectory focused on balancing performance with social considerations in designing trustworthy human-AI collaborations.

Bio: JiHyun Jeong is a postdoctoral researcher at Cornell University working on human-computer interaction and human-robot interaction. Her research develops prototypes and methods to explore performance and social factors that influence collaboration and trust between humans and artificial agents. She holds a Ph.D. and MPS in Information Science from Cornell University, and a BSc in Computer Science and Engineering from Korea University. She is a recipient of a Best Paper Honorable Mention at DIS.

Zoom: https://stonybrook.zoom.us/j/98738234619?pwd=djJFQXBWbkpmblZDT25zNlVMYWpCQT09

Meeting ID: 987 3823 4619
Passcode: 474618

Discover how U.S. Census Bureau tools can help you find free data for your research projects, community, and more. See how to access the latest American Community Survey and 2020 Census data for various geographies, including New York City and Long Island, at data.census.gov. Learn about Community Resilience Estimates and how to navigate My Community Explorer, an interactive map-based tool that highlights demographic and socioeconomic data measuring inequality. This session will involve live demonstrations and hands-on exercises for participants. Registrants will receive the Zoom link one day prior to the event.

Please register for SBU Libraries' AI Club: Exploring Census Data here.

The Future Histories Studio will host Young Maeng, an artist and professor at California State University, Fresno, for a talk exploring the intersection of artificial intelligence (AI) and traditional painting, examining how two seemingly disparate fields can converge to create new artistic expressions.

The lecture is part of the Future Histories Studio series at Stony Brook University, a platform dedicated to examining the evolving relationship between technology, art, and society.

Maeng will discuss her innovative approach to expanded painting, an integration of AI-generated images and traditional techniques such as Korean ink and acrylic painting. Through this fusion, she visualizes complex philosophical and ethical questions about the coexistence of humans, nature, and AI companion robots. The lecture will highlight the broader implications of AI in the art world, touching on how AI technologies challenge conventional notions of creativity and human-centric perspectives in art.

Speaker Bio:

Young Maeng is an artist and professor at California State University, Fresno, whose work explores the intersection of artificial intelligence (AI) and traditional painting techniques such as Korean ink and acrylic.

Maeng's innovative approach to expanded painting blends AI technology with traditional methods to visualize complex philosophical and ethical questions surrounding the coexistence of humans, nature, and AI companion robots.

Location: Future Histories Studio
Register here: https://www.eventbrite.ca/e/ai-and-painting-tickets-1021050809457?aff=oddtdtcreator

The University's Main Commencement Ceremony will take place on Friday, May 23, 2025, at 11 am in Kenneth P. LaValle Stadium. Gates open at 10 am.

All guests need a valid ticket to enter LaValle Stadium - no exceptions. Children age 1 and older require a ticket. Seating is first-come, first-served.

Register here.

Join the Conversation: Share Your Thoughts about Learning, Academics, and AI

The world of college is changing fast, and Artificial Intelligence (AI) is at the center of it. We are part of the Institute on AI, Pedagogy, and the Curriculum with AAC&U, and we need to hear from the people AI affects most: you!

This is an open discussion for all students to share their honest experiences, their top concerns, and their best ideas about AI in our academic environment. We'll be diving into these key questions:
  • How can AI actually make learning better or easier? What opportunities do you see for using AI tools to enhance your assignments, research, or skills?
  • What are your biggest worries about AI? Is it about cheating, being graded fairly, or preparing for the job market? How is AI impacting your workload or stress levels?
  • What specific tools, workshops, or policies would help you use AI responsibly and successfully? (Think training, software, or clear rules.)
Date: Monday, December 1st
Time: 12:30pm-1:45pm
Location: West Campus - Location TBD
or
Date: Wednesday, December 3rd
Time: 10:30am-11:45am
Location: East Campus - HSC 2-154B

Please register in advance so we can confirm the room.

Note: Videos will not be shared publicly and comments will only be shared in aggregate.

Your voice matters. Come tell us how AI is affecting your studies, your stress, and your success!
  • Dr. Rose Tirotta-Esposito (Assistant Provost; Director of CELT)
  • Dr. Elizabeth Hewitt (Associate Professor in the Department of Technology and Society (DTS) in the College of Engineering and Applied Sciences)
  • Chris Kretz (Associate Librarian and Head of Academic Engagement at SBU Libraries)
  • Prof. Rajiv Lajmi (Assistant Professor in the School of Health Professions and Chair of Applied Health Informatics)
  • Dr. Matthew Salzano (Assistant Professor in the Department of Communication in the School of Communication and Journalism)

You hear that AI can help you write, that it can save you time, leverage your skills, and enhance your productivity. But you also hear that AI output is unreliable, inadequate for advanced tasks and learning, and unethical to use, and that you could get in deep trouble for using AI tools without adequate mastery and caution. Which is it?
Come join this hands-on workshop where you will explore AI tools and their affordances. Engage in writing tasks to learn how to use AI tools effectively and responsibly.
Sign up for a seat now: https://docs.google.com/forms/d/e/1FAIpQLSd0iDTKkTYnkxFd4LkgqbtP97zQSS4FI_MiPVm7p6IY5SGwSg/viewform

Abstract: Autonomous systems, whether on Earth or in space, rely on 3D perception to understand and interact with the world around them. Yet traditional techniques for 3D understanding often depend on human-designed features, fixed sensors, and conventional imaging modalities. This constrained approach can limit every stage of perception, from sensing to interpretation to decision making.
In this talk, we'll explore an alternative paradigm for imaging: physically based neural representations for 3D scenes and 3D sensing systems. We will discuss how recent advances in large-scale learned representations can be used to jointly optimize both 3D scene models and the design of sensing systems for 3D capture, with the goal of enabling task-specific perception systems.
Unlike modern AI models trained on internet-scale datasets, these specialized 3D representations typically operate in data-sparse regimes and therefore require a different kind of prior. We'll examine how grounding these learned representations in the physics of light transport can improve our understanding of scene structure and inform imaging system design even with limited data. By connecting physical insights with learned representations, we'll highlight new possibilities for robust, efficient, and adaptive perception in challenging environments.

Speaker: Nikhil Behari is a graduate student in the Camera Culture group at the MIT Media Lab, advised by Professor Ramesh Raskar. His research interests include computational imaging, 3D scene understanding, and multi-agent decision-making under uncertainty, with a focus on automating imaging system design for 3D perception in human and planetary health. His research is supported by the NASA Space Technology Graduate Research Fellowship. He received his bachelor's in Computer Science and Statistics from Harvard University in 2022.