Zoom Like a Pro! Unlock Whiteboard, Polls, AI Companion, and more to supercharge student participation. This hands-on workshop explores innovative ways to use Zoom's built-in tools to enhance active learning activities in your classes. Learn how to use the Whiteboard feature to make collaborative work more engaging, Polling and Quizzes for instant feedback, AI Companion for meeting summaries, and Breakout Sessions for group activities.

Register here: https://stonybrook.zoom.us/meeting/register/tJckf--rpj4pGdRV0ItgTW8Lk7g…

Abstract: Generative visual models like Stable Diffusion and Sora generate photorealistic images and videos that are nearly indistinguishable from real ones to a naive observer. However, their grasp of the physical world remains an open question: Do they understand 3D geometry, light, and object interactions, or are they mere pixel parrots of their training data? Through systematic probing, I will demonstrate that these models surprisingly learn fundamental scene properties--intrinsic images such as surface normals, depth, albedo, and shading (à la Barrow & Tenenbaum, 1978)--without explicit supervision, which enables applications like image relighting. But I will also show that this knowledge is insufficient. Careful analysis reveals unexpected failures: inconsistent shadows, multiple vanishing points, and scenes that defy basic physics. All these findings suggest these models excel at local texture synthesis but struggle with global reasoning: a crucial gap between imitation and true understanding. I will then conclude by outlining a path toward generative world models that emulate global and counterfactual reasoning, causality, and physics.

Bio: Anand Bhattad is a Research Assistant Professor at the Toyota Technological Institute at Chicago. He earned his PhD from the University of Illinois Urbana-Champaign in 2024 under the mentorship of David Forsyth. His research interests lie at the intersection of computer vision and computer graphics, with a current focus on understanding the knowledge encoded in generative models. Anand has received Outstanding Reviewer honors at ICCV 2023 and CVPR 2021, and his CVPR 2022 paper was nominated for a Best Paper Award. He actively contributes to the research community by leading workshops at CVPR and ECCV, including Scholars and Big Models: How Can Academics Adapt? (CVPR 2023), CV 20/20: A Retrospective Vision (CVPR 2024), Knowledge in Generative Models (ECCV 2024), and How to Stand Out in the Crowd? (CVPR 2025). For more details, visit https://anandbhattad.github.io/


Abstract: AI has achieved remarkable advancements in image recognition and natural language processing. However, its applications in Earth and environmental sciences are still emerging. Unprecedented data from satellites, sensors, and in-situ measurements offers new opportunities to improve physics-based models and forecasts of environmental systems with AI and to gain deeper insights into these phenomena. Extreme systems, such as weather and climate events, pose distinct challenges for AI, such as limited sampling of rare events, non-trivial data augmentation, errors-in-variables, and complexities of transfer learning across diverse tasks. In this talk, we will explore some of these challenges and showcase AI architectures designed to address them. We will use specific examples of forecasting dust storms, precipitation extremes, flash floods, and drought events in the Middle East. Finally, we will discuss a different AI approach for studying sinkhole formation in the Dead Sea.

Speaker: Prof. Yinon Rudich, Department of Earth and Planetary Sciences, Weizmann Institute, Israel


Join Zoom Meeting
ID: 98731258879
Passcode: cJjGQJqP

Abstract: The development of embodied AI has largely focused on scaling data and computational power, often at the cost of energy efficiency. In contrast, biological intelligence achieves remarkable adaptability with minimal resources, inspiring a shift toward neuromorphic AI, an approach that mimics the structure and dynamics of biological neural systems. In this talk, I will explore the promises and challenges of neuromorphic computer vision from three key perspectives: algorithms, robot actions, and data. First, I will discuss algorithmic advances, including continuous visual hull reconstruction, continuous-time human motion field estimation, and unsupervised independent motion segmentation. Next, I will illustrate how neuromorphic vision enables agile robotic actions by leveraging event-based perception for real-time decision-making. Finally, I will address challenges in training data-driven models with event data, highlighting strategies to enhance data availability and efficiency. By integrating these elements, neuromorphic AI paves the way for energy-efficient, high-performance embodied intelligence in dynamic real-world environments.

Speaker Bio: Ziyun (Claude) Wang is a fifth-year Ph.D. student in the General Robotics, Automation, Sensing & Perception (GRASP) Lab at the University of Pennsylvania, advised by Professor Kostas Daniilidis. His research focuses on developing algorithms for neuromorphic computer vision and integrating them with real hardware to enable agile perception in embodied AI systems. Prior to his Ph.D., he worked at the Samsung AI Center New York, where he developed 3D reconstruction techniques for robotic applications and earned three patents. He also contributed to the Apple Vision Pro team, enhancing user comfort for AR glasses. His research work has been recognized at major computer vision, robotics, and machine learning venues, including the AAAI Conference on Artificial Intelligence (AAAI), the European Conference on Computer Vision (ECCV), the International Conference on Learning Representations (ICLR), Conference on Computer Vision and Pattern Recognition (CVPR) workshops, and IEEE Robotics and Automation Letters (RA-L), with an oral presentation at ECCV placing in the top 2.7%. His research aims to drive the development of next-generation bio-inspired AI systems, enabling more efficient, adaptive, and intelligent embodied perception.

Abstract: Astronomers slowly made sense of the cosmos by following the stars night after night. I suggest we examine human identity in a similar way. Let's observe the words individuals use to describe themselves day after day. In this presentation, I will introduce ipseology - a new approach to studying human selves. Ipseology is the systematic, empirical study of ipseity: selfhood, individuality and the elements of identity. The primary idea is that we can learn a lot about people from their self-authored self-descriptions - especially if we follow their revisions over time. I will discuss results from sampling millions of social media bios over more than a decade and present new approaches for observation in the Post-API age.

Bio: Dr. Jason Jeffrey Jones is a computational social scientist whose expertise includes online experiments, social networks, high-throughput text analysis and machine learning. He is interested in humans' perceptions of themselves and the developing role of artificial intelligence in society.

Dr. Jones is the director of CSSERG (pronounced sea surge): the Computational Social Science of Emerging Realities Group. CSSERG is a team of scholars committed to cross-disciplinary collaboration, united by common computational methodologies and always with eyes on the near future. CSSERG has studied the effectiveness of virtual reality in evoking empathy, the dynamics of gender stereotypes in language over decades, and temporal trends in personally expressed identity.

This seminar will take place in person and online (zoom link below):

Join Zoom Meeting
https://stonybrook.zoom.us/j/93686609778?pwd=KdHVyIbU3ymML6hTchXsm6JLYKLSru.1

Meeting ID: 936 8660 9778
Passcode: 638699

The Future Histories Studio welcomes Moontae Lee, LG AI Research.


Generative AI is transforming how we understand, create, and interact with information. Large Language Models (LLMs) comprehend contexts, answer non-trivial questions, and spark creative ideas. This talk introduces the evolution of these models, highlighting the most recent advancements in planning, reasoning, and evaluation. The talk also touches on critical considerations for both model developers and users, carefully addressing limitations of LLMs as well as ethical and societal implications. Finally, the talk presents ongoing directions in research and production: from the rise of personalized AI agents to the future frontiers of AI.

Moontae Lee is the Director of the Superintelligence Lab at LG AI Research and an Assistant Professor of Information and Decision Sciences at the University of Illinois Chicago. His journey with Large Language Models began as a visiting scholar at Microsoft Research in 2019, and he continued consulting for the Deep Learning Group in Redmond until joining LG. He holds a PhD in Computer Science from Cornell, an MS from Stanford, and BS degrees in Computer Science, Mathematics, and Psychology from Sogang University. He has been an area chair for major AI conferences and has earned recognition in Operations Research and Computational Social Science, including awards from INFORMS and Amazon.

His research interests include:
● Computational Creativity, Algorithmic Awareness
● Retrieval-Augmented Generation and Evaluation
● Code Generation, Reasoning, Planning
● Fine-grained Alignment from Human/AI Feedback in Generative AI
● Large Time-series Models, Diffusion/Consistency
● Machine Unlearning
● Ranking Monopoly, Voting Fairness
● AI Safety, Ethics, and Market Impacts

Join us in person @ the Future Histories Studio, Staller Center for the Arts, Room 4222

The Renaissance School of Medicine Department of Scientific Affairs and its Single Cell Genomics facility are excited to host a special seminar and discussion on AI and single-cell genomics analysis:

With the decreasing cost of sequencing, many biobanks and large research cohorts have moved to whole genome sequencing (WGS) and single-cell RNA-seq. However, making use of this deluge of data remains a challenge. I will discuss statistical and deep learning approaches that we are exploring to address the challenge of noncoding variant interpretation, including our work as part of the Alzheimer's Disease Sequencing Project.

Speaker: David A. Knowles, PhD, Assistant Professor of Computer Science and Interdisciplinary Appointee in Systems Biology at Columbia University; Core Faculty Member, New York Genome Center

Join us in person: Health Science Tower Level 3, Lecture Hall 5

Description:

Curious about what AI image generation tools are out there and how they work? Come down to the library Galleria space (outside the Central Reading Room) to see some demonstrations and learn more about them.

Librarians Chris Kretz and Ahmad Pratama, along with David Ecker of DoIT, will be hosting Explore AI demos from Monday through Wednesday this week, each on a different topic. Whether you're new to AI or an experienced user, stop by and take a look!

Location: Library Galleria

Abstract: Recent studies have highlighted the vulnerability of Natural Language Processing (NLP) and Vision-Language Models (VLMs) to backdoor attacks, posing significant security risks. Understanding these attack strategies is crucial for assessing model robustness and developing effective defenses. This thesis proposal aims to investigate the vulnerability of language and vision-language models, analyze abnormal behaviors in backdoor-attacked models, and develop defense methods to enhance the safety of modern machine learning models at deployment.


We investigate the internal mechanisms of backdoored NLP models, identifying a distinct attention focus drifting phenomenon, where trigger tokens hijack attention regardless of the input context. Through comprehensive qualitative and quantitative analysis, we provide insights into the underlying mechanisms that enable backdoor attacks. Building on these insights, we propose detection methods to differentiate backdoored models from clean ones by inspecting both the attention distribution and the model predictions.

To better understand the vulnerability, we develop advanced backdoor attack strategies targeting language models in classification tasks. For BERT variants, we introduce Trojan Attention Loss (TAL), a novel method that directly manipulates attention patterns to enhance backdoor effectiveness, ensuring stealth and robustness.

Vision-Language Models have demonstrated strong performance in recent years, yet their vulnerability remains largely underexplored. We investigate advanced backdoor attack strategies on Vision-Language Models, focusing on image-to-text generation tasks. We demonstrate how backdoors can be embedded in complex multimodal tasks while maintaining semantic integrity under poisoned inputs. Additionally, we propose innovative techniques for injecting backdoors without requiring access to the original training data, expanding the feasibility of real-world attacks.

This proposal provides novel insights into the internal mechanisms of backdoored models, proposes effective detection strategies, and develops advanced attack techniques that expose critical vulnerabilities. These findings underscore the urgent need for robust security measures to defend against emerging backdoor threats in deep learning models. The results have been published in top venues including ICLR, ECCV, NAACL, and EMNLP.

Speaker: Weimin Lyu


Zoom link: https://stonybrook.zoom.us/j/99880605139?pwd=cfWbRG6n9v3GXEa7OqvXa5cOp5eLBv.1
Meeting ID: 998 8060 5139
Passcode: 843302

Learn how to unlock the power of images and visuals to enhance your work by asking the experts questions in person.

No registration required - just stop by!

Location: Frank Melville Jr. Memorial Library Galleria (across from the Central Reading Room)