Location: NCS 120

Algorithmic Trading with Large-Scale Deep Learning


At XTX Markets, we view algorithmic trading as one of the most compelling real-world frontiers for deep learning and foundation models. Every day, our systems generate forecasts for tens of thousands of financial instruments and execute over $300B in global trading volume: fully automated, with no discretionary human intervention. This domain combines massive data scale with high noise, adversarial dynamics, and frequent regime shifts, making it both scientifically challenging and commercially impactful. For machine learning researchers, it serves as a rigorous proving ground where advances in time-series modeling, large-scale optimization, representation learning, and foundation models can translate directly into measurable real-world outcomes. This talk will provide a high-level overview of our research agenda, infrastructure, and key open challenges at the intersection of large-scale AI and quantitative finance.

Biography:

Dr. Zhangyang Atlas Wang is the Research Director at XTX Markets, one of the world's leading high-frequency trading firms. He founded and leads the firm's AI Lab in New York City, focused on developing large-scale foundation models for financial time series and market data, powered by XTX's proprietary AI infrastructure. He is currently on leave from his position as the Temple Foundation Endowed Associate Professor at The University of Texas at Austin. His academic research has received numerous awards, and he has mentored a broad network of Ph.D. students and postdoctoral researchers. Many of his alumni now hold tenure-track faculty positions (eight to date) or senior research roles in industry (nineteen and counting). For more information about his group and alumni, please visit: https://www.vita-group.space/team.


Refreshments will be served after the seminar in the first-floor atrium.

What do multiple sclerosis and algorithms have in common?

Seemingly very little: one is a complex neurologic disease, while the other is a mathematical framework that powers everything from GPS systems to artificial intelligence. But at Stony Brook University’s latest Provost’s Lecture Series event, both were presented as examples of the same essential pursuit: understanding what lies beneath the surface.

Stony Brook, NY, February 22, 2026 — Scroll through Instagram or X long enough, and you’ll see it — a reel insisting that “fruits are citrus, so you shouldn’t eat them with milk,” a thread warning “protein shakes wreck kidney function,” a carousel promising “this workout routine will fix your PCOS in 30 days.” Every third post seems to offer a health hack, often backed by a chart, a DOI link, and just enough scientific language to sound convincing.

But behind those posts is a tangle of dense scientific research that few people ever read.

Two Stony Brook University research initiatives were awarded seed funding through the SUNY Technology Accelerator Fund (TAF), which supports groundbreaking research opportunities and helps faculty inventors and scientists turn their research into market-ready technologies.

SUNY TAF targets critical research activities such as feasibility studies, prototyping, and testing that demonstrate an idea or innovation has commercial potential. The goal is to accelerate these innovations' time to market and increase their readiness for potential investors, strategic partners and customers.

Researchers have long recognised that for artificial intelligence to truly collaborate with people, it must accurately anticipate human intentions. Peter Zeng, Weiling Li, and Amie Paige of Stony Brook University, alongside Zhengxiang Wang, Panagiotis Kaliosis, Dimitris Samaras, et al., investigated how Large Visual Language Models (LVLMs) establish ‘common ground’ during communication, a fundamental aspect of human interaction. Their new study, based on a referential communication experiment and a unique dataset of 356 human and machine dialogues, reveals a significant limitation in LVLMs’ ability to interactively resolve ambiguous references.