Niranjan on AI and NLP: Language Was Never Just About Words

Part twelve of our AI Researcher Profile series features Niranjan Balasubramanian, associate professor of Computer Science at Stony Brook University, who discusses his research interests and his experience teaching language to artificial intelligence.

Niranjan Balasubramanian Profile Image

AI Institute: You were a postdoctoral researcher at the Turing Center at the University of Washington, and you have a Ph.D. in Computer Science from the University of Massachusetts Amherst. What sparked your interest in Computer Science and NLP, and what are your current areas of research?

Niranjan Balasubramanian: My interest in computer science started almost by chance. I was good at math and science growing up, and a family member suggested computer science would be a good field. I pursued it, but during my undergrad, I wasn’t particularly drawn to technology for its own sake.

That changed in my master’s program at Buffalo. I took a course in cognitive science with Professor Shapiro, which opened up the philosophical side of natural language processing. Our discussions revolved around understanding whether any arbitrary technology could process language in the way humans do. Around the same time, my mentor there was working on information extraction. Seeing how something that seems simple for us is so difficult for computers pulled me in.

It also made sense because I’ve always loved language. I read a lot, I like to write, I enjoy wordplay and poetry. NLP felt like a natural place for me within computer science. Now my work focuses less on the mechanics of language alone and more on what language allows us to do. I’m interested in reasoning and decision-making through language, building agents that can help people accomplish tasks by understanding context, goals, and reality — not just words on a page.

AI: How has your focus and work in NLP changed over time?

NB: My view of the field has evolved gradually. I don’t see language processing as only about parsing words or sentences. Sentences are never just words; they’re about what the words do in the world. If you want a system to understand a language, it needs to understand the culture, context, and reality of its speakers. That shift — from text as an object to language as an interface for reasoning and action — has been central to my projects.

AI: NLP research has seen tremendous growth over the last few years, more so since OpenAI’s ChatGPT was released to the masses in 2022. How has this affected your research interests?

NB: Large language models like ChatGPT crystallized the idea for me: if you want real language understanding, you need everything. You need world knowledge, common sense, syntax, and context. Scaling models has made that possible, but it comes at a huge computational, financial, and environmental cost.

That’s why efficiency became a central concern for me early on. I’ve been interested in it since my PhD at UMass Amherst, where I worked on building information access systems that could run on small devices. Now it feels even more urgent. Efficiency matters for three reasons: lowering costs, reducing energy use, and broadening access. It’s also intellectually exciting. Having constraints forces us to come up with better ideas.

Niranjan Balasubramanian's Lab on an Outing
Niranjan with his lab students.

AI: Could you tell us a little bit about your ongoing NLP research?

NB: A lot of my current work focuses on reasoning and action through language. One direction is building agents that can handle everyday tasks — things like planning an outing with friends, coordinating schedules, figuring out preferences, and interacting with apps. Here, the interface is language, but the challenge is planning, making decisions, and acting in the real world.

We’re also deeply focused on reliability. If these systems are going to automate tasks for us, they need to be robust. That means considering: what do we care about in a specific problem setting? How do we evaluate it? We build benchmarks and datasets around these criteria, analyze where models fail, and then target improvements. This evaluation-first ethos is central to my lab.

Another project I’m excited about is making smaller models and multi-agent systems competitive with large models. Big models can solve complex tasks, but they’re costly. Smaller models are cheaper, but less capable. We’re experimenting with curriculum learning and multi-agent collaboration to close that gap. The goal is to improve efficiency without sacrificing capability.

AI: What are some of the pitfalls and challenges in today’s technology that NLP researchers should be mindful of?

NB: We are always hopping from one artifact to the next, one technique to the next. I firmly believe there is a need and space for all kinds of research. In fact, experimental projects yield the most value when they lead to a foundational understanding of how things work or why they work. Slow work has merit even in a rapidly evolving field such as ours. My advice is, if you want to work carefully, even if it takes time, there is a place for it. Do not be deterred by the breakneck pace at which things seem to be moving.

AI: What does the future of NLP hold for us?

NB: I don’t think NLP has ever been just about language. It’s about meaning in context. That’s where I see the field heading — toward agents that don’t just parse sentences, but reason across reality itself. Going forward, models won’t only process text, they’ll take in streams of reality: sound, images, video, location, even what’s happening on your device. That creates new opportunities, but also tremendous challenges.

Imagine asking a question while the system also takes in the sounds around you, your location, or what other apps you’re using. That’s the frontier I’m most interested in — building models that reason safely and act responsibly in a complex, multimodal world.

AI: What advice would you share with those who are mentoring, or wish to mentor future generations, in the fields of NLP and human behavior?

NB: With NLP, we are able to target increasingly complex and quintessentially human endeavors that span art, health, law, and society as a whole. There’s decades — and in some cases even centuries — worth of work in these areas. Engaging the research communities in these fields and building upon their foundational knowledge will result in useful, sustainable, and responsibly developed systems.

News Author

Ankita Nagpal