Dr. Paul Fodor, associate professor of practice in Stony Brook University's Department of Computer Science, is part of a team awarded a contract by the Defense Advanced Research Projects Agency (DARPA) to improve how artificial intelligence systems interpret complex regulatory language.
The project, led by RTX BBN Technologies, addresses a critical challenge: current AI systems struggle to accurately interpret regulations, which are often filled with exceptions, nuances, and complex legal terminology. Dr. Fodor joins researchers from BBN and Johns Hopkins University in developing DONATELLO (DeONtic AssistanT Enabled by Large Language Models), a system designed to translate intricate regulatory texts into actionable logic.
"Regulatory texts are complex, full of exceptions and nuances that challenge traditional programming," said Dr. Brian Ulicny, principal investigator at RTX BBN Technologies. "We need systems that can explain why specific actions are permitted or prohibited in ambiguous situations," he said in a statement previously shared by RTX about the project.
Regulatory Compliance in the Age of Artificial Intelligence
The initiative supports DARPA's Human-AI Communication for Deontic Reasoning Devops (CODORD) program, which aims to enable AI systems to effectively reason about obligations, permissions, and prohibitions. Even advanced AI systems can misread rules that are ambiguous or dense with exceptions.
In military and government operations, personnel often lack immediate access to legal experts who can provide guidance on regulatory compliance. DONATELLO employs deontic logic—a framework focused on permissibility, prohibition, and obligation—to enable AI tools to interpret regulations with explainable reasoning.
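To make the idea concrete, here is a minimal sketch of the kind of rule-with-exception reasoning deontic logic supports, written in Python. It is purely illustrative and not drawn from DONATELLO itself: the rule names, the precedence policy (exceptions defeat the general rules they name, and prohibitions win among undefeated rules), and the record-sharing scenario are all hypothetical assumptions.

```python
# Illustrative sketch of deontic reasoning with defeasible exceptions.
# All rules and the scenario are hypothetical, not from DONATELLO.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    modality: str                 # "obligation", "permission", or "prohibition"
    action: str
    condition: Callable[[dict], bool]   # does this rule apply to the situation?
    overrides: list = field(default_factory=list)  # names of rules this exception defeats

def evaluate(rules, action, situation):
    """Return (verdict, explanation). Exceptions defeat the rules they override,
    and any surviving prohibition outweighs surviving permissions (an assumed policy)."""
    applicable = [r for r in rules if r.action == action and r.condition(situation)]
    defeated = {name for r in applicable for name in r.overrides}
    standing = [r for r in applicable if r.name not in defeated]
    if not standing:
        return "no rule applies", []
    verdict = ("prohibited" if any(r.modality == "prohibition" for r in standing)
               else "permitted")
    explanation = [f"{r.name}: {r.modality}"
                   + (f" (defeats {', '.join(r.overrides)})" if r.overrides else " (general rule)")
                   for r in standing]
    return verdict, explanation

# Hypothetical regulation: sharing a record is prohibited in general,
# but permitted with written consent (an exception to the general rule).
rules = [
    Rule("R1", "prohibition", "share_record", lambda s: True),
    Rule("R2", "permission", "share_record",
         lambda s: s.get("written_consent", False), overrides=["R1"]),
]

verdict, why = evaluate(rules, "share_record", {"written_consent": True})
print(verdict)          # permitted: R2's exception defeats the general prohibition R1
for reason in why:
    print(" -", reason)
```

Because the verdict carries the list of standing rules and which exceptions defeated which general rules, the answer is explainable rather than a bare yes or no, which is the property the program emphasizes.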
Dr. Fodor brings extensive expertise in artificial intelligence, natural language processing, and logic programming to the collaboration. His research background includes semantic reasoning, database systems, and knowledge representation.
Beyond military applications, the technology could transform regulatory compliance in healthcare, financial services, and government administration. Work will be performed across Cambridge, Massachusetts; Baltimore, Maryland; and Stony Brook, New York.
The collaboration places Stony Brook alongside institutions exploring advanced AI applications for national security and public benefit, building on the university’s ongoing work to connect fundamental research with real-world impact.
Read more at the Department of Computer Science website.