Verifying Neural Networks to Avoid Air-to-Air Collisions

When we think of aircraft, we typically picture planes manually controlled by human pilots, but unmanned drones are already in the sky. The day is coming when autonomous aircraft may fly us all over the world… but are you ready to board a plane with only a computer as the pilot? Would you trust it not to collide with other planes?

The FAA has been developing the Airborne Collision Avoidance System X (ACAS X), a system that warns pilots of potential collisions and suggests maneuvers to avoid them. However, only in recent years have unmanned aircraft been considered. Just last month, a Russian fighter jet collided with a U.S. Reaper drone, causing it to crash into the Black Sea.

Mathematical proof is one way to address the problem of avoiding aircraft collisions. Stanley Bak of Stony Brook University’s Department of Computer Science is an expert in the emerging field of neural network verification, and he has turned his attention to the challenge.

“Evaluation of Neural Network Verification Methods for Air-to-Air Collision Avoidance” is a research paper that proposes a set of 10 closed-loop properties for the safety evaluation of a neural network compression of ACAS Xu, the unmanned variant of the Airborne Collision Avoidance System. Bak collaborated with researchers from Vanderbilt University, the University of Nebraska, and the U.S. Air Force Research Laboratory to develop and test these properties.

“The goal was to evaluate a neural network control system to check for safety in ways other than just running simulations,” says Bak. “If you design a system and create an accurate simulation model, you can change the initial conditions, run your model, and see what happens, but that cannot really be used to prove safety of the system when you have continuous states, given the infinite number of possible initial conditions and orientations of the aircraft. When running individual tests, maybe you can find some errors, but you will never be able to guarantee that there are no errors. In this work, we instead tried set-based methods to evaluate safety for this type of system.”
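To give a feel for the difference Bak describes, here is a toy sketch (not the actual ACAS Xu networks or the NNV/nnenum tools) contrasting the two approaches: simulation runs one concrete input through a small ReLU network, while a simple set-based method, interval bound propagation, pushes an entire box of inputs through at once and produces bounds that provably hold for every point in the set. All network weights below are made up for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def simulate(W1, b1, W2, b2, x):
    """Simulation: run ONE concrete input through a tiny 2-layer ReLU network."""
    return W2 @ relu(W1 @ x + b1) + b2

def affine_bounds(W, b, lo, hi):
    """Soundly propagate an axis-aligned box [lo, hi] through W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius  # worst-case stretch of the box
    return out_center - out_radius, out_center + out_radius

def verify_box(W1, b1, W2, b2, lo, hi):
    """Set-based analysis: over-approximate outputs for ALL inputs in the box."""
    l1, h1 = affine_bounds(W1, b1, lo, hi)
    l1, h1 = relu(l1), relu(h1)  # ReLU is monotone, so bounds pass through
    return affine_bounds(W2, b2, l1, h1)

# Hypothetical random network and input set, for illustration only.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

out_lo, out_hi = verify_box(W1, b1, W2, b2, lo, hi)

# Every individual simulation must land inside the verified bounds;
# no finite number of simulations could give that guarantee on its own.
for _ in range(100):
    x = rng.uniform(lo, hi)
    y = simulate(W1, b1, W2, b2, x)
    assert np.all(out_lo <= y) and np.all(y <= out_hi)
```

If the verified output bounds stay inside a safe region, safety is proven for the whole input set at once; real tools like nnenum use far tighter set representations than plain intervals, but the principle is the same.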

The 10 closed-loop properties that underwent safety evaluation consist of scenarios with initial state uncertainty, where the ownship aircraft is in the presence of a nearby intruder aircraft. Two safety verification tools were evaluated on these properties: the Neural Network Verification Tool (NNV) and the Neural Network Enumeration Tool (nnenum), which Bak helped develop. They were used to verify safety properties concerning five of the 45 neural networks that make up ACAS Xu.

Both verification tools were able to verify the safety of the ownship aircraft in the presence of an intruder aircraft in all scenarios for the uncertainty being considered, leading the research team to compare the tools. For most scenarios, Bak’s nnenum tool was faster than NNV while handling more initial state uncertainty.

“This study is both a positive result and a negative result,” says Bak. “The importance of this work was that it did not just show that set-based analysis is possible, but it showed the limitations in terms of scalability.” 

Other sources of uncertainty, such as heading and velocity, were not handled in this paper, and Bak points to follow-up work that addresses them.

“This work motivated a follow-up study, published last year at the NASA Formal Methods conference, titled ‘Neural Network Compression of ACAS Xu Early Prototype is Unsafe: Closed-Loop Verification through Quantized State Backreachability.’ That work had a similar goal using a slightly different verification strategy that proved to be more scalable, and we were able to formally check for safety from all possible initial conditions, finding some edge cases where, surprisingly, the system did not work as intended,” says Bak.

It is a harder problem to avoid hostile actions, like the crash between the Russian fighter jet and the U.S. Reaper drone. “Collisions may be unavoidable in the presence of an adversarial intruder aircraft that is moving faster than the ownship, since the intruder can always turn towards the aircraft,” says Bak. “In our analysis, we currently assume the intruder is flying straight and unresponsive, but not malevolent. With these assumptions, the system has a chance of avoiding collisions, but it is still difficult to make sure the system will actually avoid collisions in all cases.”

Additionally, this neural network compression of ACAS Xu has been used as a benchmark in the International Verification of Neural Networks Competition, which Bak helps run.

Safety is a primary concern when it comes to aircraft, especially in the age of AI. Though we may have a long way to go before collisions are completely avoidable, work like Bak’s brings us one step closer to safer air travel, whether piloted by a human or a machine. 

-Sara Giarnieri, Communications Assistant