‘A Three-Dimensional Orchestra’
Simulating Air Traffic Control Environments
Comments by Neil Waterman, ASTi
ASTi (Advanced Simulation Technology, Inc.) – simulated audio specialists based in Herndon, Virginia, US – is engaged in a Cooperative Research and Development Agreement (CRADA) with the Federal Aviation Administration (FAA) to support the National Aviation Research Plan (NARP) for 2024–2028 in an effort to “develop a path for certification for simulated ATC environments.”
ASTi’s immersive Simulated Environment for Realistic ATC (SERA) product is the industry-leading SATCE (simulated air traffic control environment) solution, with over 300 systems shipped and integrations with more than 20 simulator manufacturers around the world.
The ongoing study will run over three years in multiple phases. The FAA will install SERA on an Airbus A320 flight simulator at the William J. Hughes Technical Center in Atlantic City, New Jersey. The Mike Monroney Aeronautical Center in Oklahoma City, Oklahoma will host SERA on an A330/320 Level D full-flight simulator for evaluation. ASTi anticipates that the initial results of this research will lead to the issuance of FAA policy, guidance, and best practices for SATCE technologies.
ASTi also recently sold two SERA systems to support the first use of SATCE within a Multi-crew Pilot Licence (MPL) training program. The end user is a major Asian airline with extensive experience in the MPL training pipeline, which obtained sign-off from the national aviation authority for the use of SERA in the program. MPL procedures put more emphasis on simulator training, including the use of simulated air traffic control (ATC).
Rick Adams spoke with Neil Waterman, Commercial Aviation Director at ASTi, about their new AI-driven SATCE technology.
Following are topical highlights of Neil’s comments.

THE CHALLENGE OF SPEECH RECOGNITION
We no longer regard speech recognition as an issue at all. And that includes accent recognition. And that’s simply because the technology has improved. Radically. We’ve spent 17 years working on that part of the problem.
I would say that around 2015 things started to improve radically. There was a lot of research being done by Cloud-based solution providers, which is of absolutely no help to flight simulators, which can’t be connected to the internet.
Everybody thinks that speech recognition is easy. You just send it up to the Cloud and you get the results. Well, that’s fine if you can connect to the Cloud. But most flight simulators are completely isolated. And obviously in the military domain, even more so.
So we are able to do all of our recognition locally. We do not rely on external solution providers. It’s all our own technology.
The first sale of our SATCE system was in 2016, largely as a result of the speech recognition problem becoming not completely solved but significantly improved. And since then, we’ve made great, great leaps and bounds forward to the point now where we honestly do not regard the speech recognition as something to worry about.
REPLICATING ATC EXPERTS
We’ve always understood that the hardest part of this, the ATC, was the replication of the expert controllers and the other aircraft operating in the airspace that the ownship is flying within.
In any given flight you might talk to somewhere between 4 and 10 controllers. More than that if you’re going a very long distance. You’re talking to a bunch of experts that understand exactly how this system is supposed to behave. And what you’re supposed to do within that system is also somewhat prescribed. So we’re operating in a very expert and to some extent constrained set of behaviors.
Replicating the intelligence that those experts represent, and their problem-solving on the fly, is the hardest part of this problem. Speech recognition is a problem up to a point. And once you’ve solved that, you realize that using the output from the speech recognition is where the intelligence is needed.
You’re replicating a bunch of experts which may have many years of experience in their field. Our solution has to replicate that intelligence. And that’s where the AI is. The AI lives in the replication of the behaviors of the controllers, their interaction with each other and their interaction with the other aircraft. In some ways, it’s almost like a three-dimensional orchestra where you have people sequencing things in quite a complicated manner.
Pilots are very familiar with how this works. So when you put a system in front of them, you have to be very, very, very close to the 100 percent replication of what they experience in the real world for it to be believable to them. The immersion comes from the fidelity in that representation.
For the complete commentary from Neil, order the paperback or PDF copy of The Robot in the Simulator – Artificial Intelligence in Aviation Training – https://aviationvoices.com/shop/