PROLOGUE
The Robot in the Simulator: Artificial Intelligence in Aviation Training
By Rick Adams, FRAeS
Discussions, debates and doomsaying about AI are dominating tech news and spilling into public conversations. It seems that AI is everywhere. Its ubiquity commands attention.
As a journalist, I am inherently skeptical of hype, including words such as ‘transform,’ ‘revolutionary’ and ‘metaverse.’ Rarely do lofty expectations come to fruition, though innovators and investors spend fortunes chasing their dreams.
In the first half of 2024 alone, Big Tech companies Alphabet / Google, Amazon, Meta / Facebook and Microsoft spent US $106 billion on AI. Analysts expect up to $1 trillion in data center infrastructure within five years. Worldwide spending on AI, including AI-enabled infrastructure, applications, and related IT and business services, will more than double by 2028 to $632 billion, according to International Data Corporation.
The global artificial intelligence market in the aviation industry, a mere $728 million in 2022, is estimated to reach $23 billion by 2031.
THE PROMISES and PERILS of AI
Some think Artificial Intelligence will radically transform technology across many domains – design, manufacturing, supply chains, financial and other transactions, healthcare, scientific research, transportation… and it well may as the hype wears off and pragmatism prevails.
The AI that most people have been exposed to – beta versions of enhanced internet search engines, annoying website ‘chat bots’ that offer help but usually frustrate, voice-activated ‘personal assistants’ like Alexa that deliver time, temperature and music – belies the enormous potential of so-called artificial intelligence.
AI does offer the opportunity to solve complex problems, help make smarter and faster decisions, analyze data to spot trends and results humans might not detect, improve education by personalizing curricula and lesson plans, enhance customer experiences, diagnose medical issues earlier, automate repetitive and tedious tasks (avoiding human error and injury), help preserve the environment, even save lives (by accurately predicting natural disasters such as hurricanes and tornadoes).

The new ‘sky is falling’ dystopian fear is ‘the robots are coming.’ At the Yale CEO Summit, 42% of chief executive officers surveyed said AI has the potential to destroy humanity within five to ten years. We won’t be able to control rogue AI robots because anything we think of they will have already anticipated… a million times faster than our non-NVIDIA brains.
Others warn that millions of people will lose their livelihoods because of AI advancements. Goldman Sachs says 300 million full-time jobs could be lost by 2030. Accountants, customer service reps, salespeople, analysts, insurance reps, retail clerks…
Those who supposedly will not be replaced: teachers, lawyers and judges, psychologists, surgeons, human resources managers, executives, artists and writers, according to a Nexford University white paper.
The six highest-paying AI jobs: machine learning engineer, AI engineer, data scientist, computer vision engineer, natural language processing engineer and deep learning engineer. (All in short supply currently.)
By 2030, at least 14% of employees globally could need to change their careers due to digitization, robotics, and AI advancements, according to McKinsey Global Institute.
A potential danger to everyone is AI’s insatiable consumption of energy and water. For example, a single ChatGPT request uses nearly 10 times as much energy as a typical Google search. And millions of gallons of water are needed to maintain optimal temperatures for cooling data center servers. A lawsuit revealed that as OpenAI finished training the GPT-4 model, the cluster used about 6% of an Iowa district’s water. Facebook AI researchers call the environmental effects the proverbial “elephant in the room.”
Concerns about AI, according to a Forbes article, include lack of transparency (the black box syndrome), bias and discrimination embedded in training data, data privacy issues, security risks, ethical dilemmas, misinformation and manipulation (examples, deep fakes and social media trolls), dependency on AI, loss of human connection, legal and regulatory challenges, and the ever-present ‘unintended consequences.’
Some also raise alarm about concentration of power with AI dominated by a small number of large corporations and governments, which could further exacerbate economic inequality. Or an ‘AI arms race’ between nations.
THE ESSENCE of AI
Artificial Intelligence is not artificial and it is not intelligent. It is a very good marketing moniker because how else would people get excited about data processing and data analytics? Coined at the 1956 Dartmouth Conference, the phrase won out over the less-than-riveting ‘automata studies.’
In the simplest of terms, AI is a branch of computer science. An evolution of the chain from ENIAC (Electronic Numerical Integrator and Computer) and Alan Turing’s Enigma-deciphering ‘Bombe’ in the 1940s to high-performance computing, supercomputing and ‘Big Data’ of the past half-century.
The most sophisticated data analytics yet, but it’s all ultimately crunching 1’s and 0’s. Not magic, but often brilliant engineering.

Numbers are numbers, of course. But so are digital images, digital video and digital audio. When someone prompts an AI-generated image with a program like Midjourney, the pixels (picture elements) of the source images on which the algorithm has been trained are all decoded into mathematical bytes. Then re-encoded for the new, ‘AI-created’ image. The machine uses statistical patterns in the data to generate images that match the text prompt. The more pixels, the better the image rendition. Simulation visual engineers have known this since the advent of digital image generators in the late 1970s by Singer-Link and Evans & Sutherland. The process for video and audio is similar. Source to data, repackaged data to rendering.
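The source-to-data, data-to-rendering round trip described above can be sketched in a few lines. This is a toy illustration of the principle that a digital image is just a grid of numbers, not any particular generator’s pipeline; the tiny 2×3 grayscale grid and its values are invented for the example:

```python
# A grayscale "image" is just a grid of numbers (0 = black, 255 = white).
# Generative models learn statistical patterns over such grids, then emit
# new grids that are rendered back into pixels.
image = [
    [0, 128, 255],
    [64, 192, 32],
]

# "Decode" the picture into a flat sequence of bytes...
flat = bytes(value for row in image for value in row)

# ...then "re-encode" the bytes back into a 2-D grid of pixels.
width = len(image[0])
rebuilt = [list(flat[i:i + width]) for i in range(0, len(flat), width)]

assert rebuilt == image  # source to data, repackaged data to rendering
```

More pixels mean more numbers per image, which is why higher resolutions yield better renditions, and why the training data behind these models is so enormous.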
One factor triggering the ‘AI boom’ is the capability to crunch massive amounts of data, much of it derived from sources that were not available in previous eras of supercomputing – the trove of information vacuumed from the Internet, social media posts, eye trackers, biosensors…
The amount of available data has exploded. In just 10 years, from 2010 to 2020, the total amount of new data generated per year grew from 2 zettabytes to more than 64 zettabytes. (A zettabyte is 10²¹ bytes.)
No human, or team of humans, could handle such massive amounts of information. It’s said that a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete.
An OpenAI analysis shows that the processing in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period).
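The gap between those two doubling rates compounds dramatically. A quick back-of-the-envelope calculation (the five-year window is an arbitrary choice for illustration):

```python
# Growth over the same period under two doubling rates: AI training compute
# (~3.4-month doubling, per the OpenAI analysis cited above) versus
# Moore's Law (~24-month doubling).
def growth_factor(months: float, doubling_months: float) -> float:
    return 2 ** (months / doubling_months)

months = 5 * 12  # five years

ai_growth = growth_factor(months, 3.4)      # ~205,000-fold increase
moore_growth = growth_factor(months, 24.0)  # ~5.7-fold increase
```

Over the same five years, compute on the AI curve grows by a factor of roughly two hundred thousand, versus less than six under Moore’s Law.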
AI in AVIATION
As a technology- and data-driven domain, it was inevitable that the aviation community would embrace the hope of AI. Aviation organizations are experimenting with AI in many areas:
- Predictive maintenance – identifying component failures before they become critical, detecting patterns, scheduling proactive actions.
- Flight operations – optimizing flight routes, flight plans, fuel consumption, including factoring air traffic congestion and weather conditions.
- Air traffic management – predicting and preventing congestion and delays, as well as tools for controllers to make real-time decisions.
- Safety analysis – hazard identification, patterns of incidents, risk mitigation strategies.
- Training – candidate selection and personalized skills development for pilots, controllers and maintenance technicians, generation of scenarios for critical tasks and emergencies, regulatory compliance monitoring.
This book focuses on the potential for Artificial Intelligence for Aviation Training. Based on my research and conversations with dozens of innovators across the past year, the community is in the conceptual stage. Some are beginning to transition into early implementation. No one yet has definitive solutions, but there are a number of engineering wizards and training experts very focused on developing them.
The book has two main parts:
- The Regulatory Environment, primarily the activities of the European Union Aviation Safety Agency (EASA) and the US Federal Aviation Administration (FAA). Works in progress… with some overlap.
- Case Studies and Conversations with many of the early adopters. These are the people on the front lines of AI in the aviation training community, both civil and some military. They represent large companies and startups, and range from the Americas to Europe and Asia-Pacific. They are ordered alphabetically so as not to suggest any merit ranking of their efforts.

There is also a Resources section appended with a simple primer on Artificial Intelligence in general plus links to key documents referenced and company websites.
A few things I think I have discovered in the process of this project:
1. Developing a viable AI application for aviation training is not a simple or easy matter. It’s more than opening up ChatGPT and writing a one-line text prompt. There’s a fair bit of manual effort, not only in determining the data sources to be used but also in the collection, structuring and validation testing… before a training organization can put something practical into practice.
2. For that reason, and since training organizations use much of the same base information, it would be beneficial to all for industry stakeholders to collaborate on a common ‘data lake’ of validated information sources and best practices.
3. It’s not necessary to use all the firehose of data that’s available. Much of it may not be relevant to the task. Focus on the data which will produce the most value in terms of performance and efficiency for your particular operation.
4. Data collection can be contentious. Pilots and instructors may be nervous about flaws exposed. However, it is encouraging that younger generations embrace data for learning.
5. Trust of AI is critical for acceptance. And transparency is critical for trust. For a safety-critical domain such as aviation, it’s vital to understand how the AI system works – explainability. It’s okay to build a proprietary system; just please don’t say ‘we don’t know how it works’ if you want users to embrace your app.
AI will continue to evolve… rapidly. Aviation training needs to embrace it as an opportunity to significantly improve how aviation personnel are trained and thereby enhance operational safety. And always, always with humans in the loop.