Closing the Competency Gap: Scaling CBTA/EBT with Evidence-Based AI Analytics
Across the global pilot training community, the industry has invested enormous effort into designing and implementing Competency-Based Training and Assessment (CBTA) and Evidence-Based Training (EBT). Yet despite strong intentions, well-designed frameworks, and dedicated leadership teams, many operators encounter the same uncomfortable truth: a persistent and widening gap between management expectations and what is actually happening in the briefing room, the simulator, and the debriefing.
This article examines that gap through the lens of practical implementation. It highlights the human factors affecting instructors, the structural weaknesses in how data is collected, and the organisational consequences of distorted assessments. Finally, it explores a forward-looking vision—how digitalisation and evidence-based AI analytics can strengthen instructor performance, enhance training quality, and accelerate the transition from mixed to baseline EBT implementation.
A Promise Undermined by Reality
Most airline training leaders are confident that they have built solid infrastructure for EBT. Their organisations have invested in strong curricula, structured briefing guides, high-quality lesson plans, validated competency frameworks, and even video-supported instructor training.
On paper, everything looks right.
In practice, much depends on what happens once the door closes: in the briefing room, in the simulator, and in the debriefing. These are the moments where CBTA/EBT actually live or die. And here, a new human factor emerges—one that is neither malicious nor negligent, but deeply human: the instructor’s comfort zone.
Instructors are skilled professionals, but they are also subject to workload, fatigue, time pressure, and habit. In a demanding training environment, reverting to familiar patterns is natural. When a new paradigm such as EBT arrives—one that requires deeper observation, structured facilitation, root cause analysis, and high-quality narrative feedback—the burden increases.
The result is a predictable drift from the idealised programme design to a more traditional form of instruction.
The Instructor Human Factor
Aviation has long understood human factors on the flight deck. But CBTA/EBT introduces a parallel dimension: the Instructor Human Factor. Despite training and goodwill, instructors face powerful pressures that influence behaviour:
- Time constraints: delivering multiple sessions back-to-back.
- Administrative load: complex grading, OB entry requirements, and narrative writing.
- Cognitive fatigue: switching between facilitation, evaluation, and coaching modes.
- Risk aversion: fear that issuing low grades may trigger consequences for pilots.
- Comfort with old habits: reverting to fault-listing instead of facilitation.
None of this reflects a lack of professionalism. It reflects the challenge of sustaining a high-complexity, high-cognitive-load training model without strong systemic support.
A Structural Weakness: Grading and Data Capture
One of the most significant contributors to the competency gap lies in how current EBT systems capture performance data.
The Grade 2 Problem
Grade 2 carries stigma. Its descriptor, "Minimum Acceptable Standard", is unfortunate at best.
It also requires more work: mandatory OB recording and narrative comments. For instructors under pressure, avoiding a Grade 2 is an easy path. Many ‘2-all-day-long’ performances are reclassified as marginal 3s—quietly eroding data quality.
A Two-Grade System in Practice
Because Grades 3 and 4 do not require OBs or comments, the result is:
- An informal two-grade system: almost everything becomes a 3 or 4.
- Massively distorted grading distributions—false bell curves and grade compression.
- A data desert for 80–90% of pilot records (one major EBT operator reports that 99% of its pilots fall within the Grade 3 and 4 band).
- Little or no knowledge transfer between instructors.
- Minimal personalised training for the pilot.
This is not just an instructor problem; it is a system design problem.
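The distortion described above is easy to make concrete. The sketch below, using synthetic data and hypothetical function names, shows how an operator might detect the informal two-grade pattern in its own records; the 90% threshold is an illustrative assumption, not a regulatory figure.

```python
from collections import Counter

def grade_distribution(grades):
    """Return each grade's share of the total (1-5 EBT grading scale)."""
    counts = Counter(grades)
    total = len(grades)
    return {g: counts.get(g, 0) / total for g in range(1, 6)}

def is_compressed(grades, threshold=0.9):
    """Flag the informal two-grade pattern: nearly all records are 3s or 4s."""
    dist = grade_distribution(grades)
    return dist[3] + dist[4] >= threshold

# Synthetic session records mirroring the pattern described above:
grades = [3] * 70 + [4] * 25 + [2] * 4 + [5] * 1
print(is_compressed(grades))  # 95% of records are 3 or 4, so this prints True
```

A real implementation would run this per fleet, per instructor, and per competency, since compression can hide in sub-populations even when the overall distribution looks plausible.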
Minimal Root Cause Analysis and Facilitation
Two foundational EBT practices—root cause analysis (RCA) and facilitation—often remain weakly implemented.
Root Cause Analysis (RCA)
Most instructors have never received structured RCA training. When underperformance appears, instructors frequently address surface errors rather than underlying competencies. Without RCA:
- Pilots don’t understand what behaviour truly needs to change.
- Instructors can’t provide specific, actionable insights.
- Training management receives limited visibility into systemic weaknesses.
Facilitation
Facilitation is central to CBTA. Yet many sessions default to traditional patterns:
- A quick “How do you think that went?”
- Followed by a list of errors.
- Few open questions.
- Little guided self-discovery.
For pilots, the resulting debrief is shallow, and little learning transfers to line operations.
The Cost of Mixed Implementation
With distorted data, weak RCA, and inconsistent facilitation, operators remain trapped in mixed implementation. Approximately 75% of EBT-certified airlines have not transitioned to baseline implementation—even years after initial approval.
The cost is significant: studies suggest around €900 per pilot per year in unrealised efficiency benefits once the three-year mixed-implementation window expires. For large fleets, the opportunity cost is substantial.
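To illustrate the scale, the figure cited above can be multiplied out. The fleet sizes below are hypothetical examples, not data from any specific operator.

```python
def annual_opportunity_cost(num_pilots, cost_per_pilot_eur=900):
    """Unrealised efficiency benefit per year after the mixed window expires."""
    return num_pilots * cost_per_pilot_eur

# A 500-pilot operator forgoes roughly 450,000 EUR per year;
# a 3,000-pilot operator, roughly 2.7 million EUR per year.
print(annual_opportunity_cost(500))   # 450000
print(annual_opportunity_cost(3000))  # 2700000
```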
A Vision for Evidence-Based AI Analytics
Imagine a different environment—one where instructors and pilots receive structured support, and where data collection is effortless, rich, and accurate.
1. Rich Competency Data
Digital observations are captured at briefing, during the sim, and in the debrief. These observations are:
- Tagged to competencies and OBs
- Analysed by a consistent, expert CBTA model
- Made available to both pilots and instructors
This provides a full, unbiased picture of performance.
2. Supported Root Cause Analysis
The AI system prompts instructors with:
- “5-Why”-style RCA questions
- Competency-linked causal suggestions
- Structured pathways to identify underlying issues
This ensures that every debrief explores the “why”, not just the “what”.
3. Supported Facilitation
Instructors receive:
- Open, adaptive facilitation questions
- Prompts based on demonstrated and missing OBs
- Guidance for structured, meaningful debriefs
The burden is reduced, yet the quality increases.
4. Automated Drafting of OBs, Comments, and Assessment
Before and during the session:
- The system surfaces relevant competencies and OBs
- Draft assessments are prepared automatically
- Narrative comments are generated for all grades
- Instructors simply review and approve
- If needed, they can add their own observations simply by speaking to the system, in their native language if preferred.
This eliminates tedious manual writing and restores meaningful data capture.
5. Continuous Instructor Development
Using the same dataset, instructors receive:
- Individualised development insights
- Session-by-session feedback
- Reports on facilitation strength, RCA use, and grading patterns
- Objective calibration tools
For training management, this becomes a powerful Continuous Professional Development (CPD) engine for the instructor and examiner cohort.
6. Integrated Data for Safety and Planning
By combining:
- EBT competency data
- SMS and FDM datasets
- Failure/approach clustering
- Other evidence sources
Operators gain a unified safety and training picture that supports regulatory approval for baseline implementation.
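A minimal sketch of that unification step, assuming simplified per-pilot records (the field names, pilot IDs, and values here are invented for illustration, not a real EBT or FDM schema):

```python
def unified_picture(ebt, fdm):
    """Merge per-pilot EBT and FDM records into one training/safety view."""
    pilots = set(ebt) | set(fdm)
    return {p: {**ebt.get(p, {}), **fdm.get(p, {})} for p in pilots}

# Hypothetical per-pilot summaries from two separate systems:
ebt = {"P001": {"grade_mean": 3.1, "weak_competency": "FPM"},
       "P002": {"grade_mean": 3.8, "weak_competency": None}}
fdm = {"P001": {"unstable_approaches": 2},
       "P002": {"unstable_approaches": 0}}

merged = unified_picture(ebt, fdm)
print(merged["P001"])  # combines competency data with FDM events
```

In practice this join runs across far messier sources, but the principle is the same: one pilot-centred record that training, safety, and planning teams all read from.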
The Path Forward
To unlock the full promise of EBT, the industry must address both the structural weaknesses in current grading systems and the human factors that shape instructor behaviour. Digital transformation—supported by carefully designed AI analytics—offers a pathway to:
- Reduce workload
- Improve data quality
- Strengthen instructor performance
- Personalise pilot development
- Move confidently towards baseline EBT
- Realise the long-promised training efficiency and safety gains
AI is not here to replace instructors; it is here to empower them. EBT and AI are natural companions. With the right vision and tools, the industry can close the competency gap and build the next generation of effective, data-driven pilot training.
As JFK once said, “Change is the law of life. And those who look only to the past or present are certain to miss the future.” The future of EBT is within reach—if we choose to embrace it.