Academy Cadets ‘Embracing Data’
A Conversation with Acron Aviation Data VP Mitesh Patel
The data analytics business of L3Harris Commercial Aviation Solutions – essentially the 2019 acquisition of Flight Data Services – focuses on providing flight data monitoring (FDM) solutions to airlines for operational safety monitoring. Mitesh Patel is VP and GM, Flight Data Analytics.
L3Harris CAS also produces flight simulators for various airlines, operates type rating and recurrent training centers, and runs an ab initio academy. In November 2023, private equity firm TJC L.P. announced that ‘an affiliate of the Resolute Fund VI’ had entered into a definitive agreement to acquire CAS from L3Harris.
The business was re-branded in March 2025 as Acron Aviation.
Rick Adams spoke with Patel about CAS data capabilities, including applying data analysis for academy cadets.
Rick Adams: Is the focus primarily airline operations?
Mitesh Patel: We work with various airlines: they download data from their flight data recorders, we analyze it, and we provide feedback to them around their operational safety. It complements our avionics business, where we manufacture flight data recorders for Airbus, Boeing, and other aircraft types.
But we have deployed it in our flight training academy, in Sanford (Florida, US), for example. We are now monitoring all the aircraft that are used for cadet training. One of the things we’re finding is it’s really helped enhance safety, and that was a primary driver for implementing it, especially when cadets start their solo training. It gives more visibility into what’s actually happening in the aircraft. You don’t normally have that information once the aircraft is airborne, right? You don’t get the detailed information on what the student was actually doing, how they’re performing, how the aircraft was being flown for the whole duration of the flight.
We started seeing some great outputs as a result. One of the things we’re finding is that the cadets are now embracing data. They’re very happy to look at the data, look at their flights, and review them with their instructors to identify training opportunities – areas where they can improve or rectify the way they flew the aircraft.
The other part is the drive to create cadets who are ‘cockpit ready’ – a phrase we use – so we are instilling in them a culture that helps them when they transition to working for an airline: safety reporting of events, self-reporting, etc.
As we mentioned that we were implementing this, all the cadets were really excited because – maybe it’s a generational thing – they just seem to be very open to having data recorded and using that data to give them feedback on what they’re doing. They’re not averse to having data captured, and have no fear of it being used in any sort of negative way. And they actually ask: can I look at my flight? How did I perform? Can I measure myself against other cadets? It’s quite refreshing that they’re not worried about the data being used for punitive purposes. They’re open. They’re embracing it for positive feedback and positive outcomes.
Rick Adams: What trends are you finding from the data in the trainer aircraft?
Mitesh Patel: You might get a group of cadets that are not performing well in particular flight phases, maybe approach and landing or taxiing, etc. We started trying to correlate that with instructors or with the background of their training. And very quickly you can start seeing trends in identifying what the root cause is. And a lot of times it may not be the cadet themselves. It could be the way they were trained leading up to that particular maneuver.
Rick Adams: Does the data play into grading?
Mitesh Patel: One thing we’re trying not to do is use data to grade individual trainees but use the data to identify trends and patterns, which then allows us to identify opportunities on how we can improve the way we deliver training and the effectiveness of training.
I think in the past, or even now, there’s been a drive to use data from simulators to grade individual trainees. Although that is definitely possible, I’m not sure that it delivers the outcomes that really drive improvements in training or effectiveness of training.
Rick Adams: In the data that you have been collecting from the solos, have you changed any part of the curriculum, modified a course to reflect what you’ve learned?
Mitesh Patel: We’ve definitely started looking into some of the areas where we can enhance instruction to cadets. Maybe add some additional areas of pre-flight briefing, for example.
One of the things we found was that, when it comes to weather, cadets were in some cases going up without necessarily having been fully briefed on the weather conditions. And when they come in for approaches or landings, or even takeoffs, the weather – whether it’s gusty wind or rain showers or whatever – was having an impact on their performance. So one of the things we did was improve some of the pre-flight briefing. If we can identify somebody struggling with a particular maneuver, then the instructor will work with them to identify how they can improve that particular area of performance.
Rick Adams: You’re working on some products for roll out.
Mitesh Patel: Primarily around CBTA – competency-based training and assessment. The challenge there has been: how do we standardize the training, standardize the assessment? Some of the products we’re trying to put in place are there to help instructors with delivering training and conducting assessments, and to enable monitoring across a group of instructors – their gradings for particular training modules – so we can ensure consistency across the instructor pool.
Rick Adams: What sort of data is going to help you there?
Mitesh Patel: Primarily around a particular maneuver, how they grade the students for particular competencies and then making sure that correlates with the performance of the students. Even though we collect objective data from the simulators for debrief purposes, we’re not using that data to automatically grade the trainee. That is still left up to the instructor to provide that level of assessment.
The data that’s been collected complements the information that instructors noted down during the training.
Rick Adams: What is the ideal – a blend of data and instructor?
Mitesh Patel: In my view, the blend would be where the instructors are using the tools available to capture their grading, their notes, etc., digitally so we can create a pool of data to understand across the pool of instructors, what is the norm and what are the outliers, and then working with instructors to effectively bring everybody to within plus or minus whatever the tolerance is that we’re going to work within.
In terms of the objective data, it’s capturing how a student performed during a particular maneuver, then correlating that with the notes and the assessment the instructor made.
Rick Adams: Is there any kind of resistance from the instructors?
Mitesh Patel: With any change, with anything new, there will be obviously some level of trepidation, concern. That’s just human nature. Change is always difficult for some people. I wouldn’t say everyone. Some people.
Some instructors are extremely happy to embrace it. They see it as a positive trend. Others will see it as too onerous and maybe not something that they want to work with. But ultimately, the industry is moving in that direction; we do need instructors to embrace new technologies, new ways of delivering training, and also new ways of assessment. And that’s for the benefit of the trainee.
If you have different instructors grading the same person differently, then you’re not really sort of driving standards up within the industry. And this really is about increasing safety, improving standards. The saying goes, if you can’t measure something, you can’t improve it. And this is all about measuring performance to help improve standards.
Rick Adams: What sort of objective data are you capturing that the instructor is not necessarily capturing in the traditional way?
Mitesh Patel: You can capture all sorts of data off the simulators, any sort of performance parameter you want to capture. You can then derive additional parameters from the ones you’ve captured. It just gives the instructor a bit more detail as to what was going on at the time in the cockpit – both sides, the pilot in command and pilot monitoring. When you go through a debrief, you’ve got the information there to correlate with the notes they’ve taken and the assessment they gave, which then gives the instructor the opportunity to reassess the assessment. It may be they were being too harsh because they hadn’t seen something that had happened or too lenient because of something that they hadn’t noticed, which could have led to a negative outcome. It’s there just to help provide more context and more visibility to exactly what’s going on in the flight deck.
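To make “derive additional parameters from the ones you’ve captured” concrete, here is a minimal sketch of the idea – not Acron’s actual pipeline. The function name, the 1 Hz sample rate, and the altitude trace are all assumptions for illustration:

```python
# Hypothetical illustration only: deriving an extra parameter (vertical
# speed) from a recorded one (altitude), as might be done with simulator
# or flight data. Sample rate and values are invented for the example.

def derive_vertical_speed(altitudes_ft, sample_rate_hz=1.0):
    """Approximate vertical speed (ft/min) from successive altitude samples."""
    vs = []
    for prev, curr in zip(altitudes_ft, altitudes_ft[1:]):
        # ft per sample -> ft per second -> ft per minute
        vs.append((curr - prev) * sample_rate_hz * 60.0)
    return vs

altitudes = [1000, 990, 975, 955, 930]  # 1 Hz altitude trace, feet
print(derive_vertical_speed(altitudes))  # [-600.0, -900.0, -1200.0, -1500.0]
```

The same pattern – difference, scale, smooth – yields many derived parameters (rates, accelerations, deviations from a target profile) that give the instructor more detail in the debrief.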
I think with Boeing and Airbus embracing CBTA going forward, we’ll see it being adopted widely within the training centers. That’s a real positive, that the two big OEMs are really driving this forward. I think it’s to the benefit of the whole industry that they’ve taken the lead on this and are bringing the regulators and the various heads of training within the airlines themselves along the journey, which then helps manufacturers like ourselves to develop technologies that are going to be adopted. What we don’t want to do is develop really clever technology that could be used, but the regulatory framework isn’t there to allow us to deploy that technology into the field.
Rick Adams: Do you see the possibility of matching up your flight data analysis with the simulator data and getting some sort of correlation between the simulator training session performance and the actual aircraft performance of the pilot?
Mitesh Patel: Absolutely. We’d be looking at trends rather than specific pilot performance because conditions would be completely different in real life versus what is a relatively controlled environment in a simulator. We have captured data on simulators and overlaid it on operational data that we’ve captured. So the technology is available and we can do that. It’s really understanding what benefits it brings.
At the end of the day, we look at operational data to identify areas where there are potential opportunities for the airline to improve its performance. Start seeing trends across a whole group of pilots, for example, or this particular challenge at an airport – they’re constantly landing long or landing short. We can then take remedial action and develop a particular training module for that in the simulators, and then compare that again to see if that’s had an impact in real operational performance. It’s closing that loop between operational performance and remedial training or additional training that can be provided to the pilots.
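The “closing the loop” step described above is, at its simplest, a before-and-after comparison of an event rate once a remedial training module has been deployed. A toy sketch, with made-up numbers (the long-landing example and all figures are invented for illustration):

```python
# Illustrative only: compare an event rate before and after remedial
# training. The event type and counts are made up for the example.

def event_rate_per_1000(events, flights):
    """Events per 1,000 flights."""
    return 1000.0 * events / flights

# e.g. long landings at one airport, before vs. after a new sim module
before = event_rate_per_1000(events=42, flights=6000)
after = event_rate_per_1000(events=18, flights=5500)
print(round(before, 2), round(after, 2))  # 7.0 3.27
```

In practice one would also test whether such a drop is statistically meaningful, but the loop itself – operational trend, targeted training, re-measured trend – is exactly this comparison.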
Rick Adams: You’re using FDM primarily as a back room analysis for the big picture. Are you trying to create an animation for a flight crew?
Mitesh Patel: We do both. We provide individual flight information and animation of the flight itself for review. We also provide aggregated data sets so the airline can see trends across aircraft types, across fleets, across specific airports. There’s a lot of analysis that can be performed on the data set that we have.
Rick Adams: For an individual flight, what’s the turnaround time to present something to the pilots?
Mitesh Patel: We can complete the analysis within four minutes of the data being uploaded to our servers. It can be pretty instantaneous, as long as the hardware has been installed in the aircraft to transmit the data wirelessly. Once the aircraft is at the gate, we can post the data, turn it around, and have it available at the airline’s safety office within four minutes.
We’re creating an application the pilots will have on their iPads. At the end of the flight, they can log in and have access to their flights, including the one that they’ve just completed. Each pilot can measure how they’ve been performing over the last month, week, day, and across different sectors.
Rick Adams: Does any of this fall into the realm of AI or machine learning?
Mitesh Patel: We’ve deployed machine learning to some extent to do the analysis on the data. We start looking at trends or identifying events within the data that’s been uploaded to us. So instead of having a human go through and perform the analysis, we’ve developed algorithms that automatically churn through the data and look for trends and then present the data accordingly. It’s comparing with parameters and so forth.
Then it starts looking for trends and effectively self-learns: once our analysts say this is what to look for, the machine starts building its own algorithms to detect those sorts of conditions.
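Before any machine learning, the backbone of FDM analysis is rule-based event detection: scanning uploaded parameters for threshold exceedances in a given flight phase. A toy version of that idea (the parameter names, phases, and the 1,000 ft/min limit are invented for illustration; real FDM event sets are far richer and airline-specific):

```python
# Toy FDM-style event detector: flag samples where a monitored parameter
# exceeds a threshold during a given flight phase. Names and thresholds
# are illustrative, not any vendor's actual event definitions.

def detect_events(samples, phase, parameter, threshold):
    """Return indices of samples in `phase` where `parameter` > threshold."""
    return [
        i for i, s in enumerate(samples)
        if s["phase"] == phase and s[parameter] > threshold
    ]

flight = [
    {"phase": "approach", "sink_fpm": 700},
    {"phase": "approach", "sink_fpm": 1300},  # high descent rate on approach
    {"phase": "landing",  "sink_fpm": 200},
]
events = detect_events(flight, phase="approach",
                       parameter="sink_fpm", threshold=1000)
print(events)  # [1]
```

Counting such events across a fleet is what surfaces the trends the analysts then investigate – and the learned models are, in effect, discovering richer versions of these rules from labeled examples.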
Rick Adams: Are you looking at AI instructors?
Mitesh Patel: Not at this moment. For me, the AI instructor piece would be more relevant to courseware delivery and ground school where you can start picking up how the student is performing as they go through computer-based training or online learning modules, and then adapt the courseware accordingly.
Implementing that into a full-flight simulator, I’m not sure what the benefits would be at this moment in time or how it would actually be deployed. I’m still a firm believer you need a human instructor working in that sort of environment.
The robotic instructor obviously can deliver training instructions, but can it gauge the emotion of the trainees? If somebody is having a bad day can they work with that person to support them through the training program? For me, that’s the piece where I still believe you need human instructors. It’s more than just delivering training instructions. It is the whole training environment.
Excerpted from The Robot in the Simulator: Artificial Intelligence in Aviation Training by Rick Adams, FRAeS – https://aviationvoices.com/shop/