Head-Up on Eye-Tracking

Qantas Captain Matt Gray describes techniques and the value of eye-tracking for flight simulator sessions

It took a long time for instructors to adopt the technology; their view is that they can do a pretty good job by guessing where the trainee is looking. And that’s perfectly true. You can sit there and have a pretty good idea of where the student is or is not looking.

What they’re saying is, ‘Where would I be looking at this time?’ If the instructor can see an error starting to creep in, the student should actually be seeing it too. So you’d be making a pretty educated guess: they’re not looking at the glide slope, or they’re not looking at something like that. And that’s how they’ll debrief.

I film all of these maneuvers and say, with the eye tracker, ‘This is exactly where you were looking.’ It started to become more and more popular: at least half the instructors are using it on a regular basis, and I use it every simulator session. But it’s taken a while to get there. It has a great deal of value that they probably didn’t realize.

TRAINING AREAS OF INTEREST

I always say to them, we can put you on the eye tracker, if you’re happy with that, and this is how it works. Pilots are highly competitive, and they love data. They want to know how they’re going. They ask: was that a good scan? Do you think that scan was effective, based on the performance you just flew? ‘Yeah, I flew it within the parameters extremely well. Can I have a copy of it, please?’ They are absolutely fascinated with this technology, because they want to know how they’re going in comparison to their colleagues.

If I do a four-hour simulator session, I record about 10 minutes’ worth of footage, anywhere close to the ground. Takeoff, particularly in the early type ratings where you’re just giving them an all-engines takeoff: it’s the rotation rate, and where they’re looking on the HUD is pretty important. Normal takeoffs. Engine failure at V1 or at rotate, which is particularly important because we know pilots tend to over-pitch. And on approach, in particular from about 1,500 feet down, as the ILS [instrument landing system] or the particular maneuver gets closer and closer to the ground. I know they’re going to start making mistakes very close to the minima; at about 200 feet, I know they’ll fly out of the slot.

The other one I almost always film is a circuit: a combination of visual flying and instrument flying. I film about eight minutes’ worth on a circuit, because generally the mistakes a pilot makes will be on base, or they’ll get too low; they won’t be scanning the extended centerline, they won’t be scanning the glide slope. Anywhere close to the ground is where the eye tracker works extremely well as a demonstration.

What they are seeing is the actual behavior. I use the raw data: the actual movement of the gaze across the areas of interest. I don’t use heat maps; they’re of no use to me as a pilot or as an instructor at all. I just look at the actual movement of the eye. What they see is their eyes moving around, and we talk about that. And that’s all you really need as an instructor.

The eye tracker that we’ve got shows a small white cross with a magenta circle around it, and I’ll see where their eyes are actually moving over those areas of interest on the HUD. It’s got a little tail on it, which helps me to see which direction the eye is moving.
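The raw-data view he describes comes down to hit-testing each gaze sample against defined areas of interest on the display. A minimal sketch of that idea in Python; the AOI names and coordinates here are invented for illustration, not the actual HUD layout:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class AOI:
    """A rectangular area of interest in normalised screen coordinates."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Hypothetical HUD areas of interest -- names and extents are assumptions.
AOIS = [
    AOI("flight_path_vector", 0.40, 0.40, 0.60, 0.60),
    AOI("speed_tape",         0.00, 0.20, 0.15, 0.80),
    AOI("glide_slope",        0.85, 0.20, 1.00, 0.80),
]

def label_gaze(samples):
    """Map each (t, x, y) gaze sample to the AOI name it hits, or None."""
    return [
        (t, next((a.name for a in AOIS if a.contains(x, y)), None))
        for t, x, y in samples
    ]

def dwell_counts(samples):
    """Sample counts per AOI; multiply by the sample period for dwell time."""
    return Counter(name for _, name in label_gaze(samples) if name)
```

Replaying the labelled sequence sample by sample reproduces the moving cursor with its trailing tail; aggregating it into `dwell_counts` would instead give the heat-map-style summary he says he avoids.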

PILOT PERCEPTIONS

Pilots aren’t really good at describing their scan. ‘I was looking at that.’ They weren’t. You weren’t looking there at all. They think they were, but they weren’t actually doing it.

If they were looking at it, it comes down to another problem: the eye tracker says they were looking at it, but they may not have perceived the information as it’s displayed. So that leads you down another path — that maybe their workload was too high, or they don’t understand.

The symbology is not particularly well understood by some people. So the eye tracker is great to give a pilot evidence. Absolutely fantastic for evidence. You can fix a problem straightaway in the simulator. A guy said, ‘I can’t break my habit from the PFD.’ And I said, okay, let’s put the eye tracker on you. And he was quite right. He just had no idea where to look to get the information, to put it together for a particular maneuver. So I filmed it. There you go. There’s your evidence. You need to start looking here, here, here and here. And within 15 minutes, we’d solved this problem that he’d carried around for a couple of years with a certain sense of anxiety that he couldn’t do it.

I’m a great believer in the eye tracker fixing a problem in the sim, not in the debrief. The debrief, in my view, is too late.

Half the pilots can give you a fairly in-depth appreciation of what they did. Some of them won’t; they’ll say it was pretty good: ‘I probably rotated a bit fast,’ or whatever. Okay, I’ll replay it: let’s just have a look, and you talk me through how that went… And then I’ll AirDrop them a copy. They’ll be able to analyze it and have a bit of a practice when they go and do armchair flying at home, and it works extremely well.

RESEARCH NEXT STEPS

It’s very early days, but I’d like to have a look at scan behaviour as it develops over time: a longitudinal study looking at a standard line crew, how they go, and also how instructors go, and whether there are similarities. What sort of overlap do we have? Does everyone scan much the same, or are there outliers? I’ve got enough of the data, and we’ve done some trials with one of the universities here, to be able to blend the eye-tracking data and SOQA [simulator operations quality assurance] data.

Gaze data on its own doesn’t tell me very much. But if we blend SOQA data and eye-tracking data together, that’ll give you the whole picture: what they see, what the delta is between where they are and where they want to go, and what the input is going to be. We actually ran a trial and we were able to blend it. So we’ve now got to go to the next step of getting right down into the source: the highly detailed SOQA data and the X and Y gaze plots. That project is underway at the moment.
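At its core, that blend is an alignment of two time series on their timestamps. A minimal sketch in Python, assuming a gaze stream of (t, x, y) samples and a SOQA stream of (t, parameters) records; the parameter names and the 50 ms tolerance are assumptions for illustration:

```python
from bisect import bisect_left

def blend(gaze, soqa, tol=0.05):
    """Join each gaze sample (t, x, y) with the nearest-in-time SOQA
    record (t, params), within `tol` seconds. Both streams must be
    sorted by timestamp; unmatched samples get None."""
    times = [t for t, _ in soqa]
    out = []
    for t, x, y in gaze:
        i = bisect_left(times, t)
        best = None
        # Candidates are the records just before and just after t.
        for j in (i - 1, i):
            if 0 <= j < len(times) and abs(times[j] - t) <= tol:
                if best is None or abs(times[j] - t) < abs(times[best] - t):
                    best = j
        out.append((t, x, y, soqa[best][1] if best is not None else None))
    return out
```

Each output row then carries both where the pilot was looking and what the aircraft state was at that instant, which is the "whole picture" the blend is after.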

We just really want to know how this process works. And if we can understand that, can we accelerate people’s acquisition of skill? Is it possible to make these guys better? What’s the difference between a novice scan and an expert scan? There are differences between the two.

I think where flight training is going to go is this: we’ve got so many pilots that we have to train, and we need that skill acquisition as fast as possible. And I think one of the keys will be pilot scan behaviour.

I can see that eye trackers will probably go into airplanes: eye scan and focus behaviour on an airplane, matched with the training system, so you’ve got a parallel data stream. What you want to know, if you are running a training system, is whether pilots behave in the airplane as they do in the simulator, because you want that transfer of training between the two devices. And the only real way you’re going to be able to tell is if you have a similar, high-quality data set in both: eye tracking and SOQA, eye tracking and FOQA [flight operations quality assurance] data, just to see if the pilots are doing exactly what they’re supposed to be doing.

And that just feeds back into the constant loop of evidence-based training, to make sure your training system develops over time. At the moment, the only way you can really validate the training system is basically through safety statistics and those lower-level areas of validation. I think this next step in aviation is data acquisition.

Excerpted from The Robot in the Simulator: Artificial Intelligence in Aviation Training by Rick Adams, FRAeS – https://aviationvoices.com/shop/