The Deadliest Bugs Aren’t in the Code

Black-box recordings from the cockpit of Air India Flight 171 suggest that the captain cut off the fuel supply to the engines shortly after takeoff, causing the aircraft to lose power and crash, killing all but one of those on board. It's not clear why he did this, which raises the question: was this human error, or a design flaw?

There’s precedent: the Kegworth disaster, 1989

On January 8, 1989, British Midland Flight 92 crashed during an emergency landing. A fire in one engine led the captain to shut an engine down, except he shut down the wrong one. With no power, the aircraft crash-landed near East Midlands Airport. Forty-seven people died.

The investigation found that while the engine malfunction, caused by a fan blade that had fractured through metal fatigue, was a factor, the crash was primarily due to human error. But how could a captain with more than 13,000 flight hours and a co-pilot with more than 3,000 make such a mistake?

An answer could be found buried in the appendices of the Investigation Report

Design

Consider the cockpit of the Boeing 737-400 (a type on which the captain had just 23 hours of experience). The engine vibration indicator that could have alerted him was small, poorly placed, and radically different from that of the earlier -300 series. It used a tiny LED needle around a coin-sized dial, unlike the previous large mechanical pointer, making it less visible, less intuitive, and easy to miss under stress.


Cockpit image taken from the Investigation Report. Analogue dial on the old aircraft (left), digital dial on the new (right)

Instrumentation layout compounded the issue. Engine dials were not aligned with their corresponding power levers, breaking the left-right mental model pilots relied on. Primary and secondary instruments were separated, forcing pilots to mentally transpose data between sides, an error-prone setup, especially under pressure (left-hand image below). While the report acknowledged the trade-offs involved, it emphasised that this layout was far from optimal. An alternative design in the report aligned each dial with its lever, preserving the spatial logic (right-hand image). It's a simple and obvious improvement to make, but sadly, when it comes to design decisions, simple and obvious are often neglected.

Control layout. The picture on the right shows the design improvement suggested in the investigation report

Is it too easy to cut off the fuel?

If design was a contributing factor in the Kegworth disaster, could it be implicated in the Air India crash? At first glance it is certainly a possibility. The fuel cut-off switches sit directly below the engine thrust levers, and the metal guards intended to 'prevent' the switches from being knocked accidentally suggest a work-around for an obvious design flaw. The placement also seems to run against the design guidance usually drawn from Fitts's Law: put frequently used controls close together and easy to reach, and keep apart the controls that should never be operated by accident.


Thrust levers with fuel cut-off switches directly below. Image credit:


But look at this another way. If there is a problem with the engines, a fire for example, you want the pilot to be able to cut off the fuel as quickly as possible. In that case Fitts's Law works in the design's favour; it makes perfect sense to place the emergency control in close proximity to the main controls. That said, those fuel cut-off switches aren't just there for emergencies. Pilots use them to shut the engines down after landing, so cognitively they are not associated with emergency at all.
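
For reference, the usual (Shannon) formulation of Fitts's Law predicts how long it takes to reach and hit a control from its distance and size; the constants are empirical, so any specific numbers are illustrative only:

MT = a + b · log2(D/W + 1)

where MT is the movement time, D is the distance to the control, W is its width, and a and b are constants measured for the particular device and task. Double the distance to a switch, or halve its size, and the index of difficulty log2(D/W + 1) rises: the switch becomes slower to operate deliberately as well as harder to hit by accident. That is exactly the trade-off the two paragraphs above are weighing.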

Human Error Is a Design Problem

A full investigation into the crash of Air India 171 is still underway. Human error may be identified as the cause—but humans rarely make mistakes in isolation. When you keep asking why, the trail often leads not just to the individual, but to the system they were operating within—to design and human factors.

The lessons from Kegworth—and potentially Air India—pose an urgent question for every designer, engineer, founder, and executive: Are we designing systems that account for how humans actually behave—especially under stress?

It’s tempting to blame “human error,” but that’s often a surface symptom. The deeper causes are usually structural: mismatched mental models, confusing layouts, ambiguous controls, and a failure to anticipate how real decisions unfold under pressure.

Designing for failure doesn’t mean eliminating all errors or over-engineering every edge case. It means recognising which mistakes are inevitable, which ones matter, and which could be catastrophic—and building accordingly.

Whether you’re designing aircraft, dashboards, or software products, the principle is the same: Consider the human and their behaviour, especially when they’re at their worst.

Because sometimes, the deadliest bugs aren’t in the code. They’re in the design.

"Are we designing systems that account for how humans actually behave—especially under stress?" such a cool question to ask. We see so many people contact us for information that is accessible in the app (and then automate a lot via Sandy). Listening to the calls you really hear how our customers often just want in reassurance for what is a really an important purchase. Something that could be designed better into our post booking experience.
