Human Factors in Engineering Ethics


Summary

Human factors in engineering ethics refers to the study of how human behavior, decision-making, and organizational culture influence ethical outcomes and safety in engineering practice. This concept reminds us that engineering failures often stem from system weaknesses, not just individual errors or lapses in character.

  • Prioritize clear accountability: Establish strong escalation paths and transparent consequences to help people make ethical choices under pressure.
  • Design for real-world adaptation: Build systems and procedures that anticipate human error and support workers’ ability to recognize and communicate risks.
  • Shift focus to system improvement: Move away from blame and investigate how organizational structures and processes can be redesigned to reduce mistakes and encourage ethical behavior.
  • View profile for Chris J. Griffin

    Partnership lead for UK, Ireland & EMEA. Working with businesses in AI Regulatory Compliance, Golf Technology and Golf Course Robotics | AI, GNSS, CRM | Former England Golfer.

    16,655 followers

    The Engineer Who Predicted a Catastrophe (and was ignored)

    In July 1985, Roger Boisjoly wrote a memo that haunts the engineering world to this day. He warned his managers that if they didn't fix the O-rings on the Space Shuttle, it would lead to a "catastrophe of the highest order - loss of human life." Six months later, the Challenger exploded.

    The Night Before the Launch
    On January 27, 1986, the temperature at Cape Canaveral plummeted. Boisjoly and his team at Morton Thiokol knew the math: cold weather made the rubber O-rings brittle. If they didn't seal, the boosters would become blowtorches. He pleaded with NASA and his own management to scrub the launch.

    "Take Off Your Engineering Hat"
    The pushback was immediate. Under intense schedule pressure, a senior manager told the team it was time to "take off their engineering hats and put on their management hats." The engineers were overruled. The launch was a "go."

    The Aftermath of Integrity
    Boisjoly watched the explosion on a TV screen in Utah. He had predicted exactly what happened, down to the second. When he spoke the truth to the Presidential Commission, the retaliation was swift: he was sidelined at work, colleagues stopped speaking to him, and he eventually resigned due to the hostile environment.

    The Lesson for Leaders Today
    Roger Boisjoly spent the rest of his life teaching ethics. He didn't just teach math; he taught the courage to say "No" when the room wants a "Yes." The takeaway for us:
    • Psychological safety saves lives: if your experts are afraid to speak, your "success" is just a countdown to failure.
    • Data doesn't care about deadlines: "management hats" should never replace technical reality.
    • Integrity is lonely: doing the right thing often comes with a cost.

    Roger died in 2012, but his memo is still required reading in engineering schools worldwide. He reminds us that silence is a choice - and sometimes, the most dangerous one we can make.

    #Leadership #Ethics #Engineering #PsychologicalSafety #Challenger

  • View profile for Noel Darcy

    Global HSSE Director leading multi-country aviation and logistics operations, setting enterprise safety standards, SMS, and risk frameworks to prevent serious harm and high-potential operational events.

    13,247 followers

    Safety Breaks Down When We Treat Risk and People as Separate Problems

    Safety research has shown for decades that incidents are rarely caused by a single unsafe act or a single failed control. Studies in human factors consistently demonstrate that accidents occur when system weaknesses and human performance interact - not because people simply “don’t follow the rules.”

    Engineering controls, procedures, and barriers are critical. They reduce exposure and make work safer by design. But research into real operations shows that work is dynamic, pressured, and variable. People adapt constantly to get the job done.

    At the same time, focusing only on behaviour misses the point. Even well-trained, competent people are vulnerable when systems are poorly designed or risk becomes normalised.

    The organisations that reduce incidents most effectively balance both:
    • human-centred system design that anticipates error
    • and capable, supported people who can recognise and speak up about risk

    The shift happens when the question moves from “Who failed?” to “What allowed this to happen?” That’s where real safety improvement starts.

    #Leadership #Safety #RiskManagement #SafetyCulture #HumanFactors #OperationalExcellence #SystemsThinking #WorkplaceSafety #AviationSafety #OperationalSafety #FutureOfWork #JustCulture #LearningCulture #Operations #SafetyLeadership

  • View profile for Jim Woods

    CEO, Seattle Consulting Group | Creator of The Strength Trap™ | Advisor to Leaders on Performance, Conduct & Risk | Architect of the HR Power Model™

    17,832 followers

    Most organizations still treat ethics as a character issue. Hire better people. Provide training. Reinforce values.

    Yet major ethical failures rarely begin with openly unethical individuals. They begin with ordinary professionals operating inside poorly designed systems.

    Unethical behavior often emerges when organizations unintentionally create three conditions:
    • Authority without visible accountability
    • Performance pressure without procedural guardrails
    • Standards that exist - but are not operationally enforced

    Over time, people adapt to the system they experience, not the values written in policy. This leads to a difficult but necessary conclusion: ethics ultimately becomes less a question of character and more a question of organizational design.

    When escalation paths are clear, documentation is disciplined, and consequences are predictable, ethical behavior becomes the default outcome - not a heroic exception.

    Culture matters. Leadership matters. But structure decides what actually happens when pressure rises.

    The organizations that avoid ethical crises are not morally superior. They are better engineered.

  • View profile for Martijn Flinterman

    Risk & Safety / Sociology

    8,674 followers

    In Human Error (1990), the late James Reason offers a deep dive into the psychology of human mistakes. Over three decades later, his insights remain relevant, whether you're dealing with medical errors, AI design, or aviation safety. Here are a few lessons I took away:

    Errors happen on three levels
    1. Skill-based (slips/lapses): routine actions gone wrong;
    2. Rule-based: applying the wrong rule to a familiar situation;
    3. Knowledge-based: flawed reasoning in unfamiliar territory.

    Cognitive underspecification
    When situations are unclear, our brain defaults to what seems to fit: “I did it automatically.” These automatic processes, similarity-matching and frequency-gambling, are incredibly efficient but also prone to failure.

    The ‘fallible machine’
    If you build a thinking machine that works like the human brain, it will also make human-like mistakes. Reason’s simulations show that our brain relies on smartly organized, but occasionally error-prone, knowledge networks.

    Error detection: easier said than done
    We quickly correct low-level errors, like posture or speech. But at higher levels, like planning and reasoning, errors are much harder to spot. Timely, high-quality feedback is essential.

    Disasters rarely stem from a single error
    The greatest risks come from latent errors in systems: poor design, weak training, flawed decision-making. Chernobyl, Bhopal, and Challenger were not single catastrophic mistakes but accumulations of small, hidden failures.

    How do we reduce risk?
    - Error-tolerant design;
    - Intelligent decision support;
    - Awareness of error traps in systems and workflows.

    Reason ends on a sobering note: not all risks can be eliminated, especially those emerging from group dynamics and organizational structures. Human error is a byproduct of how our brain normally works. Understanding this is essential to building safer systems, better designs, and better collaboration.

    Reason, J. (1990), Human Error, New York: Cambridge University Press

    #JamesReason #HumanFactors #CognitiveScience #SystemDesign
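    The "accumulation of small, hidden failures" point above lends itself to a quick numerical illustration. The sketch below is not from Reason's book: it treats a few defensive layers as independent (real organisations rarely are), uses purely hypothetical failure rates, and shows why several individually minor latent weaknesses can be as dangerous as one large failure - and why "normalised" drift in a single layer quietly multiplies the overall risk.

    ```python
    # Illustrative sketch only (not from Reason, 1990): an accident requires
    # every defensive layer to fail at once. Layers are assumed independent
    # and the probabilities are hypothetical.
    from math import prod

    def accident_probability(layer_failure_probs):
        """Probability that all defensive layers fail simultaneously."""
        return prod(layer_failure_probs)

    # One strong barrier vs. three individually "minor" latent weaknesses.
    single_strong_barrier = accident_probability([0.001])             # 1 in 1,000
    three_latent_weaknesses = accident_probability([0.05, 0.1, 0.2])  # also 1 in 1,000

    print(f"Single barrier breached:       {single_strong_barrier:.4%}")
    print(f"Three weaknesses aligning:     {three_latent_weaknesses:.4%}")

    # The layered system degrades silently: if one weakness drifts from 0.05
    # to 0.5 as risk becomes normalised, overall risk jumps tenfold with no
    # single visible "error" anywhere.
    after_drift = accident_probability([0.5, 0.1, 0.2])
    print(f"After one weakness normalises: {after_drift:.4%}")
    ```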

  • View profile for Paul Chivers

    Independent Risk Advisor

    4,901 followers

    Human error isn’t the cause of most incidents; it’s the clue that something in the system needs redesign.

    We still spend too much time blaming individuals and not enough time understanding how work is really done, how decisions shape performance, and how design sets people up to succeed or struggle.

    It’s time to shift from fault-finding to system-shaping. When we explore how work is actually performed, not how we imagine it should be performed, we uncover the real levers that improve safety, resilience, and operational reliability.

    This is where mature organisations differentiate themselves: not through blame, but through better design.

    #SafetyLeadership #HumanPerformance #SystemsThinking #SafetyCulture #RiskManagement #DesignForSafety #OperationalExcellence #WorkAsDone #HumanFactors #SafetyII #ResilienceEngineering #LearningCulture #DonNorman #TrevorKletz #WHS #OHS #risky #OrganisationalLearning

  • View profile for Sean Smith

    Founder & Publisher | MedTech, Life Sciences, HealthTech | MDR/IVDR, QA/RA | Leading Voice Program | Worker 🐝

    17,175 followers

    Human factors don’t come up every day, but when they do, they can change how we view design and risk. 👇👇

    In case you missed my interview: last week, I sat down with self-described overachiever Annmarie Nicolson to explore why. Annmarie is the founder of We are Human, a ClariMed Company, recently acquired by ClariMed, Inc.

    We talked about her journey from being laid off at the beginning of the pandemic and starting out as a consultant to:
    - building a team from scratch,
    - opening a human factors lab inside a medical center,
    - and what drives her to keep pushing the field forward.

    One story, in particular, stuck with me. In a validation study, a caregiver in his seventies faced a simulated critical alarm:
    • Canister filled with red liquid 🩸
    • Alarm lights flashing 🚨
    • He said everything looked fine

    Turns out, he was colorblind. The device relied entirely on color cues, without text. Observation revealed what documentation could never show.

    🔥 If you’ve ever had a human factors moment like that - one that changed the way you think about design or risk - share it in the comments. I’d love to hear it.

    Here are 5 more takeaways from my conversation with Annmarie:
    1️⃣ Human factors is a safety discipline. It addresses user-related risks before they reach patients.
    2️⃣ Early-stage research changes outcomes. Contextual inquiry and formative testing prevent issues from locking into the design.
    3️⃣ Observation captures hidden risks. (Like the colorblind caregiver case.)
    4️⃣ Standards strengthen both design and submissions. IEC 62366 and ISO 14971 link usability with risk management.
    5️⃣ Simplicity takes expertise. An intuitive device is the result of deliberate, skilled work.

    📄 Read the full interview below

    #humanfactors #medtech #regulatoryaffairs #qualityassurance #usabilityengineering #mlvx

    Kelley Kendle Victoria S.
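    The colorblind-caregiver story maps onto a familiar design rule: never encode a safety-critical state in color alone. Below is a minimal, hypothetical sketch of redundant alarm coding; the state names, labels, and structure are invented for illustration and are not taken from the interview, any real device, or IEC 62366 itself.

    ```python
    # Hypothetical sketch: each alarm state carries a color, a symbol, and an
    # explicit text label, so a user who cannot distinguish the colors (or a
    # screen reader) still receives the safety-critical message.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AlarmPresentation:
        color: str    # one cue, never the only one
        symbol: str   # shape/icon cue survives loss of color perception
        label: str    # explicit text cue

    ALARM_STATES = {
        "normal":   AlarmPresentation("green",  "OK", "SYSTEM NORMAL"),
        "caution":  AlarmPresentation("yellow", "!",  "CHECK CANISTER"),
        "critical": AlarmPresentation("red",    "!!", "CRITICAL: STOP AND GET HELP"),
    }

    def render(state: str) -> str:
        """Return the full redundant presentation for a given alarm state."""
        p = ALARM_STATES[state]
        return f"[{p.symbol}] {p.label} (indicator light: {p.color})"

    if __name__ == "__main__":
        for state in ALARM_STATES:
            print(render(state))
    ```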
