Human Error by Design

What if the greatest threat to your cyber security program isn't sophisticated malware, but the very humans designing, implementing, and operating it?

This month, we've been exploring a troubling pattern: organisations investing millions in cutting-edge security technology while systematically undermining their own defences through testing approaches, vendor relationships, tool addiction, hiring practices, and leadership behaviours that create more risk than they prevent.

From security teams trapped in "red bubble" notification loops to AI-powered recruitment fraud placing unqualified analysts in Security Operations Centres, we seem to be fighting tomorrow's cyber threats with yesterday's human-centric assumptions.

But there's hope. The same human factors that create these problems can also solve them, if we're willing to acknowledge the patterns and build better approaches.

Together with brilliant industry experts, we've uncovered insights into why these problems exist and how to fix them.

Maturing Supply Chain Risk Management in the AI Era

Security certifications are creating a dangerous false sense of security, particularly as AI systems demand unprecedented access to organisational data.

Dan Haagman's collaborative essay with Clyde Netto, Director and CTSO at Thomson Reuters, reveals how organisations applying rigorous internal security standards routinely accept basic compliance reports when trusting third parties with identical sensitive data, often with catastrophic consequences.

Five problems they identified:

  • Single-factor security — Relying on SOC 2 and ISO 27001 certificates is like using single-factor authentication—there's a gate with a lock, but zero visibility into what's actually happening behind it.
  • AI's data appetite — Where traditional integrations accessed specific datasets, AI systems now require comprehensive organisational access: entire email systems, document repositories, and decades of historical archives.
  • Supply chain of supply chains — Modern vendor assessments reveal cascading dependencies where "everything connects to a certified third party, then another certified third party," creating risk exposure that compliance reports cannot capture.
  • Questionnaire theatre — Generic DPA questionnaires have become meaningless box-ticking exercises, with some providers now handling cyber security assessments on clients' behalf to "lift the burden".
  • Professional maturation — The cyber security field remains trapped in organisational thinking rather than collaborative professional development, limiting industry-wide progress on modern threats.

But the authors are not just talking about problems. They’re also offering actionable solutions based on their extensive experience. Read them below.

This vendor assessment crisis connects to an even more widespread problem: the very tools we depend on for security visibility are systematically corrupting judgment and priorities.

How Cyber Security Tools Are Hijacking Our Priorities

If our testing and vendor assessment approaches are flawed, the tools designed to support these functions are actively making things worse.

In a valuable collaboration, Chaleit's CEO Dan Haagman and Brock Maus, SVP of Information Technology at NAU Country Insurance Company, explain how security platforms exploit the same psychological mechanisms as social media to create addictive cycles of reaction without reflection.

Insights about tool-driven distraction:

  • Dopamine-driven security — Security dashboards are deliberately designed like smartphones, with red bubble notifications that exploit brain reward systems and pull teams into reactive cycles.
  • Productivity paradox — Security teams now spend more time managing tools than actually defending against threats, with the average organisation juggling 60-70 security platforms.
  • Invented work — New security tools don't solve problems, they create workloads that dominate planning and resources, forcing teams to hire engineers just to manage tool-generated noise.
  • Red bubble trap — While teams obsess over dashboard metrics, the most dangerous vulnerabilities (misconfigurations, exposed credentials, basic authentication failures) exist outside expensive security programmes entirely.

As Brock observed, "You think you're buying visibility, but what you're actually buying is a workload that creates tasks to dominate planning rather than addressing meaningful risks."

The solution requires deliberately creating "space to think", among other ideas Dan and Brock discussed. Find them below.

These systemic issues with tools and vendor relationships create a perfect storm when combined with broken approaches to security testing, which brings us to practical solutions.

From Perimeter to Context-Driven Validation

Given the vendor assessment gaps and tool-driven distractions we've explored, how can organisations build security validation that actually reflects reality?

Our pen testing methodology investigation reveals how to move beyond CVSS-driven theatre toward testing that distinguishes between theoretical exposure and practical risk. This comprehensive article builds upon our previous penetration testing guide and the smart buyer’s perspective.

Here is the main problem: most traditional penetration testing operates in a vacuum, hunting vulnerabilities without understanding the environment in which they exist.

Gaps in current testing approaches:

  • CVSS scoring — A 9.8-rated vulnerability in a sandboxed environment poses minimal risk, while a 6.5-rated issue on a DMZ system connected to core business processes represents catastrophic exposure—yet traditional testing treats them identically.
  • Internal architecture blindness — Most testing treats internal network access as "game over," ignoring that real damage happens during lateral movement through flat networks with excessive trust relationships.
  • Attack surface evolution gap — Traditional perimeter-focused testing completely misses cloud IAM misconfigurations, API security contexts, and distributed workforce attack vectors where actual breaches occur.
  • Detection validation void — Testing identifies vulnerabilities but never validates whether your SOC can detect exploitation, leaving organisations with extensive vulnerability lists but zero insight into actual attack detection capabilities.
  • AI security testing blind spot — As AI systems require unprecedented data access scope, traditional methodology cannot address cross-site scripting via AI, system information disclosure, and authorisation control bypass.

The methodology we propose requires shifting from generic vulnerability hunting to context-aware validation, which integrates business understanding, architectural reality, and threat intelligence.

Only by testing how vulnerabilities can actually be exploited in your specific environment can you distinguish between theoretical exposure and practical risk.
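The idea of weighting raw severity scores by environmental context can be sketched in code. The sketch below is purely illustrative: the factor names, weights, and formula are assumptions for demonstration, not part of any published methodology.

```python
# Hypothetical sketch: context-aware risk scoring.
# All factors and weights here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    cvss_base: float          # 0.0-10.0 CVSS base score
    internet_exposed: bool    # reachable beyond the perimeter?
    business_critical: bool   # tied to core business processes?
    lateral_paths: int        # onward trust relationships from the host

def contextual_risk(f: Finding) -> float:
    """Weight a raw CVSS score by environmental context (illustrative)."""
    score = f.cvss_base
    score *= 1.5 if f.internet_exposed else 0.5   # sandboxed vs DMZ exposure
    score *= 1.4 if f.business_critical else 1.0  # business-impact weighting
    score *= 1.0 + min(f.lateral_paths, 5) * 0.1  # flat-network amplification
    return round(min(score, 10.0), 1)

# The article's example: a 9.8 in a sandboxed lab vs a 6.5 on a
# business-critical DMZ host with lateral movement paths.
sandboxed = Finding(cvss_base=9.8, internet_exposed=False,
                    business_critical=False, lateral_paths=0)
dmz = Finding(cvss_base=6.5, internet_exposed=True,
              business_critical=True, lateral_paths=4)
```

Under these assumed weights, the sandboxed 9.8 drops to a moderate score while the 6.5 DMZ finding is amplified to the top of the scale, which is the inversion of priorities that generic CVSS-ranked reports miss.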

Read the full article for solutions and implementation guidance.

For the shift to better security to happen, organisations must escape the annual compliance ritual that provides false confidence while missing real risks.

From Pointless to Practical in Pen Testing

Given the methodology problems, vendor assessment gaps, and tool-driven distractions, how can organisations escape false confidence and move towards effective pen testing?

We talked to Jim Newman, CISO at Capco, who provided valuable insights into why most penetration testing fails to deliver genuine value, and how to transform it into a strategic security capability.

Five insights for practical security testing:

  • Establish continuous validation — Replace annual snapshots with ongoing security validation that adapts to your continuous deployment cycles and infrastructure changes.
  • Build partnership-based relationships — Move from project-based procurement to retainer arrangements that provide immediate access to security expertise when urgent questions arise.
  • Scope for business impact — Focus testing on critical business processes and data flows rather than arbitrary system boundaries.
  • Demand context-rich findings — Require testing reports that explain vulnerability impact within your specific architecture, existing controls, and business context rather than generic CVSS scores.
  • Enable developer collaboration — Ensure testing teams work directly with development teams to validate fixes immediately and provide ongoing security guidance rather than annual criticism.

This approach transforms testing from a compliance checkbox into what Jim Newman described as "an extension of what we have": an ongoing security capability that strengthens your defences rather than simply documenting weaknesses.

However, even the best testing approaches cannot compensate for a more fundamental vulnerability: the people operating our security programmes may not be who we think they are.

How AI is Turning Hiring Into a Security Vulnerability

While we focus on testing methodologies and vendor assessments, a more insidious risk walks through front doors every day: people hired through increasingly compromised recruitment processes.

Dan Haagman’s discussion with talent expert Anton Roe uncovers a disturbing trend where AI is systematically removing human judgment from hiring decisions, creating a dangerous insider threat vector.

Four alarming discoveries:

  • Robot vs robot — We now have AI writing CVs specifically designed to fool AI screening systems, with human assessment systematically eliminated from the process.
  • The will vs skill crisis — Critical thinking and determination matter more than credentials in cyber security, but AI cannot measure the "will" to dig deeper when something doesn't look right.
  • Flawed incentive structures — With 95% of recruitment agencies having fewer than five employees and paid only for outcomes, there's pressure to fill SOC positions quickly with potentially unqualified candidates.
  • Supply chain recruitment risks — Your cyber security vendors may be providing people who aren't who they claim to be, creating an entirely new category of attack vector.

Anton's observation offers food for thought:

"Some of the leading indicators of compromise in major organisations come from the lowest level." If we're hiring people who can't think critically because they've AI-enabled their way through recruitment, we're systematically weakening our first line of defence.

Read the full piece for more valuable insights into how to deal with this issue.

This hiring crisis affects not just individual contributors but extends to the leadership level, where CISOs face their own set of challenges in building credible security programmes.

From Zero Budget to Strategic Partner

Even with qualified people in place, security leaders face the challenge of transforming from "Department of No" to strategic enabler — a transformation that's essential for survival but difficult to execute.

Josh Fulford's conversation with Olivier Busolini, Group Head of Information Security at Mashreq, shows how modern CISOs are fundamentally rewriting the playbook on security leadership.

A few key highlights (read the full article for more):

  • Zero to hero budget transformation — Leading from the front and personally attending business projects transforms perception from overhead to enabler, but it requires the CISO to show up, not just send delegates.
  • Strategic agility over rigid planning — "The business is far more agile than a five-year plan," Olivier observed. Successful security programs pivot priorities every six months while maintaining strategic direction.
  • Change management gap — Technical rollouts succeed, but most organisations completely miss the human element of taking entire workforces on a cyber security journey they never signed up for.
  • 18-month reality — Real cultural transformation in cyber security takes over a year and a half of relationship building — there are no shortcuts to trust and credibility.

The change requires patience, personal investment, and what Olivier called "following your passion", the internal drive that sustains leaders through inevitable difficult periods.

However, even successful CISO transformations can be undermined by subtler forces operating below conscious awareness.

Silent Erosion: When Human Factors Undermine Security Programs

All the technical improvements, leadership transformations, and process optimisations we've discussed can be slowly destroyed by human factors that operate below the surface.

Dan Haagman’s live conversation with CISO Lee Barney revealed how even brilliant security initiatives fail not through dramatic incidents, but through a thousand small compromises that accumulate over time like coastal erosion.

Five patterns of programme erosion:

  • De-scoping — Technical debt and legacy systems (the very elements most likely to enable compromise) get systematically removed from programme scope to ensure "success", often without informing security teams.
  • Washing machine effect — Organisations repeat failed initiatives because institutional memory departs with key personnel, creating endless cycles of rediscovering identical problems.
  • Trust but verify — Processes that appear robust on paper have quietly deteriorated in practice.
  • Vulnerability paradox — In a field where security leaders must project confidence against threats, admitting mistakes feels career-limiting, yet authentic vulnerability creates the psychological safety needed for teams to surface problems early.
  • Stress inoculation vs catastrophisation — Building genuine resilience through preparation and scenario planning, while avoiding the endless "what if" spiral that becomes counterproductive.

Lee's military insight proved particularly valuable:

"You can build up an inoculation to stressful situations by going through stressful situations." The key is preparation and simulation, not endless worry about theoretical scenarios.

Breaking the Cycle

Breaking this cycle requires uncomfortable changes. We must admit that current approaches aren't working, challenge assumptions that feel fundamental, and invest in capabilities that resist easy measurement.

Most importantly, we need to remember that cyber security is ultimately a human endeavour, one that succeeds or fails based on the judgment, creativity, and resilience of the people involved.

The technology will continue evolving rapidly. Compliance frameworks will proliferate and become increasingly complex. But the human factors that determine whether security programs actually work or simply appear to work will remain constant.

That's where the real work begins.

At Chaleit, we work as strategic partners with organisations ready to move beyond mere compliance toward genuine security capability. If these insights resonate with your experience, we'd welcome the opportunity to continue the conversation.
