💡 The 'Legitimate Problem' Gambit
Proportionate response

How we use real problems to justify poor solutions

We’ve all felt it. A serious, undeniable problem surfaces. Misinformation is rampant. Online safety is at risk. Productivity is flagging. The pressure builds, and a powerful consensus forms: "We must do something."

This urgency is the birthplace of a dangerous pattern I call the 'Legitimate Problem' Gambit.

The gambit is a form of soft logic, a fallacy that bypasses critical thinking. It works like this:

  1. Identify an objectively legitimate problem (a fact nobody can argue with).
  2. Propose a solution (often technological, fast, and scalable - it sounds good).
  3. Anchor all debate to the problem, not the solution.

Any criticism of the solution is reframed as ignorance of, or apathy toward, the problem. "Clearly you hate children, puppies, and the elderly." Or, "You're against this AI moderation tool? So you're pro-misinformation?"

This rhetorical trick bullies us into accepting solutions that are expedient, ineffective, unfair, or simply misaligned with the very problem they claim to solve. For the humane technologist, learning to spot and dismantle this gambit is one of our most crucial skills.




[Image: A familiar Gambit]

Three Examples of the Gambit in Action

This fallacy is everywhere in tech policy, product design, and corporate management. It helps to see it in the wild, so here are three examples.

Problem 1: Remote Work Productivity

How can you be sure that your remote workers are being productive little monkeys and not sitting around in their underwear, streaming anime clips on YouTube all day? They say that putting together the quarterly report takes ten hours, but does it?

The Legitimate Problem: Companies are genuinely struggling to manage distributed teams, measure performance, and ensure efficient workflows. They worry about "quiet quitting" and wasting time.

The 'Gambit' Solution: Implement "bossware" or "tattleware." This is surveillance software that monitors employee activity via keystroke logging, mouse movement tracking, periodic screenshots, or even webcam activation.

The Mismatch: This solution provides a metric, but it's poorly aligned with the actual goal. Activity does not equal productivity. Value is not measured in clicks. This approach is ineffective (it encourages "performative work" like jiggling a mouse) and deeply unfair (it obliterates trust, autonomy, and morale, which are the real drivers of high-quality, creative work). It is also a point of no return: once the software is installed, there will be no real trust afterward, and without trust you will never see robust communication.
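The activity-versus-value mismatch can be made concrete with a toy calculation. Everything below is invented for illustration (the function names, the weights, the numbers); no real monitoring product is being modeled:

```python
# Illustrative sketch: why raw "activity" metrics reward the wrong behavior.
# All names and numbers here are made up for the example.

def activity_score(keystrokes: int, mouse_events: int) -> int:
    """What typical monitoring tools can measure: raw input events."""
    return keystrokes + mouse_events

def value_delivered(reports_shipped: int, bugs_fixed: int) -> int:
    """What the business actually wants (and cannot log automatically)."""
    return reports_shipped * 10 + bugs_fixed * 5

# A mouse-jiggler "working" all day vs. a focused analyst in deep work.
jiggler = {"keystrokes": 200, "mouse_events": 9000, "reports_shipped": 0, "bugs_fixed": 0}
analyst = {"keystrokes": 3000, "mouse_events": 400, "reports_shipped": 2, "bugs_fixed": 3}

for name, w in [("jiggler", jiggler), ("analyst", analyst)]:
    print(name,
          "activity =", activity_score(w["keystrokes"], w["mouse_events"]),
          "value =", value_delivered(w["reports_shipped"], w["bugs_fixed"]))
```

The jiggler tops the activity leaderboard (9,200 events to 3,400) while delivering exactly zero value; whatever the surveillance dashboard is optimizing, it isn't the quarterly report.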

Problem 2: Harmful Online Content

The Legitimate Problem: The spread of child sexual abuse material (CSAM), terrorist recruitment, and violent extremist content is a profound and urgent social harm.

The 'Gambit' Solution: Mandate "client-side scanning" or create "backdoors" in end-to-end encryption. This involves placing software on every user's device (phone, computer) to scan all their private messages, photos, and files before they are encrypted and sent.

The Mismatch: This solution is sold as a "necessary tradeoff" for safety. But it is fundamentally ineffective (criminals will simply move to non-monitored, custom platforms), unfair (it treats every single user as a suspect and destroys digital privacy for all), and poorly aligned (it creates a new, massive security vulnerability that state-sponsored actors and hackers will inevitably exploit, making everyone less safe).
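The "every user is a suspect" point is really a base-rate problem, and a back-of-the-envelope calculation shows how bad it gets at scale. Every number below is an assumption chosen for illustration, not a real platform figure:

```python
# Base-rate sketch for mass scanning. All numbers are illustrative assumptions.

messages_per_day = 10_000_000_000   # assume ~10 billion private messages scanned daily
prevalence = 1e-7                   # assume 1 in 10 million messages is actually illicit
true_positive_rate = 0.99           # assume an optimistically accurate detector
false_positive_rate = 0.001         # assume only 0.1% of innocent messages get flagged

illicit = messages_per_day * prevalence
innocent = messages_per_day - illicit

caught = illicit * true_positive_rate            # real hits
false_alarms = innocent * false_positive_rate    # innocent people flagged

print(f"true hits per day:    {caught:,.0f}")
print(f"false alarms per day: {false_alarms:,.0f}")
print(f"odds a flag is real:  {caught / (caught + false_alarms):.4%}")
```

Under these (deliberately generous) assumptions, roughly one flag in ten thousand points at actual illicit content; the other 9,999 are ordinary people's private messages pulled into review.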

Problem 3: Misinformation and Disinformation

The Legitimate Problem: "Fake news" and coordinated inauthentic campaigns can destabilize elections, harm public health, and incite violence.

The 'Gambit' Solution: Deploy massive, opaque, automated AI content moderation systems. These algorithms are trained to identify and "down-rank," "shadowban," or remove content at a scale no human team could manage.

The Mismatch: This is expedient. It allows platforms to "do something" cheaply. But it is ineffective at scale. These systems famously struggle with nuance, satire, and cultural context. Worse, they are unfair, disproportionately silencing marginalized voices, health activists, and artists whose language patterns deviate from the training data, all while offering little to no recourse or transparent appeals process.  




[Image: From the movie Idiocracy. The plants are dying, and watering them with Gatorade sounded good because, you know, 'electrolytes'.]

🧠 The Psychology: Why We Fall for It

The 'Legitimate Problem' Gambit works because it exploits our cognitive bias toward simple, clear, ready-at-hand solutions.

Action Bias: We have a psychological preference for action over inaction, even when inaction is the wiser choice. Doing something feels like progress and gives us a sense of control, even if that "something" is counter-productive.

Imagine, instead of just waiting 15 seconds for the high-speed internet to catch up (the wiser, inaction choice), you must do something. You aggressively mash the pause button, then the play button, then you exit the app and immediately reopen it, only for the video to resume right where it was, a few seconds after you exited. You've wasted 30 seconds, accomplished nothing, and possibly annoyed your partner. But hey, you fixed it! (Except you didn't.)

Cognitive Dissonance: It is deeply uncomfortable to hold two conflicting thoughts: 1) "This is a terrible problem," and 2) "I am doing nothing about it." Accepting any solution, even a bad one, resolves this internal tension. The same dynamic plays out in politics, where being seen to act beats quietly declining to.

Availability Heuristic: The problem is often vivid, emotional, and easy to recall (e.g., a viral story about a child being harmed). The harms of the solution are abstract, statistical, and delayed (e.g., "chilling effects on free expression" or "long-term security vulnerabilities"). We over-index on the immediate, available threat. It’s just cognitively easier.




[Image: Not on Monday]

👥 The Sociology: Why We Deploy It

In groups and organizations, these individual biases combine into systemic failures.

The "Politician's Syllogism": This is the classic logical error:

  1. We must do something.
  2. This is something.
  3. Therefore, we must do it.

This is the logic of performance. It's about being seen to be taking a problem seriously. The press release announcing the "new AI initiative" is often more valuable than the initiative's actual effectiveness. I’ll wait while you look up a few examples in your own org.

Solutionism: Especially in the tech world, we are culturally biased to believe that a scalable, technical (magical) fix exists for every complex social problem. We jump to "What can we build?" before ever asking "Is technology the right tool here at all?"

Misaligned Incentives: The person proposing the solution (a vendor, a product manager) is often incentivized by metrics like adoption, speed, or cost reduction, not by actually solving the root problem. The bossware vendor gets paid whether or not your company's culture rots from the inside out. In many cases, the proposer benefits far more if the problem persists.



[Image: Wait a minute...]

🎯 The Humane Technologist's Response

Our job is to be the friction in this process: the voice that separates the problem from the solution and holds them both up to the light.

When the 'Legitimate Problem' Gambit is played, we must stop and ask the hard questions:

Q1 | Who bears the cost of this solution? 

Are we punishing the many for the actions of a few? In academia, are we passing the cost on to the student; in business, on to the customer?

Q2 | Does this actually address the root cause, or just a symptom? 

Is this a band-aid on a bullet wound? Are we closing half the doors on a submarine?

Q3 | What are the second- and third-order consequences? 

How will this tool be abused once it exists? Humans game systems; it's what they do.

Q4 | How will we measure success? 

And is "activity" the same as "value"? Is it reasonable to measure success in a meaningful way? Can it be measured quickly enough to inform our choices?

Q5 | What is the non-technological alternative? 

Does this call for a policy, a conversation, or a cultural change rather than a new piece of software? Or maybe both, in tandem, requiring coordinated effort across silos?

Pointing at a real problem does not give anyone a blank check to implement a bad solution. 

Our responsibility is not just to "do something," but to do the right thing.





In the spirit of transparency, Brian Arnold used a PC, a keyboard, spellchecker, Gemini, three advanced degrees and electricity to write this post.
