AI and the knowledge loop we may be breaking

Something slightly ironic is happening alongside mass AI adoption.

Platforms that historically produce public knowledge - Stack Overflow (my beloved), Wikipedia, technical blogs - are seeing sustained declines in participation, especially from new contributors (see "How AI decimated this Prosus company", Moneyweb).

At the same time, AI tools trained on that very body of human-generated knowledge are becoming the default interface for answers.

The real question is whether AI is unintentionally breaking the feedback loop that generates new shared knowledge - like the Ouroboros of myth and fable, the snake that eats its own tail.

Historically, knowledge creation worked like this:

  • People struggled with real problems
  • They externalised that struggle publicly
  • Others challenged, refined and extended it
  • The resulting answers became part of a shared record

The process may not have been efficient - but it was generative.

But now AI changes the incentives:

  • Questions are asked privately
  • Answers are consumed privately
  • Failures, edge cases, and insights often go undocumented

The work still gets done - often faster, to be sure - but the explanations, mistakes, intermediate reasoning, and refinements that used to become public knowledge no longer surface.

"AI doesn't kill innovation by replacing humans - it kills it by making knowledge creation private"

AI systems do not generate fundamentally new knowledge. They can produce novel outputs, but novelty is not discovery. New knowledge requires contact with reality: experimentation, failure, falsification, and consequence.

AI does not observe the world. It does not test hypotheses. It does not discover new ground truth.

When AI appears to create new knowledge, it is usually recombining, compressing, or accelerating work that humans have already done - often invisibly.

If fewer humans publish explanations, document failures, or debate ideas in public:

  • The training substrate stagnates
  • Novel edge cases go unrecorded
  • Understanding becomes thinner even as output increases

In other words, we may be optimising for short-term productivity while quietly degrading the system that produces long-term innovation.

So where does this go?

This post was developed through extended dialogue with an AI system - intentionally testing whether AI can help surface second-order insights without replacing human judgement.

Agree. Gen AI is great at augmenting the person prompting (we’ll assume wisdom and best practices are in use). Getting insights out of the model and packaging them for team consumption takes effort. Knowledge sharing has always been challenging. It’s easy to post; much harder to get others to consume unless it addresses something relevant to the receiver (the classic What’s In It For Me problem).
