Knowledge workers’ future is about building context

You urgently need to adapt your ways of working to be in the best position to embrace AI.

There was a time when notes could be scribbled on scraps of paper and forgotten, and when work could be done in your email client, on whiteboards, or in any app — all in a disconnected way.

A huge shift is now underway in how work is conducted, as AI models continue to advance. I don’t mean to sound alarmist or sensational, but everyone really needs to grasp that this is quickly becoming reality.

What will differentiate us from agents? Why would a company still need humans?

I can’t predict what the post-AGI era will look like.

However, before AGI — in the final phase of work as we know it today — our roles will likely focus more and more on data and context.

To fully leverage AI models, you must be able to translate tacit knowledge (what’s in your head, implicit) into explicit knowledge (what you can express and communicate) in a way that’s natively understandable for AI models.

The shift toward voice-first interfaces will, I believe, become an immensely useful development in this effort — as I previously hinted here.

A good mental model is to view that knowledge as your personal knowledge graph — patiently curated, with data points and insights linked from node to node. Fortunately, Zettelkasten and other “second-brain” methodologies offer unique and proven ways of organizing that knowledge.

At its core, Zettelkasten urges you to turn your research and insights into original notes that push your thinking forward.

You must not only read data — you must re-express it through writing.

This acts as a forcing function, ensuring you’re not just parroting information but actually internalizing it. When you finish writing a note, a crucial step is to connect it with prior knowledge by linking it to other notes.

By doing so, you continually enrich the context — one note at a time.
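The loop described above — write an original note, link it to prior notes, and draw on the linked neighbourhood as context — can be sketched as a tiny graph of notes. This is a minimal illustration, not any particular tool's data model; the `Note` and `Zettelkasten` names are my own invention for the sketch.

```python
from dataclasses import dataclass, field


@dataclass
class Note:
    """One Zettelkasten note: an original thought plus links to prior notes."""
    note_id: str
    text: str
    links: set = field(default_factory=set)  # ids of related, earlier notes


class Zettelkasten:
    def __init__(self):
        self.notes = {}

    def add(self, note_id, text, links=()):
        """Add a note and connect it to existing ones (the crucial linking step)."""
        self.notes[note_id] = Note(note_id, text, set(links))

    def context(self, note_id, depth=2):
        """Walk outward along links to gather the neighbourhood of a note —
        the kind of curated context you could hand to an AI model."""
        seen, frontier = set(), {note_id}
        for _ in range(depth):
            frontier = {
                linked
                for nid in frontier if nid in self.notes
                for linked in self.notes[nid].links
            } - seen
            seen |= frontier
        return [self.notes[nid].text for nid in sorted(seen) if nid in self.notes]
```

Each new note enriches the graph, so `context()` returns more — and more relevant — material over time, which is exactly the compounding effect the method is after.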

This is also why people who embrace LLMs will leapfrog others.

It’s not only that they benefit from multiplied outputs — it’s that prompting fifty times a day forces them to reformulate, and thus to better understand, what they’re working on.

That’s one of the reasons I don’t buy the “LLMs will make us dumber” argument.

Back in the 1960s, Niklas Luhmann — the creator of the Zettelkasten method — implemented it on paper: slips and indexes stored in large physical cabinets.

Today, the digital version can be built in modern note-taking tools. Personally, I use Obsidian.

What was true with the earliest implementations of RAG remains true today: these are garbage-in, garbage-out systems.

If you don’t take care of your data — if your notes are unclear or disconnected — you can feed them to an LLM all you like, but the results will still be poor.

Patiently and consistently curating your insights, drawn from human intuition, will create genuinely new perspectives.

These are what LLMs can leverage to navigate their vector spaces in a more meaningful way.
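To make the garbage-in, garbage-out point concrete, here is a toy version of the retrieval step at the heart of RAG. It uses a crude bag-of-words “embedding” and cosine similarity purely for illustration — real systems use learned vector embeddings — but the lesson is the same: a clear, well-worded note matches a question; a vague note matches nothing.

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words 'embedding': word counts stand in for a learned vector."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query, notes, k=2):
    """The retrieval step of RAG: rank notes by similarity to the query."""
    q = embed(query)
    return sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)[:k]
```

Feed it a well-written note and a sloppy one, and only the former surfaces for a related question — no amount of model quality downstream can compensate for notes that never get retrieved.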

Data retrieval, reformulation, manual formatting, and other “mechanical” actions will soon be fully commoditized by AI. Jobs that depend on these will clearly be at risk and will eventually disappear.

But your knowledge, reasoning, and intuition — amplified by your agentic exoskeletons — will be your greatest assets when tackling problem spaces that remain beyond the reach of models.

That’s why it’s becoming increasingly critical for people — students and knowledge workers alike — to recognize the need to nurture these skills throughout their lives.

And, like financial investment, there’s compound interest: the earlier you start, the greater the returns.

