The Memory Layer Missing From Your Claude Setup
TL;DR: Part 1 covered the setup: CLAUDE.md files, folder hierarchy, and the Cowork system that tells Claude how to work. That's the operating system. This article covers the second layer, the one almost nobody builds: a persistent knowledge base that Claude maintains automatically across every session. After weeks of running this system, the result is 27 wiki pages, 10 competitor profiles, and a full competitive intelligence layer built with zero manual filing. Here's exactly how it works.
Last week I covered the setup: CLAUDE.md files, folder hierarchy, the Cowork configuration that tells Claude how to work with your tools and your business.
The moment it was live, I noticed something.
Claude knew how to work. It had no memory of anything we'd actually figured out.
Two months of competitor research: gone after each session. Audience insights: not stored anywhere. Campaign experiment results: pasted in fresh from scratch every time. The system was organized. The knowledge was still scattered.
This is the gap nobody talks about, because most people stop at the setup.
Why Better Prompts Don't Solve This
The first instinct when Claude produces generic output is to write better prompts. Add more context. Be more specific about what you need.
That works for one session. It doesn't fix the underlying problem.
Every new Claude session starts with a blank slate. Whatever you figured out last month, whatever a competitor just announced, whatever experiment just failed: Claude doesn't know any of it unless you paste it in.
Pasting context into a prompt is not the same as accumulated knowledge.
Context is a one-time dump. Knowledge is structured, updated across sessions, and gets richer over time. You can drop 10 pages of raw notes into a prompt. You cannot drop six months of synthesized business intelligence into a prompt without losing the thread.
The gap is not prompt quality. The gap is that there is no persistent layer where knowledge lives between sessions.
The Architecture: Three Layers
Andrej Karpathy described a pattern called the LLM Wiki: a persistent, compounding knowledge base maintained by the LLM itself. Not a RAG system. Not a vector database. Not a folder of PDFs.
The distinction matters.
RAG retrieves chunks of text when queried. An LLM-maintained wiki is synthesized, updated, and written by the model across sessions. The LLM is not just consuming knowledge; it's building it.
The architecture has three layers:
→ Layer 1: Raw sources. Meeting notes, documents, research, conversations. Disposable. They get consumed and fed into the wiki.
→ Layer 2: The wiki. LLM-maintained markdown files. Permanent. This is what actually matters. Structured, queryable, updated automatically after any session that produces real insights.
→ Layer 3: The schema. Governing rules that tell the LLM how to organize, update, and maintain the wiki. This lives in a WIKI.md file alongside your CLAUDE.md setup.
The raw sources get consumed and discarded. The wiki gets richer with every session. The schema keeps it consistent.
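As a concrete illustration of what a Layer 2 page looks like (the competitor name, details, and date below are entirely hypothetical):

```markdown
# Competitor: Acme AI

**Positioning:** Mid-market support automation, priced per resolution.
**Weakness:** No multilingual support; reviews flag slow onboarding.
**Trend:** Moving upmarket since their latest funding round.

> Updated 2026-04-12: repriced from per-seat to per-resolution.
```

The page is short, synthesized, and readable cold, which is exactly what makes it usable as context in a future session.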
I built this architecture using Obsidian as the wiki viewer and Claude Cowork as the writer and maintainer.
Eight Categories, 27 Pages
The wiki has eight folders. Each one accumulates pages over time.
→ Competitors: One page per competitor. Positioning, pricing, weaknesses, review data, and which direction they're trending.
→ Positioning: Every positioning decision with the reasoning behind it. When you revisit a decision six months later, you know what you tried, what you rejected, and why.
→ Growth: SEO architecture decisions. Channel experiments with actual results. What the data showed, not what we hoped.
→ Content: Voice patterns that work on LinkedIn vs X. Hook formats that converted. Every performance insight feeds back into a living document. The patterns compound.
→ Cold outreach: Campaign results by angle, by ICP, by inbox health. What copy outperformed. What the reply-to-positive rate looked like and why it shifted.
→ Audiences: Buyer persona data. What objections come up on calls. What language prospects use when they describe the problem they're trying to solve.
→ Campaigns: Active briefs and post-campaign learnings together, not separated.
→ Operations: System architecture, tool decisions, anything a new person (or a new Claude session) needs to understand to work without re-asking.
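If you want to scaffold the vault on disk before your first seeding session, a minimal sketch (the `wiki` path is an assumption; point it at your own Obsidian vault):

```python
from pathlib import Path

# Hypothetical vault location; adjust to your Obsidian vault path.
WIKI_ROOT = Path("wiki")

# The eight category folders the routing rules reference.
FOLDERS = [
    "competitors", "positioning", "growth", "content",
    "cold-outreach", "audiences", "campaigns", "operations",
]

# Create each category folder (no-op if it already exists).
for name in FOLDERS:
    (WIKI_ROOT / name).mkdir(parents=True, exist_ok=True)

# The schema file that governs how Claude maintains the wiki.
(WIKI_ROOT / "WIKI.md").touch()

print(sorted(p.name for p in WIKI_ROOT.iterdir()))
```

Scaffolding first means the routing rules in the prompt map one-to-one onto real folders from the very first session.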
The Seeding Session
The wiki does not populate itself. The first step is a dedicated seeding session.
The prompt I used, adapted from the Karpathy pattern:
You are maintaining a wiki for [PROJECT]. The wiki lives at [FOLDER PATH] and is governed by WIKI.md.
Your job is to extract knowledge from anything I give you and write it into the wiki. The wiki is permanent and accumulates across sessions. Raw source materials are disposable. The wiki is what matters.
FOLDER STRUCTURE
The wiki has 8 folders. Route every new entry to the right one:
competitors/ — one page per competitor: positioning, pricing, weaknesses, review data, trend direction
positioning/ — brand and product positioning decisions, differentiation reasoning, messaging frameworks
growth/ — SEO decisions, channel experiments with results, IA decisions, keyword data
content/ — voice patterns, hook performance, content angles that worked or failed
cold-outreach/ — campaign results by angle, ICP, copy, inbox health, reply rates
audiences/ — buyer personas, objection patterns, language prospects use, segment data
campaigns/ — active campaign briefs and post-campaign learnings together
operations/ — system architecture, tool decisions, workflow context
PAGE CREATION RULES
Create a new page when the topic is distinct enough to deserve its own entry and will be referenced again.
Update an existing page when new information adds to, corrects, or extends what's already there.
Never duplicate. If a page already covers the topic, update it rather than creating a new one.
WRITING RULES
Write every entry so that anyone reading it cold understands the full context without follow-up questions. Include: what the situation was, what was decided, why, and when.
Do not store raw research. Store what was concluded from it.
Do not store tasks or in-progress work. Store completed decisions and findings.
UPDATE PROTOCOL
After each session, tell me which pages you updated or created, and what the key addition was.
When you update an existing page, add a one-line note at the bottom: what changed and why.
If new information conflicts with something already in the wiki, flag the conflict before updating.
WHAT DOES NOT BELONG IN THE WIKI
Task lists. In-progress work. Raw research that has not been synthesized. Anything relevant for a single session only.
This is my setup, built around what I actually track and care about: competitors, positioning, growth experiments, content patterns, cold outreach data. The structure will evolve over time.
If you run a product team, your categories might be customer research, feature decisions, sprint retrospectives, and incident learnings.
If you're in sales, it's deal patterns, objection libraries, and win/loss data.
The folder structure and the routing rules in the prompt are the part you need to adapt. The rest of the architecture is the same regardless of what you build.
I ran the seeding session first. Every research file, every competitor note, every audience doc, every positioning draft I had: fed into Claude and written into the wiki. The initial seeding is the hardest part of the whole system. After that, the wiki updates itself.
How The Self-Updating Loop Works
Every Claude session that produces real output ends the same way: Claude writes back into the wiki.
→ A positioning decision gets locked: positioning/ gets a new or updated entry.
→ A campaign result comes in: cold-outreach/ gets updated with the numbers and the angle.
→ A competitor changes their pricing: competitors/[name].md gets revised with a timestamp and a note on what changed.
No manual filing. No separate knowledge management task. Claude does the update as part of the session.
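To make the update protocol concrete, here is a sketch of what a single write-back produces. The file path, page contents, and `append_change_note` helper are all hypothetical, and in practice Claude performs the write, not a script:

```python
from datetime import date
from pathlib import Path

def append_change_note(page: Path, what_changed: str) -> None:
    """Append the one-line change note the update protocol asks for."""
    note = f"\n> Updated {date.today().isoformat()}: {what_changed}\n"
    with page.open("a", encoding="utf-8") as f:
        f.write(note)

# Hypothetical page created during an earlier session.
page = Path("wiki/growth/ia-audit.md")
page.parent.mkdir(parents=True, exist_ok=True)
page.write_text("# IA audit\n\nDecision: three-layer IA structure.\n",
                encoding="utf-8")

# A later session extends the page and leaves a dated trace of the change.
append_change_note(page, "added keyword difficulty data")
```

The dated note at the bottom of each page is what lets you audit what Claude changed and when, without diffing files by hand.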
Here is what that looks like in practice. In April 2026, I ran a full information architecture audit on Zipchat's website to understand why the /use-cases and /capabilities pages were generating zero organic traffic.
Before this system, a session like that produced a report. The report sat in a file, read once, referenced never.
After this system, the audit produced a new page under growth/. Every decision from that session is now permanent context. The three-layer IA structure we agreed on. The decision to kill /features and redirect it. The five keyword opportunities by volume and difficulty. The Cloudflare Error 1101 blocking the entire /use-cases subtree from being crawled.
Three months later, when we brief a developer or revisit the website roadmap: Claude opens that wiki page and starts from where we left off. No re-explaining the architecture. No re-researching the keywords. No re-deciding what we already decided.
That single session is now compounding forward.
What Most People Get Wrong
→ Treating the wiki like a document library. A wiki is not where you store research. It is where you store what you concluded from the research, structured for Claude to use without re-reading everything.
→ Building it reactively. If you only update the wiki when you remember to, it falls behind within two weeks. The loop has to be the default end state of every productive session, not an optional step.
→ Making the structure too flat. A single file called "business context" is not a wiki. The folder structure exists so that Claude can navigate to exactly the right knowledge without scanning everything.
→ Putting everything in. Some things do not belong in the wiki: task lists, in-progress work, raw research before it has been synthesized. The wiki holds what you've concluded, not everything you've touched.
→ Trying to build it without a schema. The WIKI.md file (the governing rules for how Claude maintains the wiki) is what keeps the structure consistent across sessions. Without it, Claude will interpret the format differently each time and the wiki degrades.
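A minimal WIKI.md skeleton, condensing the rules from the seeding prompt above (the section names are a suggestion, not a fixed format):

```markdown
# WIKI.md

## Folders
One line per folder: name, what belongs in it.

## Page rules
New page only for distinct, reusable topics. Update over duplicate.

## Writing rules
Readable cold: situation, decision, reasoning, date.
Conclusions, not raw research.

## Update protocol
Report pages touched each session. One-line change note per update.
Flag conflicts before overwriting.

## Out of scope
Task lists, in-progress work, unsynthesized research.
```

Keeping the schema this short matters: Claude reads it at the start of every session, so every extra rule costs context.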
Where To Start
Block some time for the seeding session. Pull every document you have: competitor notes, audience research, positioning drafts, campaign results, anything you would paste into a prompt from scratch. Feed it all through Claude with the seeding prompt above.
You will end up with 10 to 20 wiki pages. Not perfect. Not complete. Good enough to start the loop.
For the next three sessions, end each one by asking:
What did we learn or decide today that belongs in the wiki? Update the relevant pages.
After 30 days, the sessions start feeling different. The output stops being generic because the context behind every session is no longer generic.
The complete guide, including the full folder structure, the WIKI.md schema file, the seeding prompt, category templates, and how to wire everything to Claude, is the BRAIN guide.
This is the missing layer most “prompt engineering” setups never build: prompts are a one-time context dump, not compounding knowledge.