Creating a Second Brain with Claude Code

I've 2x'd my productivity as a VP of Product @mercury by creating a "Second Brain" from 5 years of work history, 15k docs totaling 3.5 million words, and every tool in my stack. It runs locally, is a core part of every LLM interaction I have, and gets better every day.

Today, I want to share the stack, the workflow, and the prompt to build it:

Background

I am a VP of Product for @mercury, which is a long way of saying I'm in a lot of meetings, consuming a lot of content across different tools (Linear, Slack, Notion, data analyses), and trying to make sure I actually get stuff done. Having worked at the company for 5 years and being an information addict, I am essentially a walking encyclopedia for Mercury from 2021 to today -- but I've recently found that my scope + workload mean I can't keep every plate spinning.

One day, I was scrolling X and came across a series of posts that caught my attention, starting with Tobias Lütke's QMD. QMD is a local vector search tool, and then a few other posts showed up that connected the dots for me:

  • Claude Code launched hooks (per-event prompt injections)
  • GasTown / OpenClaw launched with the power of orchestrators writing memory + delegating to sub-agents (among many other patterns)
  • MCPs/CLIs hit a critical mass, and enough of my core tools were available without having to ask admins to give me API keys
  • Tyler Cowen did an interview and talked extensively about "writing for AI" in a way that struck a chord - how much output of work already exists that I'm not using?

I decided that it was time to build.

Prep work (~1-2 hours end to end)

To start, I needed a library of all the content I could know about... so I downloaded every document I've ever created for my job at Mercury + any relevant product strategy, analysis, retro, reflection on execution, etc. This netted out to over 15k documents and 3.5 million words. Maybe I've read them all, but I've forgotten most. These became a folder that I just called "raw data", and I ran QMD to index this on my computer.

To see if this worked, I used Claude Code to ask about random memories and surprising insights from this knowledge base - the amount of delight/surprise I experienced in seeing how much more capable vector search was than text-based search gave me the confidence to keep going. I asked one question about books it would think I like, and it was spooky how good the recommendations were. I think this is my best advice in this journey: test every step of the way! It's easy to get caught hill-climbing a local maximum.

Train my brain and connect it to my tools (~2 hours)

With all the raw data in place, I needed to help it make sense of me + my goals + the tools I use, so I pursued three paths:

  1. Explain myself - to be able to create a second brain, it needed to know what mine was doing. I wrote up a me.md explaining who I am (work + life) and gave it my goals + performance reviews for the last 5 years + a set of personal priorities. The most humbling part was the system pointing out that, according to my own performance reviews, I've been making the same strategic mistake for years - and was making it that very week as I set up the system.
  2. "Distill" the data - I spun up an agent team to use the me.md + the knowledge base to create a set of bridge docs between me <> the raw knowledge base. The idea came from how large LLMs are regularly distilled into smaller models for specific tasks; I had no idea if it would help here, but Agent Teams had just launched, so I had a swarm of them find the main "themes" we've worked on, write sourced histories of each, and summarize key lessons. These became a context.md folder.
  3. Tools - I use a few tools (Google Docs, Linear, Notion, Metabase), and luckily most have connectors in Claude Code or are actively launching MCPs/CLIs. A few didn't, so I spun up specific skills that craft direct API calls to complete tasks like "run a query for XYZ".

Claude had access to all the information about me + the tools I used + had a massive library of all my work, but did it really know anything? Does anyone?

Wire it up (<1 hour)

At this point, I had so many words + documents that it was time to actually find a use or abandon ship. But I didn't want to have to search all of this manually every time, and that's when "hooks" caught my attention.

Hooks from Claude Code let you insert content into your prompt automatically on specific events: when you submit a prompt, when a session starts, after a tool use, or when a session stops. Using the UserPromptSubmit hook, I enabled my Claude Code to use QMD to find names + topics + specific documents related to my prompt.
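Registration is just a settings entry. A minimal sketch, following the hooks schema in Claude Code's settings files as I understand it (the script path here is a hypothetical placeholder):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/context-enrichment.sh"
          }
        ]
      }
    ]
  }
}
```

Whatever the command prints to stdout gets added to the model's context for that turn.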

This is a nerd-out moment: when searching for files in Finder, it is mostly a name + raw-text search, but QMD can bring context into searches. My system is tuned to figure out a query, then return results using one of two techniques:

  1. vsearch (semantic/vector) — understands meaning of my question. "How's the funnel performing?" finds documents about conversion rates even if they don't say "funnel."
  2. BM25 (keyword) — exact term matching. Catches proper nouns, acronyms, specific metrics that semantic search might miss.
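The two techniques above can be sketched as a single hook script. This is a minimal illustration, not my actual hook: it assumes qmd's `vsearch`/`search` subcommands from the QMD README, and the JSON extraction is deliberately crude (a real script would use a proper JSON parser).

```shell
#!/bin/sh
# Hypothetical UserPromptSubmit hook: Claude Code sends hook input as JSON
# on stdin; anything printed to stdout is injected into the model's context.

enrich_prompt() {
  # Crude extraction of the "prompt" field from the hook's JSON input.
  prompt=$(sed -n 's/.*"prompt"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
  [ -n "$prompt" ] || return 0

  # Two retrieval passes: semantic (vector) and BM25 (keyword).
  semantic=$(qmd vsearch "$prompt" 2>/dev/null | head -n 10)
  keyword=$(qmd search "$prompt" 2>/dev/null | head -n 10)

  # Wrap results so Claude sees them without cluttering the visible chat.
  printf '<context>\n%s\n%s\n</context>\n' "$semantic" "$keyword"
}

enrich_prompt
```

Running both passes and concatenating the results is the simplest way to get the best of both worlds; deduplication and ranking can come later.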

Very quickly after injecting context into prompts, I saw the quality of my results improve. Being able to bring lazy jargon and limited context, then have Claude enrich it with my "Second Brain" content, showed me the power of the right context + tools going into every query. Weird things started to happen... but more on that in a minute, because I had one more major step to unlock.

Let it learn

GasTown and OpenClaw agents seemed to get better because they are consistently updating their memory (a written .md file), so I started to wonder if my system could learn this way too. I found there are essentially three time frames in which to self-reflect and add new knowledge:

  1. Per session - I created a /learn skill that takes a conversation, looks at the task I was trying to complete, and then updates my .md files. This has been particularly useful for MCPs that regularly error out - once an error has been experienced, the prompts get better at avoiding it.
  2. Per day/week - I use a morning cron job to spin up a daily brief of what's coming each day for me + any relevant context from my knowledge base, and then use these to automatically update my memory of what's progressing.
  3. End of Month - At the end of the month, I do an interview with my Claude Code on the state of the world: we start with what we were trying to do this month, how it actually went, what went great and poorly, and what we should do for next month.
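For reference, the daily cadence can be driven by something as simple as a cron entry; the script path, log path, and schedule below are illustrative (on macOS, a launchd job does the same thing):

```shell
# crontab -e, then add: run the morning brief at 7:30 on weekdays.
30 7 * * 1-5 $HOME/.claude/scripts/morning-brief.sh >> $HOME/second-brain-briefs.log 2>&1
```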

What this looks like in practice

It is a strange experience to describe how you use a brain, or a second one for that matter, but I legitimately believe I've 2x'd my productivity overall, and I want to share some practical examples.

  • Recall speed of seconds - finding the needle in the haystack of my memory is now much faster; my days (and most work tasks) start in a Claude Code session, and any task like writing a document, doing an analysis, or answering a question is now both faster and more comprehensive
  • No more meeting prep - my day starts with a summary of what's happening around me: meetings, Linear updates, GitHub pushes, Slack messages I haven't responded to. Because all the context of the work, the plans, the notes from our 1:1s, etc. is in one place, when I enter a 1:1, I am ready for any topic with 1 or 2 prompts about the upcoming meeting.
  • Never miss an action item - this is likely emergent from deep usage of the system, but I ask at the end of the day "is there anything I forgot to do today?" and it regularly finds the one or two interactions I forgot to close out. Cross-tool synthesis is SO powerful
  • Realtime feedback - I've been working for ~15 years, and the most frequent feedback I've gotten from managers is bi-weekly, or weekly at best. Because this system has my performance reviews, it knows what feedback I'm getting from my managers and has called me out for repeating the same patterns I've gotten feedback on.

Here's what an actual conversation looks like:

[Screenshot of an actual conversation. The "hooks" are where the most value is - but sadly that's sensitive info and I can't share it here!]

The proactive explorer mode

Because this system is so capable and knowledgeable (and I'm so biased in my POV after 5 years in the job), I've started regularly asking it to think about the company priorities using all my knowledge, experience, and tools: the Second Brain system is capable of doing autonomous research into how I can solve my problems.

At this point, it is no longer a cohesive narrative because I'm actively in it: I've started layering on other new functionality like cron/scheduled jobs, Agent Teams/Swarms, @karpathy's AutoResearch capabilities, Lenny Rachitsky's interview archives... These flow into my daily briefs and sectioned-off parts of my memory, and become skills that are reusable.

A note on data security, sensitivity, etc

There are a few things that are hard to fit neatly into this post, but I've thought a ton about how to make sure I'm not leaking private information to the world.

  • My company has a strict AI policy and this only uses approved tools - it is less fun to write about those parts, but I made sure everything I was doing was covered by our company's internal policies + strong enterprise agreements, and that no customer data showed up in any of these systems. This was super painful to get right, but so, so important and worth it
  • I removed any truly sensitive documents from ever entering this (though in reality, most internal company comms are private but not PII)
  • I had local LLMs do a pass at the knowledge base to remove/obfuscate anything specific - no external emails, no addresses, etc.
  • I have a security hook to remove any PII (mine or incidentally collected through my day-to-day work) before submitting to Claude Code
  • For any data analysis, I only give it aggregate data or have it return aggregate data

There's more to this, but it's a critical step! Please make sure to follow all the laws, regulations, and policies that apply to your job.

The Second Brain system (as drawn by itself)

[Diagram: the Second Brain system architecture, as drawn by itself]

A prompt you can use

I want you to help me build a "Second Brain" — a persistent knowledge system that runs in parallel to my work using Claude Code. We'll do this in 5 phases. Walk me through each one interactively. Don't skip ahead — confirm each phase is working before moving on.

## Phase 1: Interview Me & Create my Profile

Interview me to create a file at ~/Documents/second-brain/me.md that captures:
- Who I am (role, company, responsibilities)
- What I'm optimizing for (goals, priorities, what success looks like)
- My working style (tools I use daily, how I communicate, what frustrates me)
- My growth edges (feedback I've gotten, patterns I want to break)
- What I care about outside work (interests, values — helps with recommendations)

Ask me 5-7 questions conversationally. Don't make me fill out a template. Write the file when you have enough.

## Phase 2: Build the Knowledge Base

Help me collect and index my work history:
1. Ask me where my documents live (Google Docs, Notion exports, local files, etc.)
2. Help me export/download them into ~/Documents/second-brain/raw/
3. Install QMD (https://github.com/tobi/qmd) if not present: `bun install -g qmd`
4. Create a QMD collection from the raw folder: `qmd collection add ~/Documents/second-brain/raw`
5. Index it: `qmd update`
6. **Test it together** — ask me for a few things I remember working on, then search the KB to see if it finds them. Try both `qmd search` (keyword) and `qmd vsearch` (semantic). If results are bad, we troubleshoot before moving on.

## Phase 3: Distill & Summarize

Using me.md + the knowledge base, create ~/Documents/second-brain/summaries/ with:
- strategic-context.md — what my company/team is trying to do and why
- role-context.md — my specific responsibilities and how I fit in
- historical-context.md — key decisions, pivots, lessons from my work history
- team-context.md — who I work with, dynamics, stakeholders
- personal-growth.md — patterns in my feedback, coaching themes

For each file, search the KB extensively (10+ queries mixing keyword and semantic search), cite specific source documents, and flag where you're inferring vs. quoting.

## Phase 4: Wire Up Automatic Context Injection

Create a Claude Code hook that enriches every prompt with relevant KB context.

Create ~/.claude/hooks/context-enrichment.sh that:
1. Extracts key terms and names from my prompt
2. Runs parallel searches (semantic + keyword) against the QMD collection
3. Returns the top results as context injected into the prompt
4. Completes in <2 seconds (kill searches that take longer)

Register it in ~/.claude/settings.local.json as a UserPromptSubmit hook.

The hook should output a <context> block with search results so Claude sees it but it doesn't clutter my conversation.

Test it: I'll type a lazy prompt about something in my KB and we'll see if the hook injects useful context.

## Phase 5: Create the Learning Loop

Set up three learning mechanisms:

### Per-session: /learn skill
Create ~/.claude/skills/learn/SKILL.md that:
- Reviews the conversation for mistakes, surprises, and validated approaches
- Updates ~/.claude/CLAUDE.md with new tool gotchas, workflow preferences, corrections
- Saves important context to memory files in ~/.claude/projects/-Users-{me}/memory/
- Bias toward brevity — only save what's genuinely new and useful for future sessions

### Per-day: Morning brief
Create a script at ~/.claude/scripts/morning-brief.sh that:
- Checks my calendar (if Google CLI is available)
- Searches the KB for context related to today's meetings
- Summarizes any updates from connected tools
- Outputs a brief I can read in 2 minutes

Help me set it up as a launchd job that runs at my preferred morning time.

### Per-month: Retro prompt
Create ~/.claude/skills/retro/SKILL.md that walks me through:
- What were we trying to accomplish this month?
- How did it actually go?
- What patterns are emerging (good and bad)?
- What should change next month?
- Update summaries/ with anything that's shifted.

---

## Rules for this whole process:
- Ask before installing anything
- Test each phase before moving to the next
- If something fails, don't retry — propose 2 alternatives
- Keep all files in ~/Documents/second-brain/ (KB) or ~/.claude/ (config)
- No over-engineering — functions over classes, scripts over frameworks
- Everything should be runnable, debuggable, and modifiable by me later

Start with Phase 1. Interview me.        

