Institutional Memory Was Never Institutional
In Article 1, I said the model matters less than the data underneath it. Here's what I meant.
Most leaders I've heard from since publishing The 7% Problem two weeks ago had a version of the same reaction: Great. We have all this data. What do we do with it? They mean the CRM, the donor history, the grant records, the spreadsheets nobody else can open.
The reflex is right that the data matter. It's wrong about which data, and it's wrong about what counts as the asset.
Every nonprofit leader I know has also talked about institutional memory. It's the thing the long-tenured staff carry around. The judgment that survives a hard board meeting because someone in the room sat through the last three. What we mean when we say a person is irreplaceable.
Notice where it's lived. Not in the institution. In heads, rolodexes, notebooks, the parking-lot conversation after the meeting, the program officer who knows which donor will pick up the phone on a Tuesday. Always data. Almost never reachable by the institution that owned it.
And memory is biased. What people choose to remember isn't always factual, and the gap between the meeting that happened and the meeting people remember widens with every retelling. The executive team builds narratives on those memories: why a program failed, why a partnership stalled, why a hire didn't work. Those narratives ossify into self-fulfilling prophecies. The wrong account becomes canonical. Decisions get made against it for years.
Decision context is the captured-and-connected record of how an organization actually thinks, decides, and learns. Historically, what got captured was the structured record: the budget, the dashboard, the board minutes. Institutional memory was everything beyond it that explained why the decision actually got made.
For most of organizational history, those have been two barely overlapping circles. AI is the first technological wave that can pull them into one. Two attitudes have to shift first. The organization has to treat not acting on its data as itself a strategic choice, the same kind it makes when it commits to a program or sets a budget. And it has to treat unstructured data as valuable, sometimes more so than the structured. Neither attitude is the default.
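Before moving to the layers, one way to make "captured and connected" concrete. Below is a minimal sketch, in Python, of what a decision record could look like if an organization chose to keep one. Every field name here is hypothetical, my illustration rather than anyone's standard; the point is the shape: the structured outcome, the unstructured reasoning, and links back to the prior decisions the reasoning rests on.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """One captured decision: the outcome AND the reasoning behind it."""
    decision_id: str
    decided_on: date
    summary: str                                         # the structured record: what was decided
    reasoning: str                                       # what institutional memory used to hold
    alternatives_considered: list[str] = field(default_factory=list)
    dissent: str = ""                                    # who disagreed and why, in their own words
    relates_to: list[str] = field(default_factory=list)  # ids of prior, related decisions

def thread(records: dict[str, DecisionRecord], decision_id: str) -> list[DecisionRecord]:
    """Walk one decision back through the earlier decisions its reasoning cites."""
    seen: set[str] = set()
    stack, out = [decision_id], []
    while stack:
        current = stack.pop()
        if current in seen or current not in records:
            continue
        seen.add(current)
        out.append(records[current])
        stack.extend(records[current].relates_to)
    return out
```

The thread function is what "connected" buys you: ask why a decision was made and walk the answer back through the decisions it cites, instead of asking whoever happens to remember.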
The five layers
It helps to see the data underneath an organization as five layers, not one.
One note on what this is and isn't. This isn't a story about the nonprofit sector being uniquely or hopelessly behind. The most-resourced for-profits have been at this longer and spent more money on it; even so, it's hard to imagine they've solved it for good. Below that top tier, medium and even large for-profits are likely running their CRM, producing quarterly reports, and calling that data strategy. What makes the nonprofit version specific is what's at stake. This is the sector with the widest gap between what the mission obligates and what the margin allows, and the work that compounds capability has to happen inside that gap.
AI didn't just add the fifth layer. It's how the captured becomes queryable, the unretained retainable, the unproduced visible. The value isn't in any one layer. It lives in the intersection: across the layers themselves, and across the three levels of the organization where they show up. That's why everything that follows keeps coming back to it.
The three levels
Each layer shows up differently at three levels of the organization.
Organizational is where strategy gets set: what the organization commits to remembering, and what it spends to keep. Team is where the actual work happens, and where AI use either becomes visible across people or doesn't. Individual is where every staff member now builds a small version of this problem every day, in their own sessions with their own tools.
The intersections are real, but the choices that compound with time are narrower than the full matrix of five layers by three levels. One question at each level is where the executive team's leverage lives.
At the organizational level: what signals about your mission are you choosing not to collect, and what will that cost you in five years? The reason a member didn't renew, in the member's own words. The thing the board keeps coming back to but the strategic plan doesn't reflect. What program participants tell the program officer in the parking lot, after the survey. These exist as possibilities, not as records, and choosing not to collect them is itself a strategic choice.
At the team level: how does your team make its collective AI use visible to itself, without surveilling the individuals inside it? The answer isn't transparency for its own sake. It's architecture that surfaces the meta layer of the team's AI use: what questions get asked, what answers land, what gets forgotten, where bias creeps in. Without extracting any of it from the people inside it.
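Here is one way to read "architecture, not surveillance" in code. This is a sketch under stated assumptions: the session format, the topic labels, and the threshold are all mine, illustrative rather than prescriptive.

```python
MIN_GROUP_SIZE = 5  # assumed threshold: a topic surfaces only once this many people share it

def team_meta_layer(sessions: list[dict]) -> dict[str, int]:
    """Surface what a team asks its AI tools without exposing who asked what.

    Each session record is assumed to look like:
        {"user": "opaque-id", "topic": "grant-renewals"}
    where topics come from upstream classification, never raw prompts.
    """
    people_per_topic: dict[str, set[str]] = {}
    for s in sessions:
        people_per_topic.setdefault(s["topic"], set()).add(s["user"])

    # k-anonymity-style cutoff: the report can say "six people asked about
    # grant renewals" but can never be decomposed into one person's sessions.
    return {
        topic: len(users)
        for topic, users in people_per_topic.items()
        if len(users) >= MIN_GROUP_SIZE
    }
```

The design choice that matters is the direction of aggregation: the team sees which questions recur and which never get asked, while no report can be taken apart to reconstruct one person's sessions.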
At the individual level: when your staff build context and memory using AI to do the organization's work, what are they entitled to keep when they leave, and what do you owe them while they're there? This is the same shape as the IP question consultants and organizations have been negotiating for decades: who owns what the consultant builds while paid by the organization. The answer used to apply to a small number of people. Now it applies to every staff member with an AI tool open. Treat the staff member's context file as personal, and institutional knowledge walks out the door every time someone moves on. Treat it as organizational, and the staff member has no protected space for the thinking the work requires. Neither is right. The answer has to be negotiated deliberately. And it has to live somewhere portable, outside any single tool, or you've handed the negotiation to the vendor.
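What "portable, outside any single tool" could mean in practice: a context file that is plain JSON, with the ownership negotiation recorded per entry. The schema below is illustrative, not a standard, and the ownership labels are placeholders for whatever the deliberate negotiation actually produces.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ContextEntry:
    """One piece of context a staff member built while doing the organization's work."""
    text: str
    ownership: str    # "personal" | "organizational" | "negotiated"
    source_tool: str  # where it was built; the file itself belongs to no tool

def export_context(entries: list[ContextEntry], path: str) -> None:
    """Plain JSON on disk: the context survives any single vendor."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(e) for e in entries], f, indent=2)

def organizational_slice(entries: list[ContextEntry]) -> list[ContextEntry]:
    """The subset that stays with the institution when someone moves on."""
    return [e for e in entries if e.ownership == "organizational"]
```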
Most organizations aren't here yet, and not because their executives are inattentive. Strategy is hard. Implementation is harder. AI gets treated as a monolith that does, thinks, and decides, rather than the nearly endless à la carte menu of capabilities it actually is. So leaders ask whether the organization should be "using AI" and stop there. The deeper questions get pushed below the executive table to the head of IT, and what comes back is a governance document: fear-based, not strategic. Layer that on day jobs, on doing more with less, on incentives still oriented to the way things used to be done. Inertia wins.
The convergence paradox
Something odd happens when an organization runs AI at scale across team work without architecture for it.
Individuals become more confident. They're using a tool engineered to agree with them: a Stanford study in Science (March 2026) found AI systems endorsed users' positions roughly 49 percent more often than humans did. Each member of a team has spent time alone with their AI first, settling into a slightly different position because each session had slightly different inputs.
At the same time, large language models are probabilistic. Trained on aggregated human output, they weight responses toward what's statistically most common. Each response is a pull toward the model's center of gravity. The individual ends up on a position mean-reverted within their slice of the model. It feels original. It's less original than it feels.
Apparent divergence, actual narrowing, in the same conversation. The team is the only level at which AI-anchored positions become visible to more than one person while they're still malleable. Without team-level architecture, individual conviction hardens in isolation and the organization narrows silently. With it, the team sees its own tilt before it becomes the organization's.
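The narrowing is easy to demonstrate with a toy model. Assume each session pulls a person some fraction of the way toward the model's center of gravity and adds a little session-specific noise; both numbers below are invented for illustration, not measured from any real system.

```python
import random

MODEL_CENTER = 0.0  # the model's statistical center of gravity (arbitrary scale)
PULL = 0.2          # assumed fraction of the gap closed per session
NOISE = 0.05        # assumed session-to-session variation in inputs

def simulate(positions: list[float], sessions: int) -> list[float]:
    """Each session nudges every person partway toward the model's mean."""
    for _ in range(sessions):
        positions = [
            p + PULL * (MODEL_CENTER - p) + random.gauss(0, NOISE)
            for p in positions
        ]
    return positions

def spread(xs: list[float]) -> float:
    """Standard deviation across the team: how different the positions are."""
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(0)
team = [-1.0, -0.3, 0.4, 1.2]                # four genuinely different starting views
print(round(spread(team), 2))                # ~0.82 before any sessions
print(round(spread(simulate(team, 10)), 2))  # a fraction of that after ten sessions
```

Each person's position still moves a little differently session to session, so from the inside it feels like divergence. Measured across the team, the spread collapses.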
What people take with them
Eighteen months from now, an organization that's made these choices deliberately looks different from one that hasn't. The reasoning behind decisions can survive staff turnover, not just the outcomes. Drift becomes visible at the team level, where it can be corrected, instead of surfacing later as a mystery at the executive level. The team's aggregate thinking stays distinctive, because there's architecture catching the convergence before it shows up at the top.
None of this is a model question. Models, tools, agents, frameworks are going to keep churning faster than any organization can chase them. The question isn't which to pick. It's how to layer them, and what the organization is building underneath them, so that when the next wave arrives the team already knows what to ask of it. The sand is going to keep shifting. The stability has to live in the decision context.
What stays with the person is the discernment. The judgment built through experience. The pattern recognition that took years to develop. That's always theirs, and nobody can take it. What's negotiable is everything they've built inside an AI tool while doing the organization's work, and that's the negotiation no organization I know has had yet. Have it deliberately, or the vendor will have it for you.
What stays with the institution is the context. The captured record of how the organization thinks. The connected thread of how this decision relates to that one. A record finally checkable against itself.
Build the discernment. Share the context. That's the asset.
Three questions, in your own words, to bring to your next planning conversation:
The first two are inside the matrix this article walked. The third is fodder for another article.
This kind of redesign isn't a project manager's job. It's an executive's.
That's the conversation I want to have next.