AI Powered Solutions: An Implementation Framework
In a LinkedIn post, I argued that software development using AI needs a platform: without shared standards, guardrails, and observability, software engineering teams will struggle to consistently realise the benefits of using AI to develop solutions.
This holds for building solutions with AI, but what if you are developing and operating a solution that has AI as a core component?
Every organisation adopting AI powered (not just developed) solutions at scale needs the same things: a governed way to connect AI to institutional knowledge, shared orchestration capabilities, audit trails, and guardrails.
This article works through the layers that every non-trivial AI solution needs, and allocates responsibility for the components within those layers between domain and platform.
Systematically addressing each layer provides a framework for the robust implementation and ongoing operation of AI powered solutions.
To ground the discussion, it focuses on a concrete business: a mid-size wealth management firm (50–500 advisors).
The Five Layers of an AI Solution
Layer 1: The Knowledge Repository
Every serious AI deployment needs a way to ground the LLM in institutional knowledge rather than relying on the LLM's general training data. The infrastructure concerns (indexing, vectorisation, hybrid retrieval, permission inheritance, freshness management) are platform capabilities. They don't change whether the solution is indexing investment research or engineering runbooks.
What is domain-specific is the content itself. For a wealth management firm, that means research reports, investment policies, compliance manuals, client documentation, and model portfolio rationale. This is the firm's institutional memory.
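To make the platform concerns concrete, here is a minimal sketch of hybrid retrieval with permission inheritance and freshness management. The class, field names, and thresholds are illustrative assumptions, not a specific product's API: documents carry a semantic (vector) score and a lexical (keyword) score, and ranking blends the two after filtering out anything the caller's role cannot see or that has gone stale.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    allowed_roles: set        # permission inheritance from the source system
    vector_score: float       # pre-computed semantic similarity for the query
    keyword_score: float      # pre-computed lexical (e.g. BM25-style) score
    days_since_update: int    # input to freshness management

def hybrid_rank(docs, role, alpha=0.7, max_age_days=365):
    """Blend semantic and lexical scores; drop unauthorised or stale documents."""
    visible = [d for d in docs
               if role in d.allowed_roles and d.days_since_update <= max_age_days]
    return sorted(visible,
                  key=lambda d: alpha * d.vector_score + (1 - alpha) * d.keyword_score,
                  reverse=True)

docs = [
    Document("research-note-0412", {"advisor"}, 0.91, 0.40, 3),
    Document("compliance-manual", {"compliance"}, 0.88, 0.70, 10),
    Document("superseded-policy", {"advisor"}, 0.95, 0.90, 900),
]
ranked = hybrid_rank(docs, role="advisor")
# The compliance manual is filtered by role, the superseded policy by age.
```

Note that the same ranking function serves investment research or engineering runbooks unchanged; only the documents differ, which is exactly the platform/domain split described above.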
Layer 2: The Code & Configuration Repository
AI powered solutions have come a long way from the days of prompts and ad-hoc vibe coding. This evolution has led to the realisation that skills are code, agent definitions are code and guardrail rules are code. They need version control, review processes, and audit history, with the same discipline applied to code that runs in any production system.
The Code & Configuration Repository layer provides this discipline and is almost entirely a platform capability. The tooling and processes (version control, PR review, change history, branch protection) are the same regardless of domain. What varies is the content being managed: a wealth management firm version-controls its investment philosophy skills, client communication standards, compliance escalation triggers, and disclosure language. An engineering team version-controls its coding standards and CI rules. The repository doesn't care.
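As a sketch of what "skills are code" can look like in practice, here is a hypothetical skill definition that would live in the repository and change only via PR review. The schema is an illustrative assumption, not any specific framework's format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SkillDefinition:
    """A skill treated as code: version-controlled, reviewed, and auditable.
    All field names here are illustrative, not a real framework's schema."""
    name: str
    version: str              # bumped through the normal review process
    prompt_template: str
    owner: str                # team accountable for review and sign-off
    requires_approval: bool   # whether outputs need a human gate

# A hypothetical wealth-management skill under version control.
house_view_summary = SkillDefinition(
    name="house-view-summary",
    version="1.4.0",
    prompt_template="Summarise the firm's current house view on {asset_class}.",
    owner="investment-strategy",
    requires_approval=True,
)
```

Because the definition is frozen and versioned, any change to the prompt or approval flag produces a diff, a review, and an entry in the change history, just like application code.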
Layer 3: LLM Integration & Orchestration
The LLM layer is the heart of an AI powered solution. This is where questions are asked and the workflows to generate an answer are orchestrated. This layer has the most interesting mix of platform and domain.
Modern LLMs like Claude handle much of the orchestration natively: given connections to firm systems and a multi-step task, the model will chain steps, branch based on intermediate results, and carry context across the whole interaction. The Agent SDK adds structured support for human-in-the-loop gates and management of those system connections. So the orchestration engine is largely provided by the LLM itself.
The platform's role in this layer is the governance wrapper around that engine: which models are available and who can call them, what gets logged, how costs are allocated, and the integration protocol (MCP) that standardises how the LLM connects to internal systems. Which systems are connected and how access is scoped remains a domain decision.
What's domain-specific are the skills, MCP connections, and workflows composed using the LLM's building blocks. In a wealth management context these domain-specific units of work could include:
- Skills: applying the firm's investment philosophy, client communication standards, and disclosure language.
- MCP connections: scoped access to portfolio data, the CRM, and the research repository.
- Workflows: assessing a client's position against the current house view and drafting a recommendation for advisor review.
These domain-specific units of work (portable, versioned, and centrally governed) run on the LLM's orchestration capabilities within the platform's governance wrapper. Composing them into agents is how AI powered solutions are delivered.
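The orchestration pattern described above (chained steps, shared context, a human-in-the-loop gate before anything client-facing) can be sketched in a few lines. The step names and structure are hypothetical, standing in for what an agent framework would manage:

```python
def run_workflow(steps, approve):
    """Execute steps in order, carrying context forward. Any step marked
    client_facing pauses for human sign-off before its output is released.
    This is an illustrative sketch, not a specific SDK's API."""
    context = {}
    for step in steps:
        output = step["fn"](context)
        if step.get("client_facing") and not approve(step["name"], output):
            return {"status": "held_for_review", "step": step["name"]}
        context[step["name"]] = output
    return {"status": "complete", "context": context}

# Hypothetical wealth-management workflow: the draft recommendation is
# client-facing, so it cannot leave the system without human approval.
steps = [
    {"name": "retrieve_portfolio", "fn": lambda ctx: {"equity_weight": 0.72}},
    {"name": "draft_recommendation", "client_facing": True,
     "fn": lambda ctx: "Rebalance toward the 60% equity target."},
]
result = run_workflow(steps, approve=lambda name, output: False)
# With approval withheld, the workflow halts at the client-facing step.
```

The design choice worth noting: the gate is enforced by the runner, not by convention, so no workflow author can forget to add it.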
Layer 4: Audit, Observability & Governance
An AI powered solution makes decisions, retrieves data, and produces outputs on behalf of the organisation. Knowing what it did and why is not optional. This layer provides that visibility.
The platform capabilities here are structured logging, provenance chains, and dashboards: the infrastructure to make AI usage visible. Who asked what, what was retrieved, what was produced, what happened next. This is Agent Trace (an emerging open standard for session provenance) applied broadly, giving an organisation the ability to see what the AI was asked to do and what it actually did. The logging infrastructure, the trace format, the dashboard tooling: all platform.
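A minimal provenance chain of the kind described ("who asked what, what was retrieved, what was produced") can be sketched as follows. The field names are illustrative assumptions, not the Agent Trace specification:

```python
import json
import time
import uuid

class SessionTrace:
    """Append-only provenance record for one AI session. Field names are
    illustrative, not a real trace format's schema."""
    def __init__(self, user, request):
        self.record = {"session_id": str(uuid.uuid4()),
                       "user": user, "request": request, "events": []}

    def log(self, kind, detail):
        """Record one step: a retrieval, a guardrail check, or an output."""
        self.record["events"].append(
            {"ts": time.time(), "kind": kind, "detail": detail})

    def export(self):
        """Serialise the full chain for the audit store or dashboard."""
        return json.dumps(self.record)

# Hypothetical session mirroring the advisor example in the text.
trace = SessionTrace("advisor-17", "assess client position vs house view")
trace.log("retrieval", "portfolio data for the client")
trace.log("retrieval", "firm's current house view")
trace.log("guardrail", "suitability check passed")
trace.log("output", "draft recommendation produced for advisor review")
```

The point is that the chain is a by-product of execution: nobody has to remember to document anything, which is what makes the compliance argument below work.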
What's domain-specific is what is watched for and how that data is used. In wealth management, this layer flips the usual narrative. Most organisations treat audit and governance as the cost of adopting AI. In wealth management, the opposite is true.
Consider the status quo. An advisor speaks with a client, forms a recommendation based on experience, a morning research note, and a conversation with a colleague. The advisor documents it, partially, in the CRM after the fact. The compliance team reviews a sample. If a regulator asks "why was this recommendation made?", reconstructing the reasoning means interviewing people and piecing together fragments.
Now consider the AI-assisted version. The advisor asks the system to assess a client's position. It retrieves the relevant portfolio data, pulls the firm's current house view, checks against compliance rules, and drafts a recommendation. Every step is logged: what was retrieved, what reasoning was applied, what was produced, and what the advisor did with it. The provenance chain is complete and automatic. No one had to remember to document it.
This isn't a compliance cost. It's a compliance upgrade: a complete, searchable, auditable record of the reasoning behind advice. Not just the outcome, but the inputs, the logic, and the guardrails that were applied. This argument generalises. Any organisation where decisions matter (legal, healthcare, insurance, public sector) gains the same advantage. AI-with-governance beats no-AI-and-patchy-records.
The dashboard should serve three audiences: compliance (did the system stay within guardrails?), leadership (adoption, cost, value delivered), and practitioners themselves (what did I ask, what did I get, can I refine my approach?).
Layer 5: Guardrails, Safety & Access Control
Visibility is one thing; control is another. This layer ensures the AI solution operates within defined boundaries, doing what it should and refusing what it shouldn't.
The enforcement machinery (system-level constraints, policy engines, role-based access, identity management) is platform. It works the same way whether the rule is "mandatory tests before merge" or "human approval before client-facing output." The platform provides the ability to define, enforce, version, and audit rules. What's domain-specific are the rules themselves.
In wealth management, those domain rules matter. "No specific buy/sell recommendation is delivered to a client without a human approval step" can be a documented, tested, and auditable rule, not a policy that relies on people remembering to follow it. The same applies to data boundaries (advisor A cannot query client data belonging to advisor B's clients), suitability checks, and disclosure requirements.
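The two rules just named (human approval before a recommendation, and the advisor data boundary) can be expressed as a testable check. The function and rule names below are an illustrative sketch, not a real policy engine's API:

```python
def check_guardrails(action, acting_advisor, client_owner, approved_by):
    """Return the list of rule violations for a proposed action.
    Rule names mirror the examples in the text; the logic is illustrative."""
    violations = []
    # Rule: no client-facing recommendation without a human approval step.
    if action == "send_recommendation" and approved_by is None:
        violations.append("human_approval_required")
    # Rule: an advisor cannot act on another advisor's client data.
    if acting_advisor != client_owner:
        violations.append("cross_advisor_data_boundary")
    return violations

# An unapproved recommendation for another advisor's client trips both rules.
violations = check_guardrails("send_recommendation",
                              acting_advisor="advisor-A",
                              client_owner="advisor-B",
                              approved_by=None)
```

Because the rules are code, they can be unit-tested before deployment and their evaluation logged by the audit layer, which is what makes them evidence rather than intention.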
Guardrails also become more valuable in this setting. In a traditional advice process the guardrails are training, culture, and periodic review: all valuable, but fallible and hard to evidence. An AI solution can enforce rules consistently and prove that it did so. This is the reframe for sceptical stakeholders: the question isn't "how do we govern AI?" but "why wouldn't we want a system that governs itself more reliably than our current processes do?"
Bringing It Together
The five layers aren't a technology shopping list. They're a framework for thinking about what any AI powered solution needs to address to be robust, governed, and operable at scale.
Three things stand out when you work through them.
First, the platform/domain split is important. The knowledge infrastructure, the configuration management, the governance wrapper around the LLM, the logging, and the enforcement machinery are common across businesses and solutions within a business. The content indexed, the skills composed, and the rules codified change with the business and/or solution. An organisation that recognises this distinction builds (or sources) the platform once and invests its domain expertise where it matters most: in the skills and workflows that deliver value.
Second, governance is a feature, not a tax. The audit and guardrail layers don't slow adoption down. In regulated industries especially, they provide a quality of evidence and consistency that pre-AI processes never achieved. The firm that can answer "how did your AI reach this recommendation?" is in a stronger position than the firm that can't answer the same question about its human advisors.
Third, someone needs to own the platform. In a technology organisation this is the platform team. In a non-technology firm it's more likely a small cross-functional group spanning IT, compliance, and operations. The role is the same: maintain the shared infrastructure, curate the skills and connections, govern model selection and cost, and partner with domain teams to turn real workflows into standardised, governed capabilities. In a regulated business, compliance has a seat at the table from day one, not as a reviewer at the end but as a co-author.
The firms that get the benefits of AI powered solutions won't be the ones with the best prompts. They'll be the ones that take a structured approach to building those solutions, ensuring that every layer is addressed.