It’s been almost a year since we started our experience management journey at Lloyds Banking Group; it’s becoming our design system for CX. We’re about to scale it, so I thought I would reflect on what we’ve learnt over these last 12 months.

1. Your experience hierarchy and journey framework are the backbone of your system. They are the shell that structures experiences at different levels across customer types, products, and channels. You won’t see results until everyone can embrace them.
2. Your hierarchy and framework must work on paper, on a digital whiteboard, and in sophisticated software. There can be no barriers to entry.
3. This system turns journeys into data products that require structured inputs (like OKRs, analytics, quant and qual research) and structured outputs (like opportunities, propositional bets, and solutions).
4. This, in turn, invites your whole company to align on how you structure and classify metrics, research, opportunities, and solutions cohesively. This is a hard task at enterprise level.
5. This system isn’t a design thing or a CX thing; it’s a real-time, outside-in view of how your business is serving customers. It only sticks if product, design, engineering, marketing, operations, and the rest all embrace it. This takes a hell of a lot of storytelling and pitching.
6. You can see and feel results, such as fewer silos, less duplication, more efficient delegation of backlog items, and faster design-to-delivery cycles, after roughly 10 end-to-end journeys go live. The language and way of working become a domino effect across the organisation, at all levels.
7. This opens the door to conversations about journey-centric operating models: what would that look like, and what would it take?
8. Like a design system, it needs a governance model (and a core team) to create, maintain, and remove components.
Iterative Design for Enterprise Systems
Explore top LinkedIn content from expert professionals.
Summary
Iterative design for enterprise systems means building large-scale business platforms through repeated cycles of testing, feedback, and improvement rather than aiming for perfection in a single launch. This approach helps companies adapt their systems to real-world needs and evolving challenges, making updates and changes based on actual experience.
- Start small: Build the initial system to address current requirements, then plan for future expansion and improvements as users and business needs evolve.
- Measure progress: Define clear metrics and success criteria early on, so you can track performance and make data-driven decisions about what to improve next.
- Adopt gradual change: Roll out updates in phases and layer new features carefully, allowing your team to learn from each stage and reducing disruptions across the organization.
-
Systems design needs both short and long feedback loops. I work with teams that are often asked to grow or pivot a product to create more impact. Over the years, I’ve seen two main types of leaders trying to make progress in their products.

The first type thinks very short term. They design page by page and try to keep the problem small. Sometimes this comes from not being able to sell a bigger vision, or from holding too tightly to release cycles. While this approach can create quick wins, it often comes at the cost of a bigger product vision. Great products can’t be built block by block, as great experiences are connected, and each part adds value to the whole (despite what some enterprise companies still think). I find this approach painful. And slow.

The second type wants to solve the whole experience at once. They might copy a competitor or follow a logical sequence of steps. But most platforms don’t map one-to-one, so they end up backtracking to fix gaps. Iteration turns into rework, and learning is lost. Sometimes this works, and brute force is needed. You see this emerging with AI prototyping.

Both approaches fall short. One misses how small decisions shape the system, and the other skips over details as it charges ahead. Modern software needs both views. And AI won’t fix these gaps… it will only make them worse if the right mindset isn’t there.

That’s why I push for a third approach: progressive design. It tests assumptions at the screen level while shaping the bigger experience. Short, iterative cycles build the story. Adding signals from UX metrics and audiences makes it stronger. The beauty is that you can test incrementally with attitudinal metrics while also building a behavioral profile through actions.

A recent customer used this approach with us in Helio on a complete homepage redesign and saw:
- +28% engagement on target KPIs
- +39% lift in positive impressions vs. baseline

What working styles have you seen work? #productdesign #uxmetrics #productdiscovery #uxresearch
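To make the attitudinal-plus-behavioral idea concrete, here is a minimal TypeScript sketch of how a team might pair the two signal types for a single screen-level test. The types and function names are hypothetical illustrations, not Helio’s actual API.

```typescript
// Hypothetical shapes for the two signal types collected per screen test.
interface AttitudinalResult {
  screenId: string;
  impressionScore: number; // e.g. a 1-5 rating from a survey question
}

interface BehavioralEvent {
  screenId: string;
  action: "click" | "scroll" | "abandon";
}

// Summarise both signal types for one screen-level test:
// an attitudinal average plus a behavioral engagement rate.
function summariseScreenTest(
  screenId: string,
  attitudes: AttitudinalResult[],
  events: BehavioralEvent[],
): { avgImpression: number; engagementRate: number } {
  const scores = attitudes
    .filter((a) => a.screenId === screenId)
    .map((a) => a.impressionScore);
  const avgImpression =
    scores.length > 0 ? scores.reduce((sum, v) => sum + v, 0) / scores.length : 0;

  const screenEvents = events.filter((e) => e.screenId === screenId);
  const engaged = screenEvents.filter((e) => e.action !== "abandon").length;
  const engagementRate =
    screenEvents.length > 0 ? engaged / screenEvents.length : 0;

  return { avgImpression, engagementRate };
}
```

Tracking both numbers per iteration is what lets a team compare a redesign against its baseline, as in the engagement and impression lifts described above.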
-
Most design systems don’t fail because they’re poorly designed. They fail because they try to change everything at once.

I’ve seen this across enterprise teams again and again. A system is introduced to unify experiences overnight. Instead of accelerating teams, it slows them down. Adoption stalls. Workarounds appear. Consistency turns into friction. The problem isn’t the system. It’s the approach.

At SAP, we’ve learned that design systems don’t scale through disruption. They scale through progression. That’s why our system is built as a 4-layer model, enabling phased adoption instead of a full reset:

1. Design language. Colors, typography, and iconography that build trust across every touchpoint.
2. UI components. Reusable building blocks. Today, 80–90% of SAP products are already aligned here.
3. Design patterns. Proven solutions with real, reusable code that reduce reinvention.
4. Floor plans. End-to-end layouts that orchestrate complete workflows.

This approach recognizes the realities of enterprise complexity while ensuring a cohesive experience across SAP’s vast portfolio. The goal isn’t perfection on day one. It’s momentum that compounds over time.

So before you scale your system across every product, ask yourself: Are you enforcing consistency… or designing for adoption? #SAPDesign #SAP #ArinBhowmick
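As a rough illustration of phased adoption, here is a minimal TypeScript sketch of how the first layer (design language as shared tokens) can feed the second (UI components) while leaving the upper layers optional. The token values and component are hypothetical, not SAP’s actual design system code.

```typescript
// Layer 1: design language as shared tokens. A team can adopt this alone.
const tokens = {
  color: { primary: "#0a6ed1", text: "#32363a" },
  font: { family: "Arial, sans-serif", sizeBase: "0.875rem" },
} as const;

// Layer 2: a reusable UI component built only on Layer 1 tokens.
// Products can adopt components gradually, screen by screen.
function renderButton(label: string): string {
  return `<button style="
    background: ${tokens.color.primary};
    color: #ffffff;
    font-family: ${tokens.font.family};
    font-size: ${tokens.font.sizeBase};
  ">${label}</button>`;
}

// Layers 3 and 4 (patterns, floor plans) would compose these components,
// so adoption can stop at whichever layer a team is ready for.
console.log(renderButton("Save"));
```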
-
A client of mine was recently acquired by the market leader in their space. The data infrastructure work we did over two years is a critical component of their entire product. But we didn’t build the perfect architecture on day one. We built something that worked for their immediate needs, then systematically improved it over the years as those needs evolved.

Here’s a simple breakdown of the iterations we did throughout this project:

Round 1: We refactored their original single-customer pipeline to support multiple customers. Sounds obvious in retrospect, but when you're closing deals and need to ship fast, you build for the customer in front of you.

Round 2: We enabled their pipelines to handle specialized use cases per customer. Some wanted the full suite of analysis. Others only needed a select portion. The initial design assumed everyone would want everything.

Round 3: We implemented generic components and handlers to support out-of-the-box integrations with different source systems. Time-to-go-live dropped significantly. Before this, every new integration was custom work. (A rough sketch of this idea follows below.)

Round 4: We implemented a full integration test suite with customer-specific test cases. Stable releases are critical when your data platform powers your core product.

These phases weren't planned ahead of time. They were responses to real constraints we hit as the product scaled. Each iteration solved a bottleneck that was actively limiting growth.

If you're building something from scratch, you can bake some of this thinking into your original design. But that's a luxury most product teams don't have, especially when deals are large and there are only a handful of high-touch integrations that really provide value. My advice is to plan for iteration instead of trying to build the perfect architecture on day one. You can’t predict every possible future state, and often, the best platform design emerges from solving real problems as they appear.
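Round 3’s generic components can be pictured with a small TypeScript sketch: a shared connector interface that each source system implements, so the pipeline stays generic and new integrations stop being custom work. The interface and class here are hypothetical illustrations, not the client’s actual code.

```typescript
// A common shape every source-system connector must satisfy.
interface SourceConnector {
  name: string;
  extract(): Promise<Record<string, unknown>[]>;
}

// Each new source system only implements the interface...
class SalesforceConnector implements SourceConnector {
  name = "salesforce";
  async extract(): Promise<Record<string, unknown>[]> {
    // Placeholder: call the source API and normalise rows here.
    return [{ id: 1, source: this.name }];
  }
}

// ...while the pipeline itself stays generic across all sources.
async function runPipeline(connectors: SourceConnector[]): Promise<void> {
  for (const connector of connectors) {
    const rows = await connector.extract();
    console.log(`${connector.name}: loaded ${rows.length} rows`);
  }
}

runPipeline([new SalesforceConnector()]);
```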
-
Those multi-agent workflow diagrams I share look great in infographics, and they are good ideas. But they'll fail in production. Here's what you don't see in the nice diagrams:
➛ the months of iteration
➛ the failure modes
➛ the long debugging sessions
➛ the cost complexity
➛ the integration hazards

Each arrow represents a decision point that needs guardrails. Every agent handoff is a potential failure waiting to happen. Every component is a cost trade-off you'll need to justify. When you see those beautiful infographics with 3-4 agents working in perfect harmony, you're not seeing:
➛ The evaluation framework validating each agent's output
➛ The fallback logic when agents fail or hallucinate
➛ The prompt engineering keeping agents sane
➛ The state management preventing data loss
➛ The compounding latency of LLM calls
➛ The debugging nightmare in prod

Before you architect that impressive multi-agent system, answer these questions: What does "good" look like at each step? How will you measure if it's actually working? What's your acceptable failure rate? How will you debug when (not if) something breaks?

Here's the approach that's worked for enterprises I've worked with:

➛ Start backwards. Define success first. Before you write a single line of code, build your evaluation dataset. What are the edge cases? What does "correct" look like? How will you know if Agent A handed off clean data to Agent B? (A minimal sketch of such an evaluation harness follows below.)

➛ Then work in layers:
Layer 1 (Define metrics): Prove a single, well-prompted agent can handle the task reliably. Get your evaluation harness working. Establish your baseline metrics.
Layer 2 (Learn from data): Only add complexity, such as multiple agents, orchestration, and handoffs, when you have data proving it improves on your baseline. Each new component should solve a measured problem.
Layer 3 (Build tracing): Build observability into every handoff. Make the system debuggable. Plan for failure modes before they happen.

Evaluation-first; complexity only when justified by data; observable at every step. The most elegant solution isn't the one with the most agents; it's the one that reliably solves your problem in production.

Most important, remember: you don't need AI to solve all problems. Sometimes the best workflow is the one you don't build. #aiinwork #agenticAI #agentbuild #futureofwork #reliableai
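The “start backwards” advice can be made concrete with a minimal TypeScript sketch of an evaluation harness: a fixed dataset of cases and a baseline pass rate for a single agent, measured before any orchestration is added. The agent stand-in and cases below are hypothetical placeholders, not a real LLM call.

```typescript
// An evaluation case: an input plus what "correct" looks like.
interface EvalCase {
  input: string;
  expected: string;
}

// Stand-in for a single well-prompted agent (Layer 1). In practice this
// would call an LLM; here it is a deterministic placeholder.
async function runAgent(input: string): Promise<string> {
  return input.trim().toLowerCase();
}

// Score the agent against the dataset and report a baseline pass rate.
async function evaluate(cases: EvalCase[]): Promise<number> {
  let passed = 0;
  for (const c of cases) {
    const output = await runAgent(c.input);
    if (output === c.expected) passed += 1;
  }
  const passRate = passed / cases.length;
  console.log(`baseline pass rate: ${(passRate * 100).toFixed(1)}%`);
  return passRate;
}

// Only add more agents or handoffs if a change beats this measured baseline.
evaluate([
  { input: "  Refund Policy ", expected: "refund policy" },
  { input: "ORDER STATUS", expected: "order status" },
]);
```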
-
LangChain defines a new paradigm: Agent Engineering. However, iterative development isn't a shiny new toy; it's a proven discipline forged in the development of life-critical systems.

When IBM Federal Systems Division built the Trident submarine's command software in the early 1970s, they faced an impossible constraint: perfect requirements on day one were fiction. Instead, they shipped 17 iterations over 20 months, each one integrated, tested, and refined. The breakthrough wasn't the iteration itself. It was making observability the centerpiece. Every cycle produced measurable feedback that directly shaped the next.

NASA's approach mirrored this philosophy. Project Mercury teams ran half-day iterations with rigorous technical reviews, while the Space Shuttle's primary avionics software emerged from 17 iterations across 31 months using eight-week time-boxes. These weren't experimental sprints; they were disciplined cycles where observation drove refinement, and the data stayed at the center of every decision.

The defense and aerospace sectors understood something critical: non-deterministic systems require continuous observation to become reliable. They couldn't wait for perfect specifications because the environment changed too quickly. Their solution was architectural: build something testable, instrument it, observe the results, and let those observations guide the next increment.

In agent engineering, this historical pattern resurfaces intact: build, test, ship, observe, refine, repeat. LLM systems are inherently non-deterministic, making them structurally similar to the problems aerospace engineers solved decades ago. Observability transforms abstract behavior into concrete signal, and signal becomes the mechanism for moving from fragile to reliable.

LangChain is on to something, but it isn't new. #AGI #AI #LLM #LangChain #AgentEngineering #technology
-
This was one of the most challenging and rewarding projects I’ve ever worked on at Salesforce: an AI-powered Slackbot for enterprise cybersecurity called Ask-IAM.

When we first launched the MVP, I was so confident it would blow customers away. But within weeks, user feedback started flooding in, pointing out glaring gaps we hadn’t anticipated. It was humbling, but it forced us into a constant cycle of iteration. One week, we were refining the natural language processing (NLP) to better understand user queries. The next, we adjusted the bot’s tone to make it feel less robotic and more approachable. It was a rollercoaster, but every tweak made the product better.

The takeaway was that success doesn’t come from getting it right the first time; it comes from how fast and effectively you can respond when you don’t. This iterative mindset has stuck with me since then.

In AI Product Management, iterative development is the name of the game. Unlike traditional software, AI products evolve rapidly based on continuous data input, requiring constant tweaks. Being a master of iterative development isn’t optional; it’s essential. Here’s how you can master this skill as a superpower:

1. Adopt Agile Frameworks: Learn agile methodologies, but tailor them for AI workflows. Understand what “sprints” mean for model retraining, data refinement, and experimentation cycles.
2. Embrace Failure: AI thrives on experimentation. Cultivate a mindset where failed experiments are opportunities to gain insights and improve. Track and document these iterations to build a knowledge base.
3. Collaborate Across Teams: Iterative AI development demands collaboration between PMs, data scientists, and engineers. Sharpen your cross-functional communication skills to lead and align teams during rapid iteration cycles.

NavHub AI and APM Club (NavHub AI’s proud community partner!) can help you gain an advantage in learning this skill:
👉 AI-Powered Iteration Practice: Participate in mock project sprints via NavHub AI that simulate real-world AI product development iterations.
👉 Dynamic Feedback Loops: Leverage our mentorship pairing feature to get constant feedback from experienced AI PMs and data scientists on your project iterations.
👉 Live AI Challenge Events: Join hackathons organized by APM Club, designed to mimic high-pressure, iterative AI product development cycles.

Iteration isn’t just about doing things fast; it’s about doing them right, with agility and precision. Join our Pilot Program now to turn your skillset into your competitive edge: http://tiny.cc/of15001 #artificialintelligence #upskill #data #productmanagement #communication
-
I'll often talk with clients or prospects who are looking for an internal tool or business app that is robust in every aspect upon launch, and that is deeply integrated with the business's other existing software and processes from the get-go. But the more time I spend building these apps (think client portals, etc.), the more I'm convinced that the concept of building and launching an "MVP" isn't just for SaaS products launching on the market. It applies to internal tools as well 👇🏼

📌 Iterative development is crucial for internal tools because it's really difficult to create a solution that's fully ideal right off the bat, in relation to all the other existing systems and processes that the new tool will touch. To make things more complex, the existing systems and processes in the business might be changing during the development process. So you'll constantly be juggling evolving requirements for the new tool 😅

Internal projects with expansive scope usually do three things:
(1) They increase the number of unforeseen variables.
(2) They compound the development ramifications of connecting the tool with existing business systems and processes.
(3) They make it harder to deliver on your estimates.

I'm gonna call an MVP approach for internal tools... an MVT (minimum viable tool LOL). Less complex, better starting point. Focus on solving the core problem, and worry about deeper ties with existing systems later on.

And fellow consultants... DON'T BE AFRAID to aggressively advocate for an MVT approach to your clients and prospects (if the project at hand warrants it). Persistence here is critical so you don't end up with a heap of puzzle pieces that don't fit 🧩 #nocode #software #operations #mvp #saas #joelleelinked