Most digital teams don’t struggle because of the method; they struggle because governance doesn’t match the method.

Rule 3 – Align Governance With Methodology

A transformation can run on Agile, Waterfall, or Hybrid, but each model needs its own governance layer. When the method and the structure don’t align, delays and confusion show up instantly. Here’s what alignment looks like in real delivery:

📘 Agile – fast cycles, living documentation. Agile governance evolves sprint by sprint:
• Documentation updated every sprint
• Decisions captured directly in Jira or Confluence
• Ownership reinforced in retro logs
• Visibility shared across squads
Agile fails when teams try to apply Waterfall approvals to Agile sprints.

📗 Waterfall – gates, approvals, predictability. Waterfall governance relies on structured checkpoints:
• Document milestones and validation gates
• Keep a defined approval chain
• Link ownership to each deliverable
• Validate scope before progression
Waterfall fails when decisions move informally without a documented trace.

📙 Hybrid – both, but structured. Hybrid blends the speed of Agile with the clarity of Waterfall:
• Sprint cadence for momentum
• Monthly governance gates for alignment
• One single governance hub for decisions, RACI, risks, and changes
Hybrid fails when each team runs its own rules without a central structure.

Governance is not about choosing a method; it’s about structuring the method you choose. When governance matches delivery, teams stop fighting the system and start delivering clarity.

💬 What’s the biggest governance challenge you face in Agile, Waterfall, or Hybrid?
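The "single governance hub" idea in the Hybrid model can be made concrete. Here is a minimal sketch (all class and field names are hypothetical, not from the post): decisions are recorded sprint by sprint with a single accountable owner, and the monthly gate reads the same record the squads write to.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    title: str
    accountable: str        # the single "A" in a RACI assignment
    responsible: list       # the "R" roles doing the work
    sprint: int             # captured sprint by sprint, not at phase end

@dataclass
class GovernanceHub:
    """One hub for decisions, ownership, and risks, shared by squads and gates."""
    decisions: list = field(default_factory=list)
    risks: list = field(default_factory=list)

    def record(self, decision: Decision) -> None:
        self.decisions.append(decision)

    def gate_report(self, up_to_sprint: int):
        """Monthly gate view: every decision so far and its accountable owner."""
        return [f"{d.title} -> {d.accountable}"
                for d in self.decisions if d.sprint <= up_to_sprint]

hub = GovernanceHub()
hub.record(Decision("Adopt SSO provider", "IT Lead", ["Platform squad"], sprint=3))
print(hub.gate_report(up_to_sprint=4))   # ['Adopt SSO provider -> IT Lead']
```

The point of the sketch is the design choice, not the code: squads never write status into a second system for the gate, so the gate can never drift from delivery.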
Process Governance Models
Explore top LinkedIn content from expert professionals.
Summary
Process governance models are frameworks that guide how organizations structure, oversee, and manage their workflows, ensuring clear accountability, compliance, and alignment with organizational goals. These models help teams choose the right rules and controls based on their methodology or technology, such as Agile, Waterfall, or AI systems.
- Match governance: Align your governance structure with the specific delivery method—such as Agile, Waterfall, or Hybrid—to avoid confusion and delays.
- Sequence controls: Layer different governance arrangements, like ethics, data management, and risk, to build a strong foundation before executing projects.
- Adapt oversight: Adjust governance requirements for emerging technologies like AI or machine learning to address unique risks and changing regulatory demands.
🧩 I’ve heard the objection more than once: “ISO42001 is just a management system standard. It’s not governance.” I do get it and appreciate the pushback. The structure mirrors other ISO standards: context, leadership, planning, performance, and that makes it easy to assume it’s focused on operational control. But if you stop there, you miss the bigger picture.

➡️ What does “governance” mean? ISACA’s #COBIT defines governance through the lens of three processes: Evaluate, Direct, and Monitor. This isn’t the same as managing daily activities. Governance is about setting purpose, overseeing accountability, and ensuring the organization stays aligned with stakeholder expectations. Here’s the key: #ISO42001 requires all three of those governance functions.

🔸 Evaluate – Clause 4 requires organizations to understand external pressures, stakeholder concerns, and the purpose of the AI systems they use. Clause 6 builds on this with AI-specific risk assessments and AI system impact assessments. These aren’t simple check-the-box activities; they’re structured mechanisms to evaluate implications before action.

🔸 Direct – Clause 5 puts top management on the hook to establish AI policies, assign roles, and make sure AI initiatives align with organizational objectives. This is how strategic intent gets defined and reinforced.

🔸 Monitor – Clause 9 introduces internal audits, performance evaluation, and management reviews. Clause 10 brings in continual improvement and corrective action. This isn’t “set it and forget it.” These are the feedback loops that keep the governance system responsive.

But yes, there’s clearly also management, and ISO42001 is very explicit about what it expects on that front. Management activities show up across Clauses 6, 7, and 8:

🔹 6.1 requires planning for AI-specific risks and opportunities, not just identifying them but taking action and integrating those actions into the system.
🔹 7.2–7.4 cover resourcing, competence, awareness, and communication. These are core management responsibilities that support operational execution.
🔹 8.1–8.4 go deeper into operational control of AI systems, requiring lifecycle planning, system-specific risk treatments, and validation of AI system impact. These are management-level processes that carry out the strategy, policy, and oversight defined at the governance layer.

So no, despite its name, ISO42001 is not just a management system standard. It is a governance system that includes and directs management activities. It’s a value creation tool. If you’ve worked with COBIT before, you’ll recognize the pattern: Evaluate, Direct, and Monitor sit at the top, while the APO, BAI, DSS, and MEA processes carry out and sustain the system underneath. The structure is deliberate. Governance drives management. Management executes governance. When we understand both layers, we stop looking at ISO42001 as just an operational tool and start recognizing it as the system of record for AI oversight.
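The clause-to-function mapping argued above is compact enough to write down directly. A sketch, using the clause numbers cited in the post; the grouping under COBIT's Evaluate-Direct-Monitor lens is illustrative, not a normative crosswalk:

```python
# ISO/IEC 42001 clauses grouped under COBIT's three governance functions,
# plus the management clauses that execute what governance directs.
EDM_MAP = {
    "Evaluate": ["Clause 4 (context, stakeholders, purpose of AI systems)",
                 "Clause 6 (AI risk assessment, AI system impact assessment)"],
    "Direct":   ["Clause 5 (leadership, AI policy, roles and objectives)"],
    "Monitor":  ["Clause 9 (internal audit, performance evaluation, review)",
                 "Clause 10 (continual improvement, corrective action)"],
}
MANAGEMENT = ["6.1 planning and integrating risk actions",
              "7.2-7.4 resourcing, competence, awareness, communication",
              "8.1-8.4 operational control of AI systems"]

def governance_coverage(mapping: dict) -> bool:
    """The post's core claim: all three EDM functions must be present."""
    return all(mapping.get(f) for f in ("Evaluate", "Direct", "Monitor"))

print(governance_coverage(EDM_MAP))  # True
```

A management-only reading of the standard would leave one of the three keys empty, and `governance_coverage` would fail, which is exactly the objection the post is answering.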
-
AI governance sounds boring until your model halts production. Or leaks customer data. Or makes a biased hiring decision. We built AI governance from scratch last year. Here's the framework that keeps us compliant, ethical, and fast: the AI Governance Pyramid. Five layers. Most teams skip straight to the top. That's why their AI implementations fail audits, break trust, or get shut down.

Layer 1 (Foundation): Ethics & Principles. This is your "why we use AI" layer. Define your red lines before you build anything. What won't you automate? What decisions require humans? What bias are you willing to tolerate (spoiler: none)? We documented ours in a 2-page ethics charter. Every AI project gets measured against it. If it violates the charter, we don't build it. No exceptions.

Layer 2: Data Governance. AI is only as good as your data. And your data is probably a mess. Where does it come from? Who owns it? How long do you keep it? What can't you use? We created a data classification system: Public. Internal. Confidential. Restricted. Each AI model gets assigned a data tier. If you need restricted data, you need executive approval.

Layer 3: Risk & Compliance. This is where legal and security teams get involved. What regulations apply? GDPR? CCPA? Industry-specific rules? What happens if the AI makes a wrong decision? We run a risk assessment on every AI project. Low risk = fast approval. High risk = board review. Most teams skip this layer. Then spend months fixing compliance issues after launch.

Layer 4: Operational Standards. How do you actually build and deploy AI safely? Model testing protocols. Version control. Access permissions. Monitoring and alerts. We created AI deployment checklists. No model goes live without passing every checkpoint. This layer is boring. It's also what prevents disasters.

Layer 5 (Peak): Execution & Innovation. This is where most teams start. "Let's build a chatbot." "Let's automate this workflow." But without the four layers underneath, you're building on sand. When you have the foundation, execution is fast. You know what's allowed. You know how to build safely. You know how to scale without breaking things.

Here's what we learned. Most AI failures aren't technical failures. They're governance failures. Someone skipped a layer. Someone didn't document data sources. Someone didn't assess risk. The pyramid looks slow. It's actually what lets you move fast without breaking everything. Which layer does your org skip?

Found this helpful? Follow Arturo Ferreira and repost ♻️
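Layers 2 and 3 of the pyramid are effectively routing rules, so they can be sketched as one. The tier names come from the post; the numeric risk threshold is an assumption added for illustration:

```python
# Data tiers from the post's classification system (Layer 2).
DATA_TIERS = ["public", "internal", "confidential", "restricted"]

def approval_path(data_tier: str, risk_score: int) -> str:
    """Route an AI project to its approval path before anything is built."""
    if data_tier not in DATA_TIERS:
        raise ValueError(f"unknown tier: {data_tier}")
    if data_tier == "restricted":
        return "executive approval"    # Layer 2 rule: restricted data escalates
    if risk_score >= 7:                # threshold is a hypothetical cutoff
        return "board review"          # Layer 3: high risk
    return "fast approval"             # Layer 3: low risk

print(approval_path("internal", 2))      # fast approval
print(approval_path("restricted", 2))    # executive approval
print(approval_path("confidential", 9))  # board review
```

Encoding the routing this way is what makes "low risk = fast approval" real: the fast path exists because the escalation paths are explicit, not because review was skipped.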
-
Does governing traditional software require the same controls as governing machine‑learning models? Governance for traditional software and #machinelearning (ML) models differs because of their core principles: traditional software is deterministic, while ML models are probabilistic. As a result, governance requirements vary in areas such as validation, risk management, lifecycle control, explainability, human oversight, and change control. Traditional software relies on fixed validation and explicit procedural oversight. In contrast, ML governance requires ongoing validation, monitoring for performance and data drift, formal explainability, and structured human oversight to address emerging risks and uncertainties. An effective governance framework combines both approaches to address the challenges posed by deterministic software and evolving ML systems. It includes five layers: foundational governance for all systems, software governance for deterministic software, ML governance for ML models, integrated controls for hybrid systems, and continuous assurance for ML performance monitoring and regulatory compliance. https://lnkd.in/etph_TfH
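The ML-specific control named above, ongoing monitoring for data drift, is the clearest point of difference from deterministic software. A toy sketch of one such check (a deliberately simple stand-in for PSI- or KS-style drift tests, with made-up numbers): flag drift when a feature's live mean moves more than k standard deviations from its training baseline.

```python
import statistics

def drift_alert(baseline, live, k: float = 2.0) -> bool:
    """Flag drift when the live mean shifts > k baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) > k * sigma

train = [10.0, 11.0, 9.5, 10.5, 10.2]    # feature values at training time
stable = [10.1, 10.4, 9.9]               # live values, same distribution
shifted = [14.0, 15.2, 14.8]             # live values after the world changed

print(drift_alert(train, stable))   # False
print(drift_alert(train, shifted))  # True
```

Traditional software governance has no equivalent of this loop: a validated deterministic system stays validated, while an ML model can silently degrade, which is why the post's framework adds continuous assurance as its own layer.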
-
📄 New paper: Orchestrating and Designing Data Collaboratives: What Governance Model is Fit for Purpose?

I get asked this a lot:
👉 What’s the difference between a data trust, a data union, a data commons…?
👉 And more importantly—when should you use which?

Too often, these models are treated as competing “solutions.” But that framing misses the point. In reality, they reflect different governance logics—and each is designed to solve a specific coordination, agency, or collective action problem in data ecosystems. For instance:
• Data intermediaries → reduce transaction costs
• Data unions → rebalance power
• Data trusts → address legitimacy deficits
• Data commons → enable collective governance
• Data cooperatives → redistribute ownership and agency
• Data sandboxes → manage uncertainty
• Data spaces → enable scaling and interoperability

So the real question is not:
❌ Which model is best?
But rather:
✅ Which model is fit for purpose—given the problem you are trying to solve?

That’s why I wrote this short paper. It proposes a purpose-driven typology and argues for moving beyond “institutional choice” toward institutional orchestration—where multiple models coexist and evolve within the same ecosystem. 👉 Because in practice, mature data ecosystems don’t rely on a single model—they layer and sequence governance arrangements over time. (And that’s where strategic data stewardship becomes essential.)

📖 Read the paper here: https://lnkd.in/eyT9e4gV
🤔 Curious how others are navigating this: What governance model have you seen work—and why?

#data #datagovernance #governance #dataspaces #intermediaries
-
🔷 Federated Governance: Balancing Control & Autonomy in the Cloud

The biggest challenge in cloud and AI adoption isn’t technology; it’s balancing central IT control with the autonomy engineering teams need to move fast. Fully centralised models slow innovation. Fully decentralised models create chaos. The solution is Federated Governance, where central IT sets the guardrails and product teams innovate freely inside a secure, pre-approved environment. Google Cloud’s folder → project hierarchy makes this model possible.

Central IT controls:
✅ Org-level policies
✅ Identity & access models
✅ Security, networking, encryption
✅ Guardrails-as-code
✅ FinOps budgets & quotas

Teams control:
🚀 CI/CD
🚀 Microservices
🚀 Runtime config
🚀 AI/ML deployment
🚀 Observability & SLOs

This model gives you:
• Fast delivery
• Strong compliance
• Clear ownership
• Standardisation without bottlenecks
• Innovation without risk

Federated governance doesn’t reduce control; it operationalises control in a way that accelerates the entire organisation. https://lnkd.in/gQ-NniYC

#CloudGovernance #GoogleCloud #FederatedGovernance #AIGovernance #CCoE #PlatformEngineering #EnterpriseArchitecture #CAIO #CDO #DigitalTransformation
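The "guardrails-as-code" idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (policy names and values are invented, not Google Cloud org policy syntax): central IT defines constraints once, every team deployment is checked against them, and everything inside the guardrails is the team's own call.

```python
# Central IT defines these once, at the org/folder level (values hypothetical).
CENTRAL_GUARDRAILS = {
    "allowed_regions": {"europe-west1", "europe-west4"},
    "encryption_required": True,
    "max_monthly_budget": 5000,
}

def within_guardrails(request: dict) -> bool:
    """A team's deployment request passes only the central checks;
    runtime, CI/CD, and service choices inside them are never inspected."""
    g = CENTRAL_GUARDRAILS
    return (request["region"] in g["allowed_regions"]
            and request["encrypted"] is g["encryption_required"]
            and request["budget"] <= g["max_monthly_budget"])

print(within_guardrails({"region": "europe-west1", "encrypted": True, "budget": 1200}))  # True
print(within_guardrails({"region": "us-east1", "encrypted": True, "budget": 1200}))      # False
```

This is the sense in which federated governance "operationalises" control: the check is automatic and identical for every team, so compliance stops being a review meeting and becomes a property of the platform.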
-
💡 Design System Governance Models

Design system governance models help organizations manage and maintain design systems across teams and products. There are three popular models—Solitary, Centralized, and Federated—and each offers a different approach to how design systems are governed within an organization.

1️⃣ Solitary model (Standalone)
In the solitary model, each team, project, or department creates and maintains its own design system independently.
Benefits:
✔ Autonomy and flexibility: Teams can design for their unique needs without waiting for approvals or alignment.
✔ Quick iteration: Changes can be implemented without the need to coordinate with other teams.
Downsides:
✖ Inconsistency: The lack of a unified system can lead to inconsistent user experiences across products.
✖ Duplication of effort: Different teams may end up solving the same problems in different ways, wasting resources.
✖ Lack of scalability: As the organization grows, maintaining multiple systems becomes inefficient and difficult to manage.
The solitary model is best for early-stage startups or small organizations with highly specialized product needs.

2️⃣ Centralized model
In the centralized model, a single team (often a DesignOps team) is responsible for creating, managing, and governing the design system. All teams within the organization must use this system.
Benefits:
✔ Consistency: The centralized model ensures a uniform design language and experience across all products and platforms.
✔ Quality control: A central team ensures adherence to standards, best practices, and quality benchmarks.
Downsides:
✖ Bottlenecks: The centralized team can become a bottleneck for requests, slowing down individual teams that need changes or new components.
✖ Limited customization: Teams with unique needs may find the centralized system too rigid or slow to adapt to their specific requirements.
The centralized model is ideal for organizations seeking consistency and efficiency, but it may introduce bottlenecks and lack flexibility for individual teams.

3️⃣ Federated model
In the federated model, multiple teams contribute to and maintain the design system.
Benefits:
✔ Balanced flexibility and consistency: Teams can customize components to fit their needs while still adhering to a common design language and guidelines.
✔ Shared ownership: Teams feel more invested in the design system, increasing adoption and engagement across the organization.
Downsides:
✖ Complex governance: Managing contributions from multiple teams can be challenging, especially in ensuring that changes align with the overall system’s vision and standards.
✖ Coordination overhead: Teams must coordinate their efforts to avoid duplication, miscommunication, or conflicting updates.
The federated model balances flexibility and consistency, fostering collaboration, but it requires robust governance and communication to avoid fragmentation.

🖼 Governance models by Nathan Curtis

#design #UI #designsystem
-
The Holistic Approach: Combining #BusinessProcessManagement with Value and #PerformanceManagement, #EnterpriseArchitecture, #Governance, and SOA

BPM, enterprise architecture, value management, and #ServiceOrientedArchitecture address similar topics, but from different perspectives, and enable different forms of performance and value creation:

- Enterprise architecture (EA) focuses on setting the framework for the business design and puts in place standards, guidelines, policies, and procedures for ensuring the design, integrity, and, if identified and planned, performance, value creation, and realization for the business as a whole.
- Business process management (BPM) focuses on the management of the business process lifecycle, outlining the way the organization can and will execute its competencies. True performance happens at the activity level, and therefore most value creation happens at this level. One of the real benefits of introducing BPM principles to your processes is that you can add the principle of continuous improvement to the process lifecycle.
- Value management (VM) adds the concept of the value lifecycle in the form of value planning, value identification, value creation, and value realization, and benchmarks on the operational and strategic level, thereby identifying cost-cutting and improvement potential. Doing this improves the process lifecycle and EA setup. It also materializes the concept of operational excellence by adding characteristics and metrics used for setting up performance measurement.
- Service-oriented architecture (SOA) focuses on providing the design principles for an application architecture based on reusable components (services) and a flexible orchestration layer, which are applied when performing the solution transformation from business process requirements to the supporting IT solution.
- Governance focuses on continuously applying the principles in a structured and managed fashion.

Governance is applied on all levels of the enterprise, and harmonization should be achieved between business, process, and IT governance. The different perspectives overlap in topic but not in content. They support each other, and by harmonizing the governance of these perspectives, they will add value to one another and improve the quality of the individual improvement cycles. The same governance principles should be applied to the business model, business processes, value and performance management, and realization in the IT domain. Furthermore, harmonization of these perspectives aligns business and IT initiatives because they are based on common standards, policies, and procedures and a shared orientation on the business processes.

Source: Excerpt from the book Applying Real World BPM in a SAP Environment, by Mark von Rosing, Robert Eijpe, Caspar Laar, Ann Rosenberg, and Sascha Kuhlmann © via Raj Grover, https://lnkd.in/d6EQ5d8Y
-
🏗️ 𝗧𝗵𝗲 𝗵𝗶𝗱𝗱𝗲𝗻 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗼𝗳 𝗽𝗿𝗼𝗷𝗲𝗰𝘁 𝘀𝘂𝗰𝗰𝗲𝘀𝘀 After analysing hundreds of major project failures, researchers discovered something fascinating: the projects that succeeded weren't necessarily better planned, better funded, or led by more experienced teams. They had better governance architecture. 🔍 𝗧𝗵𝗲 𝗚𝗮𝗺𝗲-𝗖𝗵𝗮𝗻𝗴𝗲𝗿 The UK's Infrastructure and Projects Authority developed the "V Diagram"—a framework that maps governance across two dimensions simultaneously: 📅 Time dimension: Strategy → Planning → Delivery → Benefits 🏢 Organizational dimension: Corporate → Project → Delivery teams ⚡ 𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀 Most project failures aren't technical—they're governance failures. Benefits get eroded through thousands of small decisions that drift from original objectives. The V Diagram creates "governance continuity"—ensuring every decision, at every level, stays aligned with strategic intent. 🎯 𝗧𝗵𝗲 𝗥𝗲𝘀𝘂𝗹𝘁𝘀 Organizations using this approach report: ✅ 40% reduction in scope creep ✅ 60% faster decision-making ✅ 85% improvement in stakeholder alignment ✅ Better benefits realization 💡 𝗞𝗲𝘆 𝗜𝗻𝘀𝗶𝗴𝗵𝘁 Governance isn't a layer you add to projects—it's the connective tissue that links all decisions to strategic outcomes. 🤔 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻 𝗳𝗼𝗿 𝘆𝗼𝘂: How often have you seen projects deliver on time and budget but fail to achieve their transformational impact? The framework is freely available through the UK Government's Project Routemap initiative. What's your biggest governance challenge? 👇 Source: https://lnkd.in/ecZ2kFxR #𝗣𝗿𝗼𝗷𝗲𝗰𝘁𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 #𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀𝗥𝗲𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 #𝗣𝗿𝗼𝗷𝗲𝗰𝘁𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 #Leadership