Key Metrics for Evaluating Workflow Success

Summary

Key metrics for evaluating workflow success help organizations understand whether their processes are truly delivering value, not just being completed on schedule. By measuring factors like adoption, impact, and user satisfaction, teams can tell if their workflow changes are making a real difference in day-to-day operations.

  • Track adoption rates: Pay attention to how many people are actively using new processes or tools, not just whether they were launched.
  • Assess impact: Look for improvements in quality, productivity, or efficiency to see if workflows are translating into real results.
  • Measure user experience: Gather feedback to find out if workflows make tasks easier and help people do their jobs better.
  • Chris Clevenger
    Leadership • Team Building • Leadership Development • Team Leadership • Lean Manufacturing • Continuous Improvement • Change Management • Employee Engagement • Teamwork • Operations Management

    "You can’t manage what you don’t measure." Yet, when it comes to change management, most leaders focus on what was implemented rather than what actually changed. Early in my career, I rolled out a company-wide process improvement initiative. On paper, everything looked great - we met deadlines, trained employees, and ticked every box. But six months later, nothing had actually changed. The old ways crept back, employees reverted to previous habits, and leadership questioned why results didn’t match expectations. The problem? We measured completion, not adoption. 𝗖𝗼𝗻𝗰𝗲𝗿𝗻: Many organizations struggle to gauge whether change efforts truly make an impact because they rely on surface-level indicators: → Completion rates instead of adoption rates → Project timelines instead of performance improvements → Implementation checklists instead of employee sentiment This approach creates a dangerous illusion of progress while real behaviors remain unchanged. 𝗖𝗮𝘂𝘀𝗲: Why does this happen? Because leaders focus on execution instead of outcomes. Common pitfalls include: → Lack of accountability – No one tracks whether new processes are being followed. → Insufficient feedback loops – Employees don’t have a voice in measuring what works. → Over-reliance on compliance – Just because something is mandatory doesn’t mean it’s effective. If we want real, measurable change, we need to rethink what success looks like. 𝗖𝗼𝘂𝗻𝘁𝗲𝗿𝗺𝗲𝗮𝘀𝘂𝗿𝗲: The solution? Focus on three key change management success metrics: → 𝗔𝗱𝗼𝗽𝘁𝗶𝗼𝗻 𝗥𝗮𝘁𝗲 – How many employees are actively using the new system or process? → 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 𝗜𝗺𝗽𝗮𝗰𝘁 – How has efficiency, quality, or productivity changed? → 𝗨𝘀𝗲𝗿 𝗦𝗮𝘁𝗶𝘀𝗳𝗮𝗰𝘁𝗶𝗼𝗻 – Do employees feel the change has made their work easier or harder? By shifting from "Did we implement the change?" to "Is the change delivering results?", we turn short-term projects into long-term transformation. 𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀: Organizations that measure change effectively see: → Higher engagement – Employees feel heard, leading to stronger buy-in. → Stronger accountability – Leaders track impact, not just completion. → Sustained improvement – Change becomes embedded in the culture, not just a temporary initiative. "Change isn’t a box to check—it’s a shift to sustain. Measure adoption, not just action, and you’ll see the impact last." How does your organization measure the success of change initiatives? If you’ve used adoption rate, performance impact, or user satisfaction, which one made the biggest difference for you? Wishing you a productive, insightful, and rewarding Tuesday! Chris Clevenger #ChangeManagement #Leadership #ContinuousImprovement #Innovation #Accountability

  • Brij kishore Pandey
    AI Architect & Engineer | AI Strategist

    Over the last year, I’ve seen many people fall into the same trap: they launch an AI-powered agent (chatbot, assistant, support tool, etc.) but only track surface-level KPIs, like response time or number of users. That’s not enough. To create AI systems that actually deliver value, we need holistic, human-centric metrics that reflect:
    • User trust
    • Task success
    • Business impact
    • Experience quality

    This infographic highlights 15 essential dimensions to consider:
    ↳ Response Accuracy — Are your AI answers actually useful and correct?
    ↳ Task Completion Rate — Can the agent complete full workflows, not just answer trivia?
    ↳ Latency — Response speed still matters, especially in production.
    ↳ User Engagement — How often are users returning or interacting meaningfully?
    ↳ Success Rate — Did the user achieve their goal? This is your north star.
    ↳ Error Rate — Irrelevant or wrong responses? That’s friction.
    ↳ Session Duration — Longer isn’t always better; it depends on the goal.
    ↳ User Retention — Are users coming back after the first experience?
    ↳ Cost per Interaction — Especially critical at scale. Budget-wise agents win.
    ↳ Conversation Depth — Can the agent handle follow-ups and multi-turn dialogue?
    ↳ User Satisfaction Score — Feedback from actual users is gold.
    ↳ Contextual Understanding — Can your AI remember and refer to earlier inputs?
    ↳ Scalability — Can it handle volume without degrading performance?
    ↳ Knowledge Retrieval Efficiency — This is key for RAG-based agents.
    ↳ Adaptability Score — Is your AI learning and improving over time?

    If you’re building or managing AI agents, bookmark this. Whether it’s a support bot, GenAI assistant, or a multi-agent system, these are the metrics that will shape real-world success.

    Did I miss any critical ones you use in your projects? Let’s make this list even stronger — drop your thoughts 👇
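Several of these dimensions fall straight out of interaction logs. The sketch below computes task completion rate, p95 latency, average cost per interaction, and conversation depth; the record schema (task_done, latency_ms, cost_usd, turns) is an assumption for illustration, not a standard.

```python
from statistics import quantiles

# Assumed per-interaction log records; real systems would pull thousands.
interactions = [
    {"task_done": True,  "latency_ms": 820,  "cost_usd": 0.004, "turns": 3},
    {"task_done": False, "latency_ms": 1410, "cost_usd": 0.006, "turns": 7},
    {"task_done": True,  "latency_ms": 650,  "cost_usd": 0.003, "turns": 2},
    {"task_done": True,  "latency_ms": 980,  "cost_usd": 0.005, "turns": 4},
]

n = len(interactions)
task_completion_rate = sum(i["task_done"] for i in interactions) / n
p95_latency = quantiles([i["latency_ms"] for i in interactions], n=20)[-1]
cost_per_interaction = sum(i["cost_usd"] for i in interactions) / n
avg_depth = sum(i["turns"] for i in interactions) / n

print(f"Task completion: {task_completion_rate:.0%}, p95 latency: {p95_latency:.0f} ms")
print(f"Cost/interaction: ${cost_per_interaction:.4f}, avg depth: {avg_depth:.1f} turns")
```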

  • Gayatri Agrawal
    Building AI transformation company @ ALTRD

    Everyone’s excited to launch AI agents. Almost no one knows how to measure if they’re actually working.

    Over the last year, we’ve seen brands launch everything from GenAI assistants to support bots to creative copilots, but the post-launch metrics often look like this:
    • Number of chats
    • Average latency
    • Session duration
    • Daily active users
    Useful? Yes. But sufficient? Not even close.

    At ALTRD, we’ve worked on AI agents for enterprises, and if there’s one lesson, it’s this: speed and usage mean nothing if the agent isn’t solving the actual problem. The real performance indicators are far more nuanced. Here’s what we’ve learned to track instead:
    🔹 Task Completion Rate — Can the AI go beyond answering a question and actually complete a workflow?
    🔹 User Trust — Do people come back? Do they feel confident relying on the agent again?
    🔹 Conversation Depth — Is the agent handling complex, multi-turn exchanges with consistency?
    🔹 Context Retention — Can it remember prior interactions and respond accordingly?
    🔹 Cost per Successful Interaction — Not just cost per query, but cost per outcome. Massive difference.

    One of our clients initially celebrated their bot’s 1 million+ sessions, until we uncovered that less than 8% of users actually got what they came for. That low success rate wasn’t a usage issue. It was a design and evaluation issue. They had optimized for traffic. Not trust. Not success. Not satisfaction.

    So we rebuilt the evaluation framework, adding feedback loops, success markers, and goal-completion metrics. The results?
    • CSAT up by 34%
    • Drop-off down by 40%
    • Same infra cost, 3x more value delivered

    The takeaway: don’t just measure what’s easy. Measure what matters. AI agents aren’t just tools, they’re touchpoints. They represent your brand, shape user experience, and influence business outcomes.

    P.S. What’s one underrated metric you’ve used to evaluate AI performance? Curious to learn what others are tracking.
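The 8% figure above shows why cost per successful interaction diverges so sharply from cost per query. A minimal sketch of the arithmetic; the $50,000 total spend is an invented assumption, while the session count and 8% success rate echo the example in the post:

```python
sessions = 1_000_000
goal_completions = int(sessions * 0.08)  # only 8% of users got what they came for
total_cost_usd = 50_000.0                # assumed infra + inference spend

cost_per_session = total_cost_usd / sessions
cost_per_success = total_cost_usd / goal_completions

print(f"Cost per session: ${cost_per_session:.4f}")  # $0.0500, looks cheap
print(f"Cost per success: ${cost_per_success:.4f}")  # $0.6250, 12.5x higher: the real unit economics
```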

  • Mike Rizzo
    Certifying the future of GTM professionals. Community-led Founder & CEO @ MarketingOps.com and MO Pros® - where 4,000+ Marketing Operations, GTM Ops, and Revenue Ops professionals architect revenue growth.

    If the only metric your exec team cares about is pipeline created, then they’re not seeing the full picture.

    C-level dashboards do tell a story, I agree. But usually the wrong one, at the wrong resolution, with the wrong cause-and-effect logic. And then... they ask Marketing Ops to “make the numbers better.”
    → Without changing the inputs.
    → Without cleaning the data.
    → Without aligning the teams.

    Here’s what you should be tracking instead:
    → Not just pipeline velocity—pipeline quality
    → Not just cost per lead—cost per aligned buyer
    → Not just attribution—contribution clarity

    3 Metrics Marketing Ops Should Own (And Execs Need to Learn How to Interpret):

    1. Lag-to-Lead Time
    How long does it take from first lead capture to actual opportunity creation? If it’s bloated, no campaign will fix it.
    → Root cause: CRM architecture, scoring logic, lack of sales follow-up rhythm.

    2. Operational Win Rate
    Forget sales win rate. Measure the qualified-opportunity-to-closed ratio for GTM feedback. This tells you: are we targeting the right personas? Are we delivering them at the right stage of readiness?

    3. System Hygiene Score
    This isn’t sexy, but it saves millions in burn:
    → % of contacts with missing data
    → % of workflows with broken logic
    → % of platforms not integrated with the source of truth

    Ops shouldn’t just report on performance. We should report on the system that delivers performance. You can’t scale what you can’t explain. And you can’t explain what you refuse to measure. It’s time we stop dumbing down dashboards and start training up leadership.

    #MarketingOps #RevOps #MetricsThatMatter #GTMStrategy #OpsLeadership #ExecutiveReporting
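As a rough illustration of a System Hygiene Score, here is a sketch that folds the three percentages above into a single number. The equal weighting, CRM field names, and counts are all assumptions rather than a published formula:

```python
# Assumed CRM contact records; None marks a missing field.
contacts = [
    {"email": "a@x.com", "persona": "ops",  "source": "webinar"},
    {"email": "b@x.com", "persona": None,   "source": None},
    {"email": None,      "persona": "exec", "source": "paid"},
]
required = ("email", "persona", "source")

def pct_missing(records, fields):
    """Fraction of required cells that are empty across all records."""
    cells = [r[f] is None for r in records for f in fields]
    return sum(cells) / len(cells)

missing = pct_missing(contacts, required)
broken_workflows = 4 / 52   # assumed: 4 of 52 automations fail a logic audit
unintegrated = 2 / 11       # assumed: 2 of 11 platforms off the source of truth

# Simple equal-weight score: 100% means a perfectly clean system.
hygiene_score = 1 - (missing + broken_workflows + unintegrated) / 3
print(f"Missing data: {missing:.0%}, hygiene score: {hygiene_score:.0%}")
```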

  • Sol Rashidi, MBA

    Here’s what I’m seeing everywhere: AI is making teams faster, but are we making them stronger? AI is making us more productive, but are we becoming more capable? We might be able to do more, but is the ‘more’ translating to ‘more valuable’? And traditional metrics can’t tell the difference.

    That’s why, after 200+ deployments observing the same issues, combined with recent stats that 88% of all AI projects stall, cancel, or pause at POC, and MIT recently stating that only 5% of GenAI projects succeed, I invented The Human Amplification Index™ (© 2025 Sol Rashidi. All rights reserved.). We need a way to measure whether AI is making our business and people more valuable or just making them busier. Here’s what the product tracks:

    1. We measure the strength of your workFUNCTION™ before and after AI (© 2025 Sol Rashidi. All rights reserved.).
    It tells you how much of your team’s time is spent on what they were actually hired to do. Most teams I assess are operating at 40-60% of their intended function. The rest? Emergency fixes, escalations, triaging, broken-process workarounds, and administrative busywork that has nothing to do with their core expertise. Before you implement AI, measure this baseline. Then track how AI shifts this equation.

    2. We measure the strength of your workFLOW™ efficiency before and after AI (© 2025 Sol Rashidi. All rights reserved.).
    This isn’t about speed, it’s about friction.
    - How many hoops do your people jump through to complete basic tasks?
    - How many disconnected tools do they toggle between?
    - How much manual work exists because systems don’t talk to each other?
    AI should remove friction, not just accelerate it. So build a baseline and measure how it improves with AI.

    3. We measure the strength of your workFORCE™ before and after AI (© 2025 Sol Rashidi. All rights reserved.).
    When you hired each person, you saw the unique value they could bring. How much of that potential are you actually accessing? If AI is handling routine tasks but your people are still stuck in the weeds instead of contributing their highest-value thinking, you’ve got an amplification problem.

    The companies that figure this out will separate themselves dramatically from those that don’t. While most leaders are asking “Are we more efficient?”, the better question is: “Are our people able to contribute more of their unique human value because AI is handling everything else?” When you measure work-function strength, workflow efficiency, and workforce amplification, you’re measuring your true capacity for sustainable growth. That’s the difference between using AI as a tool and using AI to amplify human potential.

    What’s your experience? Are your teams becoming more capable, or just busier?
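The Human Amplification Index™ itself is proprietary, so the sketch below is only a generic illustration of the baseline in point 1: the share of a team's time spent on the work it was hired to do, measured before and after AI. Categories, hours, and the split between core and non-core work are invented stand-ins:

```python
def function_strength(hours_by_activity: dict[str, float], core: set[str]) -> float:
    """Fraction of total hours spent on the activities the team was hired for."""
    total = sum(hours_by_activity.values())
    return sum(h for a, h in hours_by_activity.items() if a in core) / total

core_work = {"analysis", "design"}  # assumed "intended function" for this team
before = {"analysis": 12, "design": 6, "escalations": 10, "admin": 12}
after  = {"analysis": 20, "design": 10, "escalations": 4, "admin": 6}

print(f"Before AI: {function_strength(before, core_work):.0%} of time on core work")  # 45%
print(f"After AI:  {function_strength(after, core_work):.0%}")                        # 75%
```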

  • Brianna Bentler
    I help owners and coaches start with AI | AI news you can use | Women in AI

    Your business doesn’t need another chatbot. It needs an agent that owns a result.

    Most teams bought “answers.” Operators need outcomes. Agentic AI isn’t Q&A. It’s plan → act → check → escalate until done. Start where it pays back fast: one workflow with a clear finish line. Missed-call follow-up. Intake routing. Weekly ops recap.

    System (operator edition):
    ✅ Role & goal: one job, one KPI (ex: reduce exceptions to <15%)
    ✅ Tools: the 3–5 it must touch (CRM, docs, email/SMS, ledger, search)
    ✅ Guardrails: rate limits, retries, human stop, audit log
    ✅ Memory: retrieval from approved sources with permissions
    ✅ Loop: plan → act → verify → write the record
    ✅ Escalation: “can’t complete” triggers owner + context bundle

    Proof you can measure (beyond “time saved”):
    ✅ Reasoning accuracy (grounded & cited)
    ✅ Autonomy rate vs. human handoffs
    ✅ Cycle time per case, not per click
    ✅ CX deltas: fewer repeat questions, faster resolutions

    Build vs. buy vs. hybrid is a platform call, not a tool swipe. If your APIs, logging, and sandbox aren’t ready, pilot first: small scope, real metric.

    New habits for managers:
    ✅ Assign an owner per flow
    ✅ Set a pass bar before go-live
    ✅ Review exceptions weekly, promote what works

    Bottom line: move from “answers in threads” to “outcomes in systems.” Artifact or it didn’t happen: if the agent didn’t write to the system of record, it didn’t ship.
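Here is a bare-bones sketch of that plan → act → verify → escalate loop with a bounded-retry guardrail and an audit log. Every function and field name is a hypothetical stand-in, not a real framework API:

```python
MAX_ATTEMPTS = 3  # guardrail: bounded retries, then a human stop

def run_flow(case, plan, act, verify, escalate, audit_log):
    """Drive one case to a verified outcome, or escalate with full context."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        result = act(plan(case))                 # plan → act
        audit_log.append({"case": case["id"], "attempt": attempt, "result": result})
        if verify(case, result):                 # check against the pass bar
            return result                        # done: act() wrote the record
    # "can't complete" triggers the owner with a context bundle
    escalate(owner="flow-owner", context={"case": case, "log": audit_log})
    return None

# Toy usage: an agent that fails once, then succeeds on the second attempt.
log, tries = [], iter([False, True])
out = run_flow(
    {"id": "case-42"},
    plan=lambda c: ["draft reply", "send SMS"],
    act=lambda steps: {"ok": next(tries)},
    verify=lambda c, r: r["ok"],
    escalate=lambda owner, context: print("escalated to", owner),
    audit_log=log,
)
print(out, len(log))  # {'ok': True} 2
```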

  • Vitaly Friedman
    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    🔮 UX Metrics and KPIs Cheatsheet (Figma) (https://lnkd.in/en9MK4MD) is a helpful reference sheet for UX metrics, with formulas and examples — for brand score, desirability, loyalty, satisfaction, sentiment, success, usefulness and many others. Neatly put together in one single place by the fine folks at Helio Glare.

    To me personally, measuring UX success comes down to just a few key attributes: how successful users are in completing their key tasks, how many errors users experience along the way, and how quickly users get through onboarding to first meaningful success. The context of the project will of course require specific, custom metrics — e.g. search quality score, brand score, engagement score, or loyalty — but UX metrics are all about delivering value to users through their successes. Here are some examples:

    1. Top tasks success > 80% (for critical tasks)
    2. Time to complete top tasks < Xs (for critical tasks)
    3. Time to first success < 90s (for onboarding)
    4. Time to candidates < 120s (nav + filtering in eCommerce)
    5. Time to top candidate < 120s (for feature comparison)
    6. Time to hit the limit of a free tier < 7d (for upgrades)
    7. Presets/templates usage > 80% per user (to boost efficiency)
    8. Filters used per session > 5 per user (quality of filtering)
    9. Feature adoption rate > 30% (usage of a new feature per user)
    10. Feature retention rate > 40% (after 90 days)
    11. Time to pricing quote < 2 weeks (for B2B systems)
    12. Application processing time < 2 weeks (online banking)
    13. Default settings correction < 10% (quality of defaults)
    14. Relevance of top 100 search queries > 80% (for top 5 results)
    15. Service desk inquiries < 35/week (poor design → more inquiries)
    16. Form input accuracy ≈ 100% (user input in forms)
    17. Frequency of errors < 3/visit (mistaps, double-clicks)
    18. Password recovery frequency < 5% per user (for auth)
    19. Fake email addresses < 5% (newsletters)
    20. Helpdesk follow-up rate < 4% (quality of service desk replies)
    21. “Turn-around” score < 1 week (frustrated users → happy users)
    22. Environmental impact < 0.3g/page request (sustainability)
    23. Frustration score < 10% (AUS + SUS/SUPR-Q)
    24. System Usability Scale > 75 (usability)
    25. Accessible Usability Scale (AUS) > 75 (accessibility)

    Each team works with 3–4 design KPIs that reflect the impact of their work. The search team works with search quality score, the onboarding team with time to success, the authentication team with password recovery rate. What gets measured gets better. And it gives you the data you need to monitor and visualize the impact of your design work. Once it becomes second nature in your process, not only will you have an easier time getting buy-in, but you will also build enough trust to boost UX in a company with low UX maturity.

    [continues in comments ↓]

    #ux #design
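Thresholds like these are easy to wire into a recurring pass/fail check. A small sketch; the measured values are invented, while the four thresholds follow the list above:

```python
# Each target: (threshold, "min" = must exceed, "max" = must stay under).
targets = {
    "top_task_success":      (0.80, "min"),   # item 1: > 80%
    "time_to_first_success": (90.0, "max"),   # item 3: < 90 s
    "feature_adoption":      (0.30, "min"),   # item 9: > 30%
    "frustration_score":     (0.10, "max"),   # item 23: < 10%
}
# Assumed measurements from analytics and surveys.
measured = {
    "top_task_success": 0.84,
    "time_to_first_success": 112.0,
    "feature_adoption": 0.27,
    "frustration_score": 0.08,
}

for kpi, (threshold, kind) in targets.items():
    ok = measured[kpi] >= threshold if kind == "min" else measured[kpi] <= threshold
    print(f"{kpi:24s} {'PASS' if ok else 'FAIL'} ({measured[kpi]} vs {kind} {threshold})")
```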

  • Daniel Lock
    Change Director | Helping senior professionals turn their expertise into authority that pays.

    Everyone says “change is happening.” But how do you know it’s actually working?

    Change initiatives are easy to start. Harder to measure. Without clear indicators, leaders guess whether progress is real. And guesswork rarely works. Top change leaders track these metrics to stay ahead:

    1/ Achievement
    → How close did we get to our change goals?
    → Focus on learning first, then performance.
    Example: % of project milestones met vs. planned.

    2/ Completion
    → How well did we execute on schedule, scope, and budget?
    Example: tasks finished on time and within budget.

    3/ Acceptability
    → Stakeholder satisfaction with the process and the solution.
    Example: survey scores, qualitative feedback.

    4/ Engagement
    → How involved are teams and stakeholders in the change?
    Example: attendance in workshops, participation in feedback sessions.

    5/ Adoption
    → Are people actually using new systems, behaviors, or processes?
    Example: % of employees actively using a new tool or workflow.

    6/ Sustainability
    → Are changes sticking over time or fading?
    Example: reassess behaviors 3–6 months post-change.

    7/ Impact
    → The measurable difference on business outcomes.
    Example: efficiency gains, revenue growth, or error reduction.

    Stop hoping for progress. Start proving it.

    P.S. Which of these metrics do you track most closely in your change initiatives?

    📌 If you want a high-res PDF of this sheet:
    1. Follow Daniel Lock
    2. Like the post
    3. Repost to your network
    4. Subscribe to: https://lnkd.in/eB3C76jb
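Adoption (5) and Sustainability (6) are the pair most often measured once and then forgotten. A minimal sketch of the 3–6 month follow-up check; the percentages and the 20% pass bar are invented assumptions:

```python
adoption_at_launch = 0.72       # assumed: 72% of employees using the new workflow
adoption_after_6_months = 0.41  # assumed re-measurement, per metric 6/

# How much of the early adoption has faded, as a share of the launch level.
decay = (adoption_at_launch - adoption_after_6_months) / adoption_at_launch
sustained = decay < 0.20        # assumed pass bar: lose fewer than 20% of adopters

print(f"Adoption decayed {decay:.0%} over 6 months; sustained: {sustained}")
# Adoption decayed 43% over 6 months; sustained: False
```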

  • Ibrahim Elkishky
    Quality Manager | Performance Management | KPIs Implementation | Project Management | Change Management | Process Optimization | Strategic Planning | Lean Manufacturing

    Quality metrics are fundamental to effective quality management systems, shaping business performance. Here are six key metrics to enhance your operations:

    1. Quality Rate
    - Percentage of products/services meeting quality standards
    - A high rate indicates effective processes and satisfied customers
    - A low rate signals improvement opportunities

    2. Rework Percentage
    - Proportion of work requiring redoing due to defects/errors
    - A high percentage highlights process inefficiencies, leading to increased costs and resource wastage

    3. Defective Parts Per Million (DPPM)
    - Quantifies defective parts per million produced
    - Vital for manufacturers to spot defect trends and enhance production processes

    4. Defects Per Million Opportunities (DPMO)
    - Considers total defect opportunities for a comprehensive quality assessment
    - Helps organizations target specific areas for improvement and assess processes holistically

    5. Process Capability
    - Evaluates whether process output stays within defined limits
    - Aids in maintaining process consistency and meeting customer demands effectively

    6. Process Capability Index (Cpk)
    - Extends process capability analysis by measuring how well the process is centred within specifications
    - A higher Cpk signifies better performance and reduced process variability

    These metrics play a crucial role in driving continuous improvement and ensuring operational excellence.

    #QualityManagement #BusinessPerformance
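DPPM, DPMO, and Cpk all have standard textbook formulas. A short sketch with invented sample numbers:

```python
def dppm(defective_parts: int, parts_produced: int) -> float:
    """Defective parts per million produced."""
    return defective_parts / parts_produced * 1_000_000

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities: normalizes by chances to fail."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def cpk(mean: float, std_dev: float, lsl: float, usl: float) -> float:
    """Distance from the mean to the nearest spec limit, in units of 3 sigma."""
    return min((usl - mean) / (3 * std_dev), (mean - lsl) / (3 * std_dev))

print(f"DPPM: {dppm(42, 150_000):.0f}")              # 280 defective parts per million
print(f"DPMO: {dpmo(90, 10_000, 5):.0f}")            # 1800 defects per million opportunities
print(f"Cpk:  {cpk(10.02, 0.05, 9.85, 10.15):.2f}")  # ~0.87: off-center, needs tightening
```

A Cpk of 1.33 or higher is the usual shorthand for a capable process; the example above falls well short because the mean sits close to the upper limit.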

  • Karen Martin
    Business Performance Improvement | Operational Excellence | Lean Management | Strategy Deployment | Value Stream Transformation | Award-winning Author | Keynote Speaker | SaaS Founder

    This is a follow-on to my post last week about reducing friction to reduce the chaos that results from it. One of the most common forms of friction is the quality of the work being passed from one person or work team to another. Measuring quality at every step of a process is vital for seeing the truth about how processes and work systems have been designed—and where to focus one’s efforts to reduce friction.

    %C&A is the metric we call "the little beast." It stands for "Percent Complete & Accurate." Many of you already know about this metric since I frequently reference it in all of my books, and in many posts, keynotes, and @TKMG-Academy courses. The metric is obtained by asking downstream recipients of work, whether electronic/informational or physical products, what percentage of the time they can complete whatever they’re supposed to do without engaging in any form of rework. The three forms of rework are:
    🔸 Correcting information or physical product due to an error/mistake/defect
    🔸 Adding missing information that should/could have been supplied
    🔸 Clarifying information that should/could have been clearer to begin with

    Example: If someone reports that they engage in any of these three forms of rework in approximately 3 out of 10 instances of receiving work (or 30 out of 100, etc.), the %C&A for the upstream process is 70%.

    But . . . here’s where the "little beast" raises its head. VERY often, people report VERY low %C&As, as in 10% quality received—or 0%, meaning 100% rework! Poor quality can come from external customers or internal teams, as you can see in the image below. The people in Step 13 said the customer never provides 100% quality, and those in Step 14 (not shown) said that they have to rework 8 out of 10 "work items" (information, in this case) received from Step 13.

    When cross-functional teams incorporate this metric into mapping processes or value streams (it applies at both levels of scrutiny), it’s game-changing. First, most people delivering work have no idea that what they’re delivering doesn’t meet the criteria established by the recipients of the work, because they’ve never had the conversation. Second, we find that most interpersonal or interdepartmental tension isn’t due to the people involved. It’s typically VERY closely tied to this metric and the frustration that results from repeatedly having to do non-value-adding work that people believe was someone else’s responsibility.

    So, while this metric is the "little beast," it’s also the most healing aspect of the work we do. Eliminating low %C&A creates significantly better working relationships, reduces stress, speeds delivery, and costs less. Give it a whirl. I’m happy to answer your questions. For those of you who have incorporated this metric into your work, please share your stories of how it’s helped solve business problems and perceived people problems that are merely work design problems. I’ll add my comment below.
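The same arithmetic rolls up across a whole value stream: multiplying each step's %C&A gives a rolled first-pass quality for the chain, which is how a few individually tolerable steps can deliver almost nothing clean end to end. A sketch in the spirit of rolled %C&A from value stream mapping; the step names and values are invented, not taken from the post:

```python
from math import prod

# %C&A as reported by the *recipients* of each step's output.
pct_complete_accurate = {"intake": 0.70, "underwriting": 0.20, "approval": 0.90}

rolled = prod(pct_complete_accurate.values())
print(f"Rolled %C&A: {rolled:.0%}")  # 0.70 * 0.20 * 0.90 = 12.6%, rounds to 13%
```

Even with two steps at 70% and 90%, the single 20% step drags the whole stream down to roughly one clean work item in eight, which is exactly why measuring every handoff, not just the end result, matters.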
