You're a #CTO. Your board asks: "What's our ROI on AI coding tools?" Your answer: "40% of our code is AI-generated!" They respond: "So what? Are we shipping faster? Are customers happier?"

Most CTOs are measuring AI impact completely wrong. Here's what some are tracking:

- Percentage of AI-generated code
- Developer hours saved per week
- Lines of code produced
- AI tool adoption rates

These metrics are like measuring how fast your assembly line workers attach parts while ignoring whether your cars actually start.

Here's what you SHOULD measure instead:

1. Delivered business value
2. Customer cycle time
3. Development throughput
4. Quality and reliability
5. Total cost of delivery (not just development)
6. Team satisfaction

Software development isn't a typing competition; it's a complex system. If AI makes your developers 30% faster but your deployment takes 2 weeks and QA adds another week, your customer delivery improves by maybe 7%. You've sped up the wrong part.

The solution: A/B test your teams. Give half your teams AI tools and measure business outcomes over 2-3 release cycles. Track what customers actually experience, not how much developers produce.

Companies that measure business impact from AI will pull ahead. Those measuring vanity metrics will wonder why their expensive tools aren't moving the needle.

Stop measuring how much code AI generates. Start measuring how much faster you deliver value to customers.

What are you actually measuring? And is it moving your business forward?

-> Follow me for more about building great tech organizations at scale. More insights in my book "All Hands on Tech"
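The "maybe 7%" figure is just Amdahl's law applied to the delivery pipeline: speeding up one stage barely moves the end-to-end number if the other stages stay fixed. A minimal sketch, assuming an illustrative split of roughly 1 week of coding, 2 weeks of deployment, and 1 week of QA (the split is hypothetical, chosen to reproduce the post's figure):

```python
def end_to_end_speedup(dev_weeks, deploy_weeks, qa_weeks, dev_speedup):
    """Overall delivery improvement when only the dev stage gets faster."""
    baseline = dev_weeks + deploy_weeks + qa_weeks
    improved = dev_weeks * (1 - dev_speedup) + deploy_weeks + qa_weeks
    return 1 - improved / baseline

# Hypothetical split: 1 week coding, 2 weeks deployment, 1 week QA
print(f"{end_to_end_speedup(1.0, 2.0, 1.0, 0.30):.1%}")  # prints 7.5%
```

Under this split, a 30% coding speedup yields only about 7.5% faster delivery; shrinking the 2-week deployment stage would move the needle far more.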
Developer Productivity Metrics
-
Has Amazon cracked the code on developer productivity with its cost to serve software (CTS-SW) metric?

Amazon applied its well-known "working backwards" methodology to developer productivity. "Working backwards" here means starting with the outcome: concrete returns for the business. This is measured by looking at the rate of customer-facing changes delivered by developers, i.e. "what the team deems valuable enough to review, merge, deploy, and support for customers", in the words of the blog post by Jim Haughwout https://lnkd.in/eqvW5wbi .

This metric is different from other measures of developer productivity, which look only at velocity or time saved. Instead, "CTS-SW directly links investments in the developer experience to those outcomes by assessing how frequently we deliver new or better experiences. Some organizations fall into the anti-pattern of calculating minutes saved to measure value, but that approach isn’t customer-centered and doesn’t prove value creation."

This aligns with Gartner's own research on developer productivity. In our 2024 Software Engineering survey, we asked what productivity metric organizations are using to measure their developers. We also asked about a basket of ten success metrics, including software usability, retention of top performers, and meeting security standards. This allowed us to find out which productivity metric was most associated with success.

What we found was that *rate of customer-facing changes* is the metric most associated with success. Some other productivity metrics were actually *negatively associated* with success. *Rate of customer-facing changes* is what organizations should focus on. Sadly, our survey found that few organizations (just 22%) use this metric.

I presented this data at our #GartnerApps summit [and the next summit is coming up in September: https://lnkd.in/ey2kpc2 ]

Every metric gets gamed. So I always recommend "gaming the gaming".
A developer might game the CTS-SW metric by focusing more on customer-facing changes. But... this is actually a good thing. You're gaming the gaming. We will be watching closely how this metric gets adopted alongside DORA, SPACE, and other metrics in the industry.
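At its core, a CTS-SW-style measure is just the count of customer-facing changes delivered per developer over a period. A minimal sketch with hypothetical data (the deploy log, team size, and the customer-facing flag are invented for illustration; this is not Amazon's actual implementation):

```python
from datetime import date

# Hypothetical deploy log over a 4-week window: (date, customer_facing?)
deploys = [
    (date(2024, 6, 3), True),
    (date(2024, 6, 5), False),   # internal refactor, not customer-facing
    (date(2024, 6, 12), True),
    (date(2024, 6, 20), True),
]

team_size = 4
weeks = 4

# Rate of customer-facing changes per developer-week
customer_facing = sum(1 for _, cf in deploys if cf)
rate = customer_facing / (team_size * weeks)
print(f"customer-facing changes per developer-week: {rate:.2f}")  # 0.19
```

The hard part in practice is the flag, not the arithmetic: deciding which changes count as "valuable enough to review, merge, deploy, and support for customers" is a team judgment, which is exactly why the metric resists gaming.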
-
In many Chinese schools, students pause class for 1–3 minutes and move together, inside the classroom. Are you taking breaks during your office hours?

Not a dance. Not military. System design.

It’s called 广播体操 (Radio Calisthenics) and it’s been used nationally for decades to reset posture, circulation, and attention.

• Prolonged sitting reduces cognitive performance after 30–40 minutes
• Short movement breaks improve focus and working memory by 10–15%
• Light physical activity increases blood flow to the brain by up to 20%
• Even 2 minutes of movement measurably reduces mental fatigue

Now apply this to tech and business. Knowledge workers sit 9–11 hours/day, live in back-to-back video calls, and are expected to make high-quality decisions at speed. That’s not a productivity issue. It’s a human-system mismatch.

As AI scales execution, human attention becomes the bottleneck. The next performance upgrade may not be more software, but movement designed into workflows. China implemented it at national scale.

Optimize the human. Then optimize the system.

#FutureOfWork #AI #Productivity #Leadership #HumanPerformance #Neuroscience #TechLeadership #DigitalTransformation #WorkplaceDesign #CognitivePerformance
-
Stop chasing waterfall (and vanity metrics)!

Forget vanity metrics and focus on 4 simple Flow Metrics. Vanity metrics like velocity, the number of commits, or pull request reviews per developer can do more harm than good.

"What gets measured, gets managed." Which means what gets measured gets gamed, and developers are some really smart people who quickly learn to game the system.

Flow Metrics are in your system anyway and can help you create a better narrative around metrics. You are not measuring individual contributions. You are not comparing one team with another. You simply want to create a more stable system by improving the flow of work.

Here are the 4 Flow Metrics:

-> Work In Progress: The number of work items started but not finished. Too much WIP? Expect delays, context-switching, and all the madness that follows.

-> Throughput: The number of work items finished per unit of time. Think of it as a speedometer for value delivery.

-> Work Item Age: The elapsed time between when a work item started and the current time. High values here? Work is probably waiting around longer than it’s getting done. A crucial measure for predictability.

-> Cycle Time: The elapsed time between when a work item started and when it finished. It tells you how long work takes from start to finish and helps answer "when will it be done?"

Follow me for more tips on improving your ways of working!
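All four Flow Metrics can be computed from nothing more than a start date and a finish date per work item. A minimal sketch with hypothetical work items and an arbitrary "today":

```python
from datetime import date

today = date(2024, 7, 1)

# Hypothetical work items: (started, finished) with None = still in progress
items = [
    (date(2024, 6, 3), date(2024, 6, 10)),
    (date(2024, 6, 5), date(2024, 6, 20)),
    (date(2024, 6, 18), None),
    (date(2024, 6, 25), None),
]

wip = sum(1 for _, done in items if done is None)
finished = [(s, d) for s, d in items if d is not None]
throughput = len(finished)                         # items finished this period
cycle_times = [(d - s).days for s, d in finished]  # start -> finish, days
ages = [(today - s).days for s, d in items if d is None]  # start -> now

print("WIP:", wip)                         # 2
print("Throughput:", throughput)           # 2
print("Cycle times (days):", cycle_times)  # [7, 15]
print("Item ages (days):", ages)           # [13, 6]
```

Note that Work Item Age only applies to unfinished items and Cycle Time only to finished ones; together they cover the whole board.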
-
𝐓𝐡𝐞 "𝐁𝐮𝐬𝐲" 𝐓𝐫𝐚𝐩: 𝐖𝐡𝐲 𝐘𝐨𝐮𝐫 𝐁𝐞𝐬𝐭 𝐖𝐨𝐫𝐤 𝐑𝐞𝐪𝐮𝐢𝐫𝐞𝐬 𝐘𝐨𝐮𝐫 𝐀𝐛𝐬𝐞𝐧𝐜𝐞

For the longest time, I viewed "taking a break" as a sign of slowing down. I thought if I wasn't constantly tethered to my notifications or strategizing the next framework, I was losing momentum.

We tend to ignore the quiet whispers of burnout until they become a roar. We tell ourselves "just one more week" or "after this project," not realizing that a tired mind cannot innovate; it can only replicate.

Last week, I finally put the "Out of Office" on and traded my screen for the shoreline. There is something transformative about the ocean. Watching the tide reminded me that life has a natural rhythm of receding and returning. I spent my days disconnected from the digital world and reconnected with the physical one: the warmth of the sand, the sound of the waves, and the clarity of silence.

The result? My energy didn't just return; it doubled. I arrived back at my desk this morning, and the timing couldn't have been more intense. A high-stakes, incredibly challenging project was waiting for me on day one. Six days ago, that project might have felt overwhelming. Today? My headspace is clear. My perspective is fresh. I’m not just ready to tackle the challenge; I’m excited to lead it.

The Lesson: Productivity isn't about how many hours you sit at your desk; it’s about the quality of the energy you bring to those hours.

If you’re waiting for the "right time" to take a breath, this is your sign. Go find your version of the beach. Your work (and your well-being) will thank you when you get back.

#Leadership #WorkLifeBalance #MentalHealth #BurnoutPrevention #Productivity #PeopleFirst
-
How do you measure developer productivity? It's always been a tricky thing for me, but I think this 👇🏼 could be a good solution.

According to Emilio Salvador, VP of Developer Relations & Community at GitLab, 3 metrics are key:

1. Task-based: it's not about measuring the NUMBER of tasks a developer completes, but about the TYPE of tasks completed. Some tasks require advanced critical thinking or outside-the-box thinking. They should be evaluated as such.

2. Time-based: measuring the time needed to complete tasks and release features. Using Google's DORA framework to measure both production and deployment times helps identify process weaknesses and bottlenecks.

3. Team-based: no developer works isolated from their colleagues. Measuring the team's delivery performance in terms of business outcomes gives an indication of the productivity of developers.

--

Combining these 3 metrics would help engineering managers have a broader view of how the work environment is helping/hindering developer productivity.

I would add one more "human" dimension: how the presence of a developer in a team affects the whole team. Factors such as helping teammates, coming up with new concepts, or being proactive on process enhancement count towards a developer's productivity.

How do you go about measuring productivity in your company? Share your thoughts!
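For the time-based dimension, one of DORA's four key metrics is "lead time for changes": the elapsed time from commit to running in production, usually summarized as a median. A minimal sketch with hypothetical timestamps (this is an illustration of the metric's definition, not GitLab's or Google's actual tooling):

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, production_deploy_time) pairs for recent changes
changes = [
    (datetime(2024, 6, 3, 9, 0),  datetime(2024, 6, 4, 15, 0)),
    (datetime(2024, 6, 5, 11, 0), datetime(2024, 6, 10, 10, 0)),
    (datetime(2024, 6, 7, 14, 0), datetime(2024, 6, 8, 9, 0)),
]

# DORA lead time for changes: commit -> production, in hours
lead_times_h = [(deploy - commit).total_seconds() / 3600
                for commit, deploy in changes]
print(f"median lead time: {median(lead_times_h):.0f}h")  # prints 30h
```

The median (rather than the mean) is the conventional summary because a single slow change would otherwise dominate the number.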
-
My work is very busy at present. I have a demanding schedule of coaching appointments, workshops, webinars, and learning design deliveries, as well as administrative tasks. So I took yesterday off to ski.

Stepping away regularly from work isn't just enjoyable; it’s essential. Research shows that intentional breaks, especially active ones, deliver powerful benefits that enhance our performance and well-being:

• 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗿𝗲𝗰𝗼𝘃𝗲𝗿𝘆: Our brains operate on an attention budget that depletes throughout the workday (you may notice, for example, that you are more capable of focused productivity in the morning than at the end of the day). Even brief breaks can replenish this resource. During physical activity, different neural pathways activate, allowing overused cognitive circuits to recover, like resting one muscle group while working another.

• 𝗠𝗲𝗻𝘁𝗮𝗹 𝘄𝗲𝗹𝗹-𝗯𝗲𝗶𝗻𝗴: Breaks interrupt the cycle of stress accumulation. Physical activity in particular triggers endorphin release and reduces cortisol levels, creating a neurochemical reset. Research from Wendsche et al. published in the Journal of Applied Psychology found that regular work breaks were consistently associated with lower levels of reported burnout symptoms.

• 𝗣𝗵𝘆𝘀𝗶𝗰𝗮𝗹 𝗿𝗲𝗷𝘂𝘃𝗲𝗻𝗮𝘁𝗶𝗼𝗻: Studies in occupational health show that the extended periods of continuous sitting that characterize professional work negatively impact cardiovascular health and metabolism. Active breaks counteract these effects by improving circulation, reducing inflammation markers, and maintaining insulin sensitivity, benefits that persist when you return to work.

• 𝗣𝗲𝗿𝘀𝗽𝗲𝗰𝘁𝗶𝘃𝗲 𝘀𝗵𝗶𝗳𝘁: Psychological distance from problems activates different regions of the prefrontal cortex. This mental space triggers an incubation effect wherein our subconscious continues problem-solving while our conscious mind engages elsewhere. Many report solutions crystallizing during or immediately after breaks.

• 𝗖𝗿𝗲𝗮𝘁𝗶𝘃𝗶𝘁𝘆 𝗯𝗼𝗼𝘀𝘁: Research published in the Journal of Experimental Psychology found that walking increases creative ideation by up to 60%. Additionally, exposure to novel environments (like mountain vistas) activates the brain's novelty-recognition systems, priming it for innovative thinking.

• 𝗘𝗻𝗵𝗮𝗻𝗰𝗲𝗱 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆: A study in the journal Cognition found that brief diversions improve focus during extended tasks. Research from Microsoft’s Human Factors Lab revealed that employees who incorporated strategic breaks completed projects 40% faster with fewer errors than those who worked straight through.

The irony? Many of us avoid breaks precisely when we need them most. That urgent project, deadline pressure, or busy season seems to demand constant attention, yet this is exactly when a brief disconnect delivers the greatest return.

#WorkLifeBalance #Productivity #Wellbeing
-
The most underrated productivity hack? Taking breaks. But not just any break. Science says there’s a right way to do it.

Here’s how to restore your energy (and do better work) in 5 proven steps:

Rule 1: Something > nothing
Even short breaks matter. Try the 20-20-20 rule: every 20 minutes, look at something 20 feet away for 20 seconds. You’ll reduce fatigue and give your brain a much-needed pause. Micro-breaks add up.

Rule 2: Moving > stationary
A walk beats a sit. Movement restores energy and improves mood. Just getting up and walking for a few minutes can refresh your mind for your next task.

Rule 3: Social > solo
Breaks with people restore us more than breaks alone, even if you’re introverted. Chat with a colleague. Call a friend. Grab coffee with someone you like. Connection is a powerful recharge.

Rule 4: Outside > inside
Nature boosts energy and creativity. You don’t need to hike a mountain; just walk down a street with trees. Studies show even light exposure to green space can reduce stress and elevate performance.

Rule 5: Detached > distracted
A break isn’t scrolling Instagram. Leave your phone behind. Log off. Step away. Real breaks require real detachment. Let your brain breathe.

Try this break formula: every afternoon, take a 15-minute walk outside, with someone you like, talking about anything except work, without your phone. Do it daily. Schedule it like a meeting.
-
𝗔𝗜 𝗜𝗦 𝗔𝗟𝗥𝗘𝗔𝗗𝗬 𝗪𝗥𝗜𝗧𝗜𝗡𝗚 𝗬𝗢𝗨𝗥 𝗖𝗢𝗗𝗘 🚀

But here’s the real question: who’s measuring what it actually delivers?

𝗧𝗛𝗘 𝗣𝗥𝗢𝗕𝗟𝗘𝗠 👇
AI adoption is exploding.
→ AI agents are generating code daily
→ Teams are spending heavily on AI tools
→ More commits are AI-assisted than ever

But almost no one can answer:
→ Is this code reaching production?
→ Is it improving output or just increasing noise?
→ Is the cost actually worth it?

𝗧𝗛𝗔𝗧’𝗦 𝗧𝗛𝗘 𝗚𝗔𝗣 𝗪𝗔𝗬𝗗𝗘𝗩 𝗦𝗢𝗟𝗩𝗘𝗦 ⚙️
Waydev is the measurement layer for AI-written code, tracking AI across the entire SDLC.

𝗛𝗘𝗥𝗘’𝗦 𝗪𝗛𝗔𝗧 𝗜𝗧 𝗗𝗘𝗟𝗜𝗩𝗘𝗥𝗦 📊
✔️ AI Adoption: track tools, usage, and spend across teams & repos
✔️ AI Impact: follow AI-generated code from IDE → production
✔️ AI ROI: measure cost per PR, per shipped line, and token usage
✔️ AI Checkpoints: commit-level visibility into which agent, tokens used, and AI contribution
✔️ Waydev Agent: ask questions and feed insights back into your workflows

𝗪𝗛𝗬 𝗧𝗛𝗜𝗦 𝗠𝗔𝗧𝗧𝗘𝗥𝗦 💡
Adopting AI was the easy part. Proving its real impact on production is the hard part.

𝗧𝗛𝗘 𝗕𝗜𝗚𝗚𝗘𝗥 𝗦𝗛𝗜𝗙𝗧 🔥
We’re moving from "using AI to write code" to measuring how AI actually improves engineering output. If you’re building with AI, this is a layer you can’t ignore.

🔗 Explore Waydev: https://waydev.co/
👉 Check it out on Product Hunt: https://lnkd.in/gUV-rSxa
💬 Curious how you’re measuring AI impact in your team?

#AI #DevTools #Engineering #SoftwareDevelopment #Productivity #Startups #BuildInPublic #AITools #Tech
-
The best-performing software engineering teams measure both output and outcomes. Measuring only one often means underperforming in the other.

While debates persist about which is more important, our research shows that measuring both is critical. Otherwise, you risk landing in Quadrant 2 (building the wrong things quickly) or Quadrant 3 (building the right things slowly and eventually getting outperformed by a competitor). As an organization grows and matures, this becomes even more critical. You can't rely on intuition, politics, or relationships; you need to stop "winging it" and start making data-driven decisions.

How do you measure outcomes? Outcomes are the business results that come from building the right things. These can be measured using product feature prioritization frameworks.

How do you measure output? Measuring output is challenging because traditional methods don’t measure it accurately:

1. Lines of Code: Encourages verbose or redundant code.
2. Number of Commits/PRs: Leads to artificially small commits or pull requests.
3. Story Points: Subjective and not comparable across teams; may inflate task estimates.
4. Surveys: Great for understanding team satisfaction but not for measuring output or productivity.
5. DORA Metrics: Measure DevOps performance, not productivity. Deployment sizes vary within and across teams, and these metrics can be easily gamed when used as productivity measures. Measuring how often you deploy is meaningless from a productivity perspective unless you also measure _what_ is being deployed.

We propose a different way of measuring software engineering output. Using an algorithmic model developed from research conducted at Stanford, we quantitatively assess software engineering productivity by evaluating the impact of commits on the software's functionality (i.e., we measure output delivered). We connect to Git and quantify the impact of the source code in every commit. The algorithmic model generates a language-agnostic metric for evaluating and benchmarking individual developers, teams, and entire organizations.

We're publishing several research papers on this, with the first pre-print released in September. Please leave a comment if you’d like to read it. Interested in leveraging this for your organization? Message me to learn more.

#softwareengineering #softwaredevelopment #devops