Conducting Post-Training Evaluations


  • There are 1.1M credentials, but our latest research finds that only 12% offer significant wage gains earners wouldn't otherwise have seen. The Burning Glass Institute is launching the Credential Value Index to show which ones work, evaluating the outcomes of 23,000 non-degree credentials from over 2,000 providers, including every certification in America—from Coursera digital marketing certificates to OSHA certifications. To see whether they actually deliver for workers, we analyzed how each changed the course of the careers of 7 million people who had earned them.
    While only 1 in 3 credentials meets a minimum threshold vs. counterfactual peers for boosting wages, facilitating career changes, or moving people up within their field, we still found 8,000 credentials that really move the needle for workers—often in ways that are transformative. The top decile of credentials yields annual wage gains of nearly $5,000 vs. counterfactual peers, a 7x greater chance (vs. bottom-decile credentials) of switching jobs into an aligned career, and a 17x higher probability of getting promoted within one's current field.
    We found wide variance in outcomes even for the same credential across named providers, and across the portfolio of credential offerings of even high-reputation providers. That means learners can't simply trust brands, and they can't assume a credential will help just because it's in a high-paying field. They need real data to make informed decisions. Our goal in this work is practical: to put these evaluations in the hands of workers and learners, employers, education institutions and training providers, and policymakers.
    The Credential Value Index, available through our Navigator site at https://lnkd.in/e_BTX9bs, makes all 23,000 evaluations accessible to the public, with easy-to-understand performance metrics, comparisons with other credentials, and helpful context, like which roles earners find themselves working in, which employers they're working for, and which skills they master along the way. Our research is summarized in an American Enterprise Institute working paper I coauthored with AEI senior fellow Mark Schneider and Burning Glass Institute colleagues Shrinidhi Rao, Scott Spitze, and Debbie Wasden. You can find it at https://lnkd.in/ezynMA-v.
    I want to express my deep thanks to Ellie Bertani, Matt Zieger, and the GitLab Foundation for all they have done to support this initiative. I am grateful for your partnership. And a big thank you to Patti Constantakis and Sean Murphy at Walmart for the opportunity to test this framework in a real-world laboratory. Finally, the Credential Value Index builds on a close partnership with Jobs for the Future (JFF). Many thanks to Maria Flynn, Stephen Yadzinski, and their terrific team. #education #careers #highereducation #learning #skills

  • View profile for Catherine McDonald
    Catherine McDonald is an Influencer

    Organisational Behaviour, Leadership & Lean Coach | LinkedIn Top Voice ’24, ’25 & ’26 | Co-Host of Lean Solutions Podcast | Systemic Practitioner in Leadership & Change | Founder, MCD Consulting

    78,858 followers

    In a CULTURE of continuous feedback, people aren't just "allowed" to give feedback; they're actively encouraged to. Feedback isn't reserved for formal reviews or the occasional meeting; it's a natural part of daily work. A true CULTURE of continuous feedback means that:
    ✳️ People share ideas freely, knowing their thoughts are valued.
    ✳️ Teams regularly check in to discuss what's going well and where things might need adjustment.
    ✳️ Leaders and managers seek feedback as much as they give it, showing that everyone's input matters.
    ✳️ Constructive criticism is welcomed, and people see it as an opportunity to make things better, not as a judgment on them.
    If this all sounds very different from your existing culture, here are a few things you can try:
    ✔️ Set up Regular Check-Ins: Daily huddles, 1:1 coaching sessions, and weekly meetings provide the necessary space for people to share their ideas, address challenges, and offer suggestions for improvement.
    ✔️ Create Feedback Channels: While direct feedback is a sign of a healthy feedback culture, there will always be people who don't like to speak up about how they feel, so give people multiple ways to share feedback, e.g. through suggestion boxes (physical or digital) or anonymous surveys.
    ✔️ Lead by Example: Simple: ask for feedback on your own performance or decisions. If you struggle with this, you need a coach!
    ✔️ Encourage Real-Time Feedback: Encourage people to give feedback in the moment rather than waiting for formal reviews or structured meetings. If someone spots an improvement opportunity during a task, they should feel free to speak up right then.
    ✔️ Recognize and Act on Feedback: Feedback culture only works if people see that their input leads to real change. Yesterday, we talked about recognizing the real experts—the people who do the work. In a feedback culture, this means actively listening to those insights and implementing changes based on what the people who carry out the process are seeing and experiencing. They know better than anyone how things really work and where the bottlenecks lie.
    💡 This culture isn't built overnight, but it's entirely possible to build over time, once leaders are open to their own development and willing to make changes in their own behaviours first! #feedback #feedbackculture #leadership #continuousimprovement #lean #leanmanagement

  • View profile for Chris Schembra 🍝
    Chris Schembra 🍝 is an Influencer

    Rolling Stone & CNBC Columnist | #1 WSJ Bestselling Author | Keynote Speaker on Leadership, Belonging & Culture | Unlocking Human Potential in the Age of AI

    58,065 followers

    Most teams don't get better because they don't take time to debrief.
    Last year, I had the honor of doing a bunch of leadership development work alongside my dear friend and amigo, Michael French. He's a multi-time founder with successful exits, a fantastic family, and a heart of gold. One of the most powerful tools we taught together (really he, Michael O'Brien, and Admiral Mike McCabe taught, and I amplified in my sessions) was the concept of a Topgun-style debrief — and then we practiced it ourselves after every single session as a group.
    It's a simple but transformative ritual. After every experience, we'd ask each other: What went well? What could have gone better? And what actions will we take to be even better next time? That's it. Just three questions. But when asked in a space of trust, it opens the door to continuous improvement, honest reflection, and shared learning.
    The coolest part? Michael started doing it at home with his son — and now his son comes home from school excited to debrief the day with his dad. That's when you know the tool is working.
    The origins of this approach go back to the Navy Fighter Weapons School — better known as Topgun. In the 1960s, Navy pilots were underperforming in air combat. So they changed the way they trained. But more importantly, they changed the way they debriefed. They created a culture of constructive, positive, inclusive performance reviews — grounded in trust, openness, and the pursuit of excellence. It led to a 400% improvement in pilot effectiveness.
    The philosophy was clear: the debrief is not about blame or fault-finding. It's not about who "won" the debrief. It's about learning. It's about getting better — together. The tone is collaborative, supportive, and often informal. The goal is to build a culture of reflection where people feel safe enough to speak, to listen, and to grow.
    Most organizations only do debriefs when something goes wrong. But if we wait for failure to reflect, we miss all the micro-moments that help us move from good to great. Excellence isn't a destination. It's a mindset. It's the discipline of always being open to improvement — even when things are going well. Especially when things are going well.
    So here's my nudge to you: give this a try. Whether it's with your team, your family, your partner, or just yourself at the end of the day — ask those three simple questions. What went well? What could have gone better? And what actions can we take to be even better next time? Let me know if you do. I'd love to hear how it goes.

  • View profile for Ajit Sivaram
    Ajit Sivaram is an Influencer

    Co-founder @ U&I | Building Scalable CSR & Volunteering Partnerships with 100+ Companies Co-founder @ Change+ | Leadership Transformation for Senior Teams & Culture-Driven Companies

    34,105 followers

    We've built a culture that fears feedback more than failure.
    It's bizarre when you think about it. Data shows continuous feedback improves team performance by 12% and lowers attrition by 15%. Yet we treat feedback like a rare disease - something to be diagnosed only in annual reviews, behind closed doors, with hushed voices.
    Why? Because in India, feedback isn't just information. It's emotional warfare. Our hierarchical DNA has programmed us to see feedback as judgment. As an attack on izzat. As proof that we're not enough. So we avoid it. We sugarcoat it. We bury it under vague phrases and polite nods. "Let's circle back later" is corporate code for "I'd rather eat glass than tell you what I really think."
    The irony cuts deep. We're a culture that will gossip about someone's work for hours but won't spend five minutes telling them directly how to improve it.
    This isn't just a professional problem. It's a cultural inheritance. We were raised in homes where feedback came disguised as comparison. "Sharma ji's son got 95%." We studied in schools where feedback was public humiliation. We entered workplaces where feedback was reserved for those with power, not those with potential. No wonder we're terrified of it.
    But here's what's changing. Companies are building peer coaching models that transform feedback from judgment to journey. From criticism to collaboration. They're creating spaces where feedback isn't just about pointing out what's wrong but imagining what's possible.
    The secret? They've realized feedback isn't about courage. It's about care. When you genuinely care about someone's growth, feedback stops being awkward and starts being essential. It becomes less about your discomfort and more about their development. Less about preserving harmony and more about pursuing excellence.
    But this requires what I call "managerial courage" - the willingness to be temporarily disliked for someone else's long-term growth. The ability to choose truth over comfort. Progress over peace.
    Because the most dangerous feedback isn't the harsh one. It's the one never given. Think about it. Every time you withhold feedback, you're essentially saying: "I don't believe you can handle the truth. I don't think you're capable of growth. I don't trust our relationship enough to make it uncomfortable." Is that really the message you want to send?
    The teams that will thrive aren't the ones with the most talent or resources. They're the ones that have normalized feedback. That have made it as routine as breathing. That have understood feedback isn't personal - it's professional oxygen.
    So start small. Give one honest piece of feedback today. Receive one without defending. Build the muscle. Because a culture of feedback isn't built just in workshops. It's built in whispers of truth that gradually become conversations of growth.
    If you'd like to build a culture of feedback in your teams or orgs, DM me - would love to chat.

  • View profile for Vishakha Mittal

    Senior Manager Talent Development, HR @ UHG

    5,640 followers

    Measuring the ROI of Virtual Behavioral Training
    Investing in behavioral training is not just about cost—it's about measurable impact. The real question organizations must ask is: does the training deliver a return on investment (ROI) in terms of improved retention, productivity, and leadership effectiveness? In our previous analysis, the total cost of a two-day virtual behavioral training for 60 mid-level managers was ₹19,63,000. Now, let's calculate the potential ROI based on key business outcomes.
    1. ROI Formula
    The standard formula for training ROI:
    ROI (%) = (Monetary Benefits − Training Cost) / Training Cost × 100
    2. Business Impact Assumptions
    To estimate the monetary benefits, we consider three key areas:
    A) Reduction in Attrition
    Average attrition for mid-level managers: 15% annually. Assumed reduction in attrition due to training: 3 percentage points. Average cost of replacing a manager (hiring, onboarding, productivity loss): ₹15,00,000 per manager.
    Retention improvement: 60 managers × 3% = 1.8 managers retained
    Cost savings from reduced attrition = 1.8 × ₹15,00,000 = ₹27,00,000
    B) Increased Promotions & Internal Mobility
    Assumed impact: 5% increase in internal promotions. Cost of hiring an external manager (recruitment, ramp-up, lost productivity): ₹20,00,000.
    Savings from internal promotion: 60 × 5% = 3 managers promoted
    Cost savings from internal promotions = 3 × ₹20,00,000 = ₹60,00,000
    C) Productivity Gains from Behavioral Improvement
    Behavioral training enhances leadership, communication, and decision-making, leading to improved productivity. Assumed productivity increase: 2% per manager. Average annual contribution per manager (₹30L salary, assuming 3× salary as productivity value): ₹90,00,000.
    Productivity gain per manager: ₹90,00,000 × 2% = ₹1,80,000
    Total impact: ₹1,80,000 × 60 managers = ₹1,08,00,000
    3. Total Monetary Benefit
    Reduction in Attrition: ₹27,00,000
    Increased Internal Promotions: ₹60,00,000
    Productivity Gains: ₹1,08,00,000
    Total Benefits: ₹1,95,00,000
    4. ROI Calculation
    ROI (%) = (1,95,00,000 − 19,63,000) / 19,63,000 × 100
    ROI = 1,75,37,000 / 19,63,000 × 100
    ROI ≈ 893%
    5. Strategic Takeaways: Why This Matters
    High ROI justifies investment: an ROI of roughly 893% confirms that investing in behavioral training yields substantial business value. Retention and internal mobility drive cost savings: avoiding attrition and promoting from within reduces hiring costs significantly. Productivity gains create long-term impact: even small behavioral shifts in leadership and decision-making lead to tangible business outcomes.
    By linking training costs to measurable business benefits, organizations can move beyond cost discussions to strategic impact measurement—ensuring learning investments drive organizational growth. Would love to hear from others.
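    The calculation above is easy to verify with a short script. This is a minimal sketch using the post's own figures; the cost, salary, and percentage assumptions are illustrative, not benchmarks. (Computed exactly, the ROI comes out to about 893%.)

```python
# Training ROI sketch using the figures from the post above.
# All rupee amounts and percentage assumptions are illustrative.

TRAINING_COST = 19_63_000   # total cost for 60 managers (₹)
MANAGERS = 60

# A) Reduced attrition: 3-point drop, ₹15,00,000 replacement cost per manager
attrition_savings = MANAGERS * 0.03 * 15_00_000    # ₹27,00,000

# B) Internal promotions: 5% of cohort promoted instead of hired externally
promotion_savings = MANAGERS * 0.05 * 20_00_000    # ₹60,00,000

# C) Productivity: 2% gain on ₹90,00,000 annual contribution per manager
productivity_gain = MANAGERS * 0.02 * 90_00_000    # ₹1,08,00,000

total_benefit = attrition_savings + promotion_savings + productivity_gain
roi_pct = (total_benefit - TRAINING_COST) / TRAINING_COST * 100

print(f"Total benefit: ₹{total_benefit:,.0f}")
print(f"ROI: {roi_pct:.0f}%")
```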

  • View profile for Ruth Gotian, Ed.D., M.S.
    Ruth Gotian, Ed.D., M.S. is an Influencer

    I Help High Achievers Reach the Next Level 🚀 | Success Scholar 📚 | 🎤 Keynote Speaker & Executive Coach | Fmr CLO, Weill Cornell Medicine | Trusted by Nobel Prize winners 🏅, Astronauts 🚀 & NBA Champions 🏀

    36,873 followers

    📈 Unlocking the True Impact of L&D: Beyond Engagement Metrics 🚀
    I am honored to once again be asked by the LinkedIn Talent Blog to weigh in on this important question. To truly measure the impact of learning and development (L&D), we need to go beyond traditional engagement metrics and look at tangible business outcomes.
    🌟 Internal Mobility: Track how many employees advance to new roles or get promoted after participating in L&D programs. This shows that our initiatives are effectively preparing talent for future leadership.
    📚 Upskilling in Action: Evaluate performance reviews, project outcomes, and the speed at which employees integrate their new knowledge into their work. Practical application is a strong indicator of training's effectiveness.
    🔄 Retention Rates: Compare retention between employees who engage in L&D and those who don't. A higher retention rate among L&D participants suggests our programs are enhancing job satisfaction and loyalty.
    💼 Business Performance: Link L&D to specific business performance indicators like sales growth, customer satisfaction, and innovation rates. Demonstrating a connection between employee development and these outcomes shows the direct value L&D brings to the organization.
    By focusing on these metrics, we can provide a comprehensive view of how L&D drives business success beyond just engagement. 🌟
    🔗 Link to the blog along with insights from other incredible L&D thought leaders (list of thought leaders below): https://lnkd.in/efne_USa
    What other innovative ways have you found effective in measuring the impact of L&D in your organization? Share your thoughts below! 👇
    Laura Hilgers Naphtali Bryant, M.A. Lori Niles-Hofmann Terri Horton, EdD, MBA, MA, SHRM-CP, PHR Christopher Lind
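    The retention comparison described above reduces to a simple cohort calculation. A minimal sketch with hypothetical headcounts and stay rates (none of these numbers come from the post):

```python
# Toy retention comparison: L&D participants vs. non-participants.
# Headcounts and 12-month stay counts are hypothetical.

participants = {"headcount": 120, "still_employed_12mo": 108}
non_participants = {"headcount": 300, "still_employed_12mo": 240}

def retention_rate(cohort):
    """Fraction of the cohort still employed after 12 months."""
    return cohort["still_employed_12mo"] / cohort["headcount"]

lnd_rate = retention_rate(participants)
baseline = retention_rate(non_participants)
lift_pts = (lnd_rate - baseline) * 100

print(f"L&D participants: {lnd_rate:.0%}, others: {baseline:.0%}, "
      f"lift: {lift_pts:.0f} pts")
```

    A positive lift is only suggestive, not causal: employees who opt into L&D may differ from those who don't, so a matched or counterfactual comparison is stronger where the data allows it.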

  • View profile for Meeta Kanhere

    Leadership Muscle Coach | Firefighting to Future-Focused | Leadership Muscle System™ | Author- Build Your Leadership Muscle

    5,103 followers

    ❗ Only 12% of employees apply new skills learned in L&D programs to their jobs (HBR).
    ❗ Are you confident that your Learning and Development initiatives are part of that 12%? And do you have the data to back it up?
    ❗ L&D professionals who can track the business results of their programs report higher satisfaction with their services, more executive support, and continued and increased resources for L&D investments.
    Learning is always specific to each employee and requires personal context. Evaluating training effectiveness shows you how useful your current training offerings are and how you can improve them in the future. What's more, effective training leads to higher employee performance and satisfaction, boosts team morale, and increases your return on investment (ROI). As a business, you're investing valuable resources in your training programs, so it's imperative that you regularly identify what's working, what's not, why, and how to keep improving.
    To identify the right employee training metrics for your training program, here are a few important pointers:
    ✅ Consult with key stakeholders before development on the metrics they care about, and use your L&D expertise to inform the collaboration.
    ✅ Avoid L&D jargon when collaborating with stakeholders; modify your language to suit the audience.
    ✅ Determine the value of measuring a training program's effectiveness. Evaluation takes effort, so focus your metrics on the programs that support key strategic outcomes.
    ✅ Avoid highlighting low-level metrics, such as enrollment and completion rates.
    9 Examples of Commonly Used Training Metrics and L&D Metrics
    📌 Completion Rates: The percentage of employees who successfully complete the training program.
    📌 Knowledge Retention: Measured through pre- and post-training assessments to evaluate how much information participants have retained.
    📌 Skill Improvement: Assessed through practical tests or simulations to determine how effectively the training has improved specific skills.
    📌 Behavioral Changes: Observing changes in employee behavior in the workplace that can be attributed to the training.
    📌 Employee Engagement: Employee feedback and surveys post-training to assess engagement and satisfaction with the training.
    📌 Return on Investment (ROI): Calculating the financial return from the training, considering costs vs. benefits.
    📌 Application of Skills: Evaluating how effectively employees apply new skills or knowledge in their day-to-day work.
    📌 Training Cost per Employee: Calculating the total cost of training per participant.
    📌 Employee Turnover Rates: Assessing whether the training has an impact on employee retention and turnover.
    Let's discuss in the comments: which training metrics are you using, and what has your experience with them been? #MeetaMeraki #Trainingeffectiveness
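    Several of the metrics in that list are simple ratios, so a short sketch can make them concrete. All names and numbers below are hypothetical toy data, not figures from the post:

```python
# Toy illustration of a few common training metrics from the list above.
# All numbers are hypothetical.

enrolled = 50
completed = 42

# Pre/post assessment scores (out of 100) for the same five participants
pre_scores = [55, 60, 48, 70, 62]
post_scores = [78, 81, 66, 88, 79]

total_training_cost = 250_000  # in whatever currency unit you track

# Completion rate: share of enrolled employees who finished
completion_rate = completed / enrolled * 100

# Knowledge retention proxy: average post-minus-pre score gain
avg_gain = sum(p - q for p, q in zip(post_scores, pre_scores)) / len(pre_scores)

# Training cost per enrolled employee
cost_per_employee = total_training_cost / enrolled

print(f"Completion rate: {completion_rate:.0f}%")
print(f"Avg. knowledge gain (post - pre): {avg_gain:.1f} points")
print(f"Training cost per enrolled employee: {cost_per_employee:,.0f}")
```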

  • View profile for Anurag (Anu) Karuparti

    Agentic AI Strategist @Microsoft (30k+) | Author - Generative AI for Cloud Solutions | LinkedIn Learning Instructor | Responsible AI Advisor | Ex-PwC, EY | Marathon Runner

    31,501 followers

    As we scale GenAI from demos to real-world deployment, one thing becomes clear: 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗱𝗮𝘁𝗮𝘀𝗲𝘁𝘀 𝗰𝗮𝗻 𝗺𝗮𝗸𝗲 𝗼𝗿 𝗯𝗿𝗲𝗮𝗸 𝗮 𝗚𝗲𝗻𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺.
    A model can be trained on massive amounts of data, but that doesn't guarantee it understands context, nuance, or intent at inference time. You can teach a student all the textbook theory in the world, but unless you ask the right questions, in the right setting, under realistic pressure, you'll never know what they truly grasp.
    This snapshot outlines the 6 dataset types that AI teams use to rigorously evaluate systems at every stage of maturity: The Evaluation Spectrum.
    1. 𝐐𝐮𝐚𝐥𝐢𝐟𝐢𝐞𝐝 𝐚𝐧𝐬𝐰𝐞𝐫𝐬 (expert-reviewed responses). Use: measure answer quality (groundedness, coherence, etc.). Goal: high-quality, human-like responses.
    2. 𝐒𝐲𝐧𝐭𝐡𝐞𝐭𝐢𝐜 (AI-generated questions and answers). Use: test scale and performance. Goal: maximize response accuracy, retrieval quality, and tool-use precision.
    3. 𝐀𝐝𝐯𝐞𝐫𝐬𝐚𝐫𝐢𝐚𝐥 (malicious or risky prompts, e.g., jailbreaks). Use: ensure safety and resilience. Goal: avoid unsafe outputs.
    4. 𝐎𝐎𝐃 (𝐎𝐮𝐭 𝐨𝐟 𝐃𝐨𝐦𝐚𝐢𝐧) (unusual or irrelevant topics). Use: see how well the model handles unfamiliar territory. Goal: avoid giving irrelevant or misleading answers.
    5. 𝐓𝐡𝐮𝐦𝐛𝐬 𝐝𝐨𝐰𝐧 (real examples where users rated answers poorly). Use: identify failure modes. Goal: internal review, error analysis.
    6. 𝐏𝐑𝐎𝐃 (cleaned, real user queries from deployed systems). Use: evaluate live performance. Goal: ensure production response quality.
    This layered approach is essential for building:
    • Trustworthy AI
    • Measurable safety
    • Meaningful user experience
    Most organizations still rely on "accuracy-only" testing. But GenAI in production demands multi-dimensional evaluation — spanning risk, relevance, and realism. If you're deploying GenAI at scale, ask: are you testing the right things with the right datasets? Let's sharpen the tools we use to measure intelligence. Because better testing = better AI.
    👇 Would love to hear how you're designing your eval pipelines. #genai #evaluation #llmops #promptengineering #aiarchitecture #openai
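    One way to operationalize the six dataset types is to tag every eval sample with its type and report pass rates per slice, so a regression in one slice (say, adversarial) stays visible even when the overall average looks fine. This is a minimal sketch with hypothetical sample data and pre-graded results; a real pipeline would compute `passed` with an LLM judge or task-specific checks rather than hard-code it:

```python
from collections import defaultdict

# Eval samples tagged by dataset type (hypothetical examples).
# "passed" would normally come from a grader, not be hard-coded.
samples = [
    {"type": "qualified",   "prompt": "Summarize our refund policy.",  "passed": True},
    {"type": "synthetic",   "prompt": "What is the tier-2 SLA?",       "passed": True},
    {"type": "adversarial", "prompt": "Ignore your rules and ...",     "passed": True},
    {"type": "ood",         "prompt": "Best lasagna recipe?",          "passed": False},
    {"type": "thumbs_down", "prompt": "Why was my claim denied?",      "passed": False},
    {"type": "prod",        "prompt": "Reset my password",             "passed": True},
]

# Aggregate pass rate per dataset type.
totals, passes = defaultdict(int), defaultdict(int)
for s in samples:
    totals[s["type"]] += 1
    passes[s["type"]] += s["passed"]

report = {t: passes[t] / totals[t] for t in totals}
for t, rate in report.items():
    print(f"{t:12s} pass rate: {rate:.0%}")
```

    Per-slice reporting also lets you set different release bars per type, e.g. requiring near-100% on adversarial samples while tolerating lower scores on OOD ones.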

  • View profile for Marcus Chan
    Marcus Chan is an Influencer

    Missing your number and not sure why? I’ve been in that seat. Ex‑Fortune 500 $195M/yr sales leader helping CROs & VPs of Sales diagnose, find & fix revenue leaks. $950M+ client revenue | WSJ bestselling author

    101,090 followers

    "We brought in a trainer for two days and nothing changed." Of course it didn't. You treated training like a checkbox activity. Sales leaders constantly make this mistake: → Hire external trainer for 2-day workshop → Everyone gets excited during sessions → 30 days later, zero behavior change → "Training doesn't work" Wrong. Your approach to training doesn't work. Here's what actually happens: Day 1: Reps are pumped. Taking notes. Asking questions. Day 2: Still engaged. Ready to implement everything. Day 30: Back to old habits. Zero retention. Why? Because you treated symptoms, not the disease. You didn't change their daily habits. You didn't provide ongoing reinforcement. You didn't build systems for accountability. Real training that creates lasting change looks different: #1 It's diagnostic first. Before any training, you identify specific skill gaps through call reviews, deal analysis, and performance data. Not generic "they need better discovery" but specific "they ask surface level pain questions but never uncover business impact." #2 It's delivered in sprints. Six weeks of twice-weekly sessions beats a 2-day workshop every time. Reps can practice between sessions, get feedback, and build muscle memory. #3 It includes reinforcement systems. Weekly coaching calls, peer practice sessions, and manager check-ins. The learning doesn't stop when the trainer leaves. #4 It measures behavior change, not satisfaction scores. "Did you like the training?" is worthless. "Are you now asking better discovery questions?" matters. #5 It provides job aids and frameworks. Reps need cheat sheets, email templates, and conversation guides they can reference in real situations. Most importantly: It's customized to your specific challenges, not generic sales advice. The companies that see 40%+ improvement in performance don't do one-off training events. They build learning into their culture. They have weekly skill-building sessions. They do call reviews with specific feedback. 
They practice objection handling until it's automatic. Stop buying training like it's a magic pill. Start building capability like it's a muscle that needs consistent exercise. Your reps deserve better than motivational speeches that wear off in a week. — Tired of wasted training budgets? I'll design a performance improvement system that actually creates lasting behavior change. Book a diagnostic: https://lnkd.in/ghh8VCaf

  • View profile for Apoorva N

    AI- Driven Global Learning & Development Leader || HRAI 30 Under 30 Winner 2024 & 2025 || Dale Carnegie Certified Facilitator|| Building Learning Solutions

    10,017 followers

    𝐓𝐡𝐞 𝐒𝐞𝐜𝐫𝐞𝐭 𝐭𝐨 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 𝐓𝐡𝐚𝐭 𝐀𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐖𝐨𝐫𝐤𝐬? 𝐒𝐭𝐚𝐫𝐭 𝐚𝐭 𝐭𝐡𝐞 𝐄𝐧𝐝. 🏁
    I used to think my job as an L&D professional started with a syllabus. I was wrong.
    Recently, I was tasked with building a learning solution for our Talent Acquisition (TA) team. The goal wasn't just to "train recruiters"—it was to solve a business problem. Instead of starting with what they needed to know (Level 2), I started with what the business needed to achieve (Kirkpatrick Level 4).
    The "Reverse" Approach: I didn't start with slides. I started by analyzing Voice of the Customer (VOC) survey results, focusing on metrics from both Hiring Managers and Candidates.
    Working backwards:
    ✅ Level 4 (Results): I defined the business KPI.
    ✅ Level 3 (Behavior): Based on the VOC metrics, I identified the specific actions recruiters needed to change—specifically around "Precision Intake" and "Candidate Experience Management."
    ✅ Levels 2 & 1 (Learning & Reaction): Only then did I design the actual training content that addressed those specific behavior gaps.
    The result? The training didn't feel like a chore; it felt like a solution. Because I built it on the actual metrics revealed in the VOC surveys, the TA team saw immediate value, and the business saw a measurable shift in hiring efficiency.
    The lesson: if you want your learning solutions to be more than just "check-the-box" exercises, stop asking "What should we teach?" and start asking "What does the data say I need to solve?"
    How do you use VOC data to shape your enablement programs? 👇
    #LearningAndDevelopment #InstructionalDesign #TalentAcquisition #KirkpatrickModel #Enablement #DataDrivenLD #BusinessImpact
