Creating Surveys That Get Real Training Feedback


Summary

Creating surveys that get real training feedback means designing questions that help you understand how well your training actually works, not just how much people enjoyed it. Instead of relying on generic forms, focus on gathering meaningful input that leads to real improvements in learning and business outcomes.

  • Ask targeted questions: Include specific prompts about course content, teaching style, and skill application to gain insights that help you improve future training sessions.
  • Show real impact: Share results and follow up by highlighting changes made based on previous feedback so participants know their input matters.
  • Track progress over time: Use surveys at different stages—before, during, and after training—to monitor knowledge gains, behavior changes, and business results.
Summarized by AI based on LinkedIn member posts
  • Luke Hobson, EdD

    Assistant Director of Instructional Design at MIT | Author | Podcaster | Instructor | Public Speaker

    33,977 followers

    When I first started teaching online back in 2017, the course evaluation process bothered me. Initially, I was excited to get feedback from my students about their learning experience. Then I saw the survey questions. Even though there were about 15 of them, none actually helped me improve the course. They were all extremely generic and left me scratching my head, unsure of what to do with the information. It’s not like I could ask follow-up questions or suggest improvements to the survey itself. Understandably, the institution used these evaluations for its own data points, and there wasn’t much chance of me influencing that process. So, I decided to take a different approach. What if I created my own informal course evaluations that were completely optional? In this survey, I could ask course-specific and teaching-style questions to figure out how to improve the course before the next run started. After several revisions, I came up with these questions:

    - Overall course rating (1–5 stars)
    - What was your favorite part (if any) of this course?
    - What did you find the least helpful (if any) during this course?
    - Please rate the relevancy of the learning materials (readings and videos) to your academic journey, career, or instructional design journey. (1 = not relevant at all, 10 = extremely relevant)
    - Please rate the relevancy of the learning activities and assessments to your academic journey, career, or instructional design journey. (1 = not relevant at all, 10 = extremely relevant)
    - Did you find my teaching style and feedback helpful for your assignments?
    - What suggestions do you have for improving the course (if any)?
    - Are there any other comments you'd like to share with me?

    I was—and still am—pleasantly surprised at how many students complete both the optional course survey and the official one. If you're looking for more meaningful feedback about your courses, I recommend giving this a try! This process has really helped me improve my learning experiences over time.
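
The question list above can also be kept as a small data structure, which makes it easy to revise between course runs and to render consistently into any survey tool. Below is a minimal Python sketch; the field names and the `render` helper are illustrative assumptions, not part of the original post.

```python
# Luke Hobson's informal course-evaluation questions as a config list.
# "type" and "scale" are assumed field names for this sketch.
SURVEY = [
    {"type": "stars", "scale": (1, 5), "text": "Overall course rating"},
    {"type": "open", "text": "What was your favorite part (if any) of this course?"},
    {"type": "open", "text": "What did you find the least helpful (if any) during this course?"},
    {"type": "likert", "scale": (1, 10),
     "text": "Relevancy of the learning materials (readings and videos)"},
    {"type": "likert", "scale": (1, 10),
     "text": "Relevancy of the learning activities and assessments"},
    {"type": "open", "text": "Did you find my teaching style and feedback helpful for your assignments?"},
    {"type": "open", "text": "What suggestions do you have for improving the course (if any)?"},
    {"type": "open", "text": "Are there any other comments you'd like to share with me?"},
]

def render(survey):
    """Return the survey as numbered plain-text lines, with scales appended."""
    lines = []
    for i, q in enumerate(survey, start=1):
        suffix = ""
        if "scale" in q:
            lo, hi = q["scale"]
            suffix = f" ({lo}-{hi})"
        lines.append(f"{i}. {q['text']}{suffix}")
    return lines

for line in render(SURVEY):
    print(line)
```

Keeping the questions in one place like this also makes it trivial to diff the survey between course runs when you revise it.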

  • Pedro Ventura

    People Development & Performance with a touch of technology

    3,301 followers

    [EN] Ways to measure the success of your training and prove impact: In years of experience in the L&D market and being an L&D mentor at the L&D SHAKERS community, I’ve faced different situations where mentees come to me for help in thinking about how to “prove that their training programs really work.” Today, I decided to share a little about it and maybe help more L&D fellows. ♥️ The truth is that there isn't a simple answer (like everything in life haha), but there are some ways to do it better. Below are some insights (hope they help you):

    👉 Planning the measurement method is part of training design
    Often forgotten but always necessary. Your L&D solution isn’t ready to launch if you don’t know which indicators, success metrics, and measurement methods you’ll use. Believe me: if you only think about it in the middle of the training program or session, it will be harder to prove success and results!

    👉 Tracking the experience: NPS/CSAT/Feedback survey
    The top method and easiest way to measure. There are no tricks here. Make sure all participants receive an NPS/CSAT/Feedback survey at the end of the session. The recommendation is: be simple! Ask 1 to 3 closed questions plus an optional comment box. The number of responses is highly dependent on the size of the form. Fewer questions = More answers = More data reliability

    👉 Tracking behavior change: Assessments
    My favorite, especially for training programs (less useful for workshops or one-session training). The idea is to track some participants’ behaviors before the training and some time after it (weeks to months) using a survey where you can identify how learners act and think about certain issues. In general, Likert-type questions work best. The difference between the first assessment and the final one will help you understand where your training program helps. An extra option is to use a 180° or 360° assessment, where coworkers, stakeholders, and/or direct reports receive the same assessment to answer about the participant’s behaviors.

    👉 Tracking incidental results: Business changes
    Let’s face it: proving that a training program changes a business metric is a dream but hard to achieve. But with time and attention, in some cases (especially in more technical training), this can be possible. To make it happen, try to map all the impacts that your training program can cause. Which aspects of leadership can change? How has the client's NPS improved after the CSM team completed the training program? After mapping the possible impacts, call on some coworkers from the business/growth/data areas to help you track these results during the training period.

    So, there are countless ways to explore this topic, and this really deserves an exploratory article (soon). But until then, I’d recommend you check out some experts in HR data and training measurement:
    📌 Dr. Alaina Szlachta
    📌 David Green 🇺🇦
    📌 Kirkpatrick Partners, LLC
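
Two of the measurements above reduce to simple arithmetic. A minimal sketch, assuming the standard NPS convention (promoters score 9–10, detractors 0–6 on a 0–10 scale) and a Likert assessment scored numerically; the function names and sample data are illustrative.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def likert_delta(before, after):
    """Mean shift between the pre-training and follow-up assessments."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(after) - mean(before)

# Hypothetical data: 3 promoters and 2 detractors out of 7 respondents.
print(nps([10, 9, 9, 8, 7, 6, 3]))         # -> 14
# A positive delta suggests the behavior the training targets improved.
print(likert_delta([2, 3, 3], [4, 4, 5]))
```

As the post notes, it is the before/after difference, not either score alone, that tells you where the program helps.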

  • Jarrod Harden

    Belonging Coach & Team Engagement Expert | Dynamic Workshop Facilitator | “Bringing joy, listening and connection back to the work”

    3,655 followers

    Stop calling it Survey Fatigue. It’s probably “Nothing Changes Anyway” Fatigue. If you want people to keep sharing what they think and feel, you have to earn it. Show them you’re listening and that it matters. Here’s how to do it right…

    1. “Where are the Receipts???” Before launching a new survey, show what you did with the last one. Remind employees what they shared and how it led to real change. Even small wins matter here. This is where trust begins.

    2. Respect Their Time. Run the survey with clear communication and thoughtful outreach. Give people a reason to care while acknowledging the time it takes. Celebrate your early responders and follow up with the rest respectfully, even those last-minute stragglers…

    3. Don’t Sit on the Results. Your people already know what’s working and what isn’t because they told you. Give a high-level overview of what came up. They don’t need every detail, but enough to know you’re paying attention.

    4. Time for Action. Pick a few key areas and plan what you’ll do… then actually do it. Planning is part of action, but it can’t be where it stops. Keep people updated on what’s happening and what’s next. Show progress, even if it’s just the first steps.

    “Nothing Changes Anyway” Fatigue is REAL. If your survey process ends with “thanks for your feedback,” you’re doing it wrong. A good survey cycle proves you’re listening and acting. That’s how you earn trust, every time.

  • Liz C.

    CEO | MBA | Medical Education | Physician and Sales Training Expert | Athlete | Wife | Mom

    6,848 followers

    Smile Sheets: The Illusion of Training Effectiveness. If you're investing ~$200K per employee to ramp them up, do you really want to measure training effectiveness based on whether they liked the snacks? 🤨 Traditional post-training surveys—AKA "Smile Sheets"—are great for checking if the room was the right temperature but do little to tell us if knowledge was actually transferred or if behaviors will change. Sure, logistics and experience matter, but as a leader, what I really want to know is:
    ✅ Did they retain the knowledge?
    ✅ Can they apply the skills in real-world scenarios?
    ✅ Will this training drive better business outcomes?
    That’s why I’ve changed the way I gather training feedback. Instead of a one-and-done survey, I use quantitative and qualitative assessments at multiple intervals:
    📌 Before training to gauge baseline knowledge
    📌 Midway through for real-time adjustments
    📌 Immediately post-training for insights while they’re fresh
    📌 Strategic follow-ups tied to actual product usage & skill application
    But the real game-changer? Hard data. I track real-world outcomes like product adoption, quota achievement, adverse events, and speed to competency. The right metrics vary by company, but one thing remains the same: Smile Sheets alone don’t cut it. So, if you’re still relying on traditional post-training surveys to measure effectiveness, it’s time to rethink your approach. How are you measuring training success in your organization? Let’s compare notes. 👇 #MedDevice #TrainingEffectiveness #Leadership #VentureCapital
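
The multi-interval idea above can be sketched as a score series per learner, with each checkpoint compared against the baseline. The checkpoint names, scoring scale, and sample learner below are illustrative assumptions, not data from the post.

```python
# Checkpoints mirroring the post: baseline, midway, post-training, follow-up.
CHECKPOINTS = ["baseline", "midway", "post", "followup"]

def gains(scores):
    """Score change at each later checkpoint relative to the baseline."""
    base = scores["baseline"]
    return {c: scores[c] - base for c in CHECKPOINTS if c != "baseline"}

# Hypothetical learner on a 0-100 assessment: strong gain by post-training,
# slight decay by the strategic follow-up.
learner = {"baseline": 52, "midway": 68, "post": 84, "followup": 79}
print(gains(learner))  # -> {'midway': 16, 'post': 32, 'followup': 27}
```

The follow-up-minus-baseline number is the one worth watching: a big post-training gain that evaporates by the follow-up is exactly the retention problem a one-and-done smile sheet never surfaces.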

  • Kevin Lau

    I help customer marketers prove their value | VP Customer Marketing @ Freshworks | ex-F5, Adobe, Marketo, Google

    14,764 followers

    You Don’t Need Another Survey Tool. 𝗬𝗼𝘂 𝗡𝗲𝗲𝗱 𝗮 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆. Surveys are everywhere. But most teams are doing them wrong. A CSAT here. An NPS there. A last-minute ask from Product. 𝗦𝘂𝗿𝘃𝗲𝘆𝘀 𝗵𝗮𝘃𝗲 𝗯𝗲𝗰𝗼𝗺𝗲 𝗻𝗼𝗶𝘀𝗲 — 𝗻𝗼𝘁 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝘆. And that’s the missed opportunity.

    👉 When done right, surveys are one of the most strategic tools to support the entire customer lifecycle. From insight → to alignment → to activation at scale. Here’s what the best teams do differently:
    🔹 𝗢𝗻𝗯𝗼𝗮𝗿𝗱𝗶𝗻𝗴 𝗰𝗵𝗲𝗰𝗸𝗽𝗼𝗶𝗻𝘁𝘀: Spot friction and confidence gaps before they slow down time-to-value. → Align with CS to fix the onboarding experience fast.
    🔹 𝗣𝗼𝘀𝘁-𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗽𝘂𝗹𝘀𝗲𝘀: See if enablement actually lands — by role or use case. → Adjust training and reinforce what matters most.
    🔹 𝗤𝘂𝗮𝗿𝘁𝗲𝗿𝗹𝘆 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗹𝗼𝗼𝗽𝘀: Trigger comms plays, build success plans, and spotlight customer wins. → Give Marketing, CS, and Product a shared signal set.

    If you're not sure where to start, UserEvidence recently created a customer surveying playbook that highlights some of these ideas for always-on surveys –– see link in comments.

    A great survey doesn’t just collect data. It drives momentum across the journey:
    → Source stories for Advocacy
    → Launch adoption and retention campaigns
    → Tailor communications by segment and stage
    → Flag risk and friction before escalation
    This is how feedback becomes fuel — not just a form.

    Want to level up?
    ✔ Ask fewer, smarter questions — 3 to 5 is enough
    ✔ Follow up within 7 days — internally and with customers
    ✔ Use AI to:
      • Tag responses by journey stage
      • Detect signals (story potential, risk, friction)
      • Summarize themes by persona and use case

    The uncomfortable truth? You don’t need more feedback. 𝗬𝗼𝘂 𝗻𝗲𝗲𝗱 𝗮 𝘀𝘆𝘀𝘁𝗲𝗺 𝘁𝗼 𝗮𝗰𝘁 𝗼𝗻 𝗶𝘁. You don’t need more tools. 𝗬𝗼𝘂 𝗻𝗲𝗲𝗱 𝗯𝗲𝘁𝘁𝗲𝗿 𝘁𝗶𝗺𝗶𝗻𝗴 — 𝗮𝗻𝗱 𝗯𝗲𝘁𝘁𝗲𝗿 𝗮𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁. Surveys aren’t a task. They’re your growth engine. What’s a survey signal your team is sleeping on?
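
The "detect signals" step above does not require AI at all to get started: a keyword heuristic catches the obvious cases and shows the shape of output a real classifier or LLM call would produce. The signal names follow the post (story potential, risk, friction); the keyword lists and function name below are assumptions for this sketch.

```python
# Keyword lists per signal are illustrative; a production pipeline would
# replace this lookup with a trained classifier or an LLM call.
SIGNALS = {
    "risk": ["cancel", "frustrated", "switch"],
    "friction": ["confusing", "slow", "hard to"],
    "story": ["love", "saved us", "great"],
}

def tag(response):
    """Return every signal whose keywords appear in the response text."""
    text = response.lower()
    return sorted(s for s, words in SIGNALS.items()
                  if any(w in text for w in words))

print(tag("Onboarding was confusing and slow"))        # -> ['friction']
print(tag("We love the product, it saved us hours"))   # -> ['story']
```

Routing is then a dictionary lookup away: 'friction' responses go to CS, 'risk' to the account owner, 'story' to Advocacy — the shared signal set the post describes.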
