Remember that bad survey you wrote? The one that came back full of obviously biased responses and left you doubting whether your respondents even understood the questions? Creating a survey may seem like a simple task, but even minor errors can produce biased results and unreliable data. If this has happened to you, it's likely due to one or more of these common mistakes in your survey design:

1. Ambiguous Questions: Vague wording like “often” or “regularly” leads to varied interpretations among respondents. Be specific: use clear options like “daily,” “weekly,” or “monthly” to ensure consistent, comparable responses.

2. Double-Barreled Questions: Combining two questions into one, such as “Do you find our website attractive and easy to navigate?”, confuses respondents and produces answers you can't interpret. Break these into separate questions to get precise, actionable feedback.

3. Leading/Loaded Questions: Questions that push respondents toward a specific answer, like “Do you agree that responsible citizens should support local businesses?”, introduce bias. Keep your questions neutral to gather genuine opinions.

4. Unwarranted Assumptions: Assuming respondents have certain knowledge or opinions can skew results. For example, “Are you in favor of a balanced budget?” assumes understanding of its implications. Provide the necessary context so respondents fully grasp the question.

5. Burdensome Questions: Complex or detail-heavy recall questions, such as “How many times have you dined out in the last six months?”, overwhelm respondents and invite guesses. Simplify them or offer multiple-choice ranges to make them easier to answer accurately.

6. Mishandled Sensitive Topics: Sensitive questions, like those about personal habits or finances, need careful phrasing to avoid discomfort. Use neutral language, provide options to skip or anonymize answers, or employ tactics like the Randomized Response Survey (RRS) to encourage honest, accurate responses.

By being aware of and avoiding these potential mistakes, you can create surveys that produce precise, dependable, and useful information. #Analytics #DataStorytelling
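The Randomized Response Survey mentioned under point 6 is worth seeing in miniature. The sketch below (an illustration of one classic variant, the forced-“yes” design, not a production survey tool; the function names are invented) has each respondent answer truthfully only with some probability, so no single “yes” is incriminating, yet the population rate can still be recovered from the aggregate.

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.5) -> bool:
    """Answer truthfully with probability p_truth; otherwise answer 'yes'
    regardless of the truth, so any individual 'yes' is deniable."""
    if random.random() < p_truth:
        return true_answer
    return True

def estimate_true_rate(responses, p_truth: float = 0.5) -> float:
    """Invert P(yes) = p_truth * pi + (1 - p_truth) to recover pi,
    the true population rate of 'yes'."""
    observed = sum(responses) / len(responses)
    pi = (observed - (1 - p_truth)) / p_truth
    return min(max(pi, 0.0), 1.0)  # clamp to a valid proportion

random.seed(42)  # simulate 10,000 respondents, 30% of whom are truly 'yes'
truth = [random.random() < 0.30 for _ in range(10_000)]
answers = [randomized_response(t) for t in truth]
print(round(estimate_true_rate(answers), 2))  # should land close to 0.30
```

The trade-off is variance: the more privacy you grant (lower `p_truth`), the larger the sample you need for a stable estimate.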
Training Feedback Mechanisms
-
My workshop feedback method has a 100% response rate and uses zero forms.

I ditched post-workshop surveys because… no one filled them out, and the ones who did wrote things like “Great workshop 🤗” (helpful… ish ⁉️).

So now I use my four-question, four-colour sticky-note system at the close of a workshop. It’s fast, visual, and human. It surfaces real language, real commitments, and real insight. Reflection becomes baked into the workshop instead of bolted on.

Here’s the magic. I ask everyone to respond to these phrases individually:

🟡 “I learned / liked / aha!” - Quick bursts of insight. One idea per sticky. No faffing.

🟢 “I will…” (What ideas do you plan to implement immediately?) - The gold. Actual commitments. I can instantly see what’s going to live beyond the room.

🔴 “I wish…” (What support do you need, or what else do you wish we had explored today?) - Constructive, honest improvement ideas and what they need to succeed post-workshop. Better than any anonymous text box.

🔵 One word (What single word best describes your overall reaction to the session?) - These become my word cloud*, and it tells me the emotional temperature in one glance.

Then, in small groups, participants choose their top insights, star them, and share them with the room. It turns into this joyful moment where you can see what activities really landed and what learning truly stuck.

Impact?
• I can literally see what resonated.
• The “I will…” notes show behaviour change starting before people even leave the room.
• The “I wish…” notes help me evolve each workshop immediately.
• And the one-word cloud gives me a pulse check that’s surprisingly accurate. (see word cloud from 10 workshops* - 210 words - in comments)

Yes, I still type them all into a spreadsheet by hand (there’s something human and connective about reading people’s handwriting). Then I let AI help me spot themes and patterns.

It’s simple. It’s human. It works. And it gives clients tangible, meaningful insights...
Curious: how do you gather feedback that actually helps you get better? #PlayMore #JudgeLess #feedback #facilitation
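Once the one-word stickies above are typed into a spreadsheet, the word-cloud tally is a few lines of standard Python. A minimal sketch (the response words below are invented placeholders; the counting logic is the point):

```python
from collections import Counter

# Hypothetical one-word responses transcribed from the blue stickies
words = ["energised", "inspired", "curious", "energised", "playful",
         "inspired", "energised", "stretched", "curious", "playful"]

# Normalise case/whitespace and count; the counts drive word-cloud sizing
counts = Counter(w.strip().lower() for w in words)

for word, n in counts.most_common(3):
    print(f"{word}: {n}")
```

The same frequency table feeds any word-cloud library or a quick bar chart, and it gives an honest "emotional temperature" reading before any AI theme-spotting is involved.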
-
No one really likes filling out surveys after a workshop. STOP and do these three things instead.

Let’s be honest: you feel awkward giving out surveys. Participants want to get on with their lives. And everyone groans a little inside.

Don't get me wrong. Surveys are valuable tools, and any self-respecting consultant, trainer, or facilitator will want to know how a session landed. They help us demonstrate value and provide feedback to help us grow and improve.

Here are 3 alternative ways to collect data:

1️⃣ Set clear, active objectives
Write objectives using verbs. This lets you SEE and HEAR success.
Avoid vague objectives like “to understand”.
Share them at the start and track them throughout.

2️⃣ Debriefs
Structure the debrief.
Give participants space to process.
Get the learning.
Include accountability and ownership.

3️⃣ Use a feedback wall
Set up a wall (physical or virtual) with 2 prompts:
- “What I appreciated about this session...”
- “What would have made it better for me...”
Encourage participants to leave notes as they leave.

These techniques not only give you valuable feedback but also create a more reflective and engaging experience. They don't measure long-term impact, but they give you proxy indicators. Because transformation happens when clarity meets action. Design with intention, deliver with purpose.

And if you're self-employed, it's a credibility game changer that will have your clients wanting you back for more.

♻️ Share if you think more facilitators should consider these.
✍️ Have you used any of these strategies before?
-
Training games are worthless when they lack one element, missing from most of them: an effective debriefing.

In the past few months, I have researched quite a lot about training games on:
- Team Building
- Leadership
- Communication, etc.

While every resource available out there on the net showed the execution seamlessly, most failed to show the debriefing of these games. And this is where real-time learning takes place.

If your session has the usual balloon, rope, and ball game without any relevance to the training topic, you will never be able to create the right impact. This is where trainers must focus on a few techniques to make the debriefing impactful. Let’s decode them:

1️⃣ Plus Minus Interesting: After that fun-filled game, focus on extracting the following:
• What was positive?
• What was negative?
• What was interesting?

2️⃣ The Rose, Bud, Thorn: In this technique, use your observations from the last game to help the participants focus on the following:
• A rose - What was done well?
• A bud - What could have been better?
• A thorn - What challenges were faced while performing the act?

3️⃣ Tribal Council: In this technique, gather all the members in a circle. Ask a few of them to share their biggest ‘aha moment’. Make a note of their observations and relate the relevant points to the training topic.

4️⃣ The Sharpshooter: In this technique, ask for 2 key takeaways from 3-4 people. Pick 1 keyword from their responses. Ask mindful questions around that keyword to draw further responses from those who did not share.

Debriefing isn't just about having a fun discussion. It's about extracting maximum learning and growth from every experience.

Which of these techniques did you find interesting? Let me know in the comments section.

#training #debriefing #learninganddevelopment #corporatetraining #personaldevelopment
-
Engagement survey results are in. Nobody's celebrating.

Picture this:
→ 150 questions about everything from career development to office temperature.
→ Mandatory participation.
→ Results that disappear into a management black hole for six months.

Then a company-wide email promising "action plans" that never materialise. Meanwhile, employees are thinking: "I told you the workload was unsustainable 8 months ago. You did nothing. Why should I bother again?"

Annual engagement surveys treat employee sentiment like a yearly health check-up: gather data once, ignore it for twelve months, then act surprised when problems have gotten worse.

The surveys that actually work are shorter, more frequent, and tied to immediate action. Pulse surveys focus on specific, changeable factors rather than abstract "satisfaction" ratings. Most importantly, they close the feedback loop. When employees raise concerns about workload, they see a management response within weeks, not in next year's survey cycle.

The best engagement measurement feels like an ongoing conversation rather than an annual interrogation. For HR teams, this means engagement data that actually drives positive change rather than just satisfying leadership's need for metrics. When employees see their feedback leading to real improvements, they stay engaged with the process instead of checking out mentally.
-
The 4 Most Effective Feedback Models

Yesterday I did a virtual keynote with a Middle Eastern governmental organisation on effective feedback. Feedback is essential to trust and connection; done well, it can strengthen connections further. Here is some of what I shared that you may find useful.

1. SBI + EBI Model (Situation–Behavior–Impact–Even Better If)
• Situation: Describe when and where the behavior occurred. “In yesterday’s client call…”
• Behavior: Describe exactly what the person did. “…you took the lead on explaining our new proposal.”
• Impact: Explain the result or effect. “The client seemed more confident about our expertise.”
• Even Better If: Offer a constructive suggestion for improvement. “It would be even better if you paused to invite questions earlier, to boost engagement.”

2. BOOST + EBI Model (Balanced–Observed–Objective–Specific–Timely–Even Better If)
• Balanced: Acknowledge both positives and areas for growth.
• Observed: Refer to things you personally witnessed.
• Objective: Remove personal bias.
• Specific: Provide concrete examples.
• Timely: Deliver feedback soon after the event.
• Even Better If: Conclude with one actionable recommendation. “Your presentation was well-paced. It would be even better if you used fewer slides to keep attention high.”

3. COIN + EBI Model (Context–Observation–Impact–Next Steps–Even Better If)
• Context: Set the scene for when/where.
• Observation: Describe specific behavior.
• Impact: Share the effect on results, people, or outcomes.
• Next Steps: Co-create solutions together.
• Even Better If: Add a stretch goal or aspirational suggestion. “Your report was clear and data-driven. It would be even better if you added a short executive summary for quick reference.”

4. Radical Candor + EBI (Care Personally–Challenge Directly–Even Better If)
• Care Personally: Show genuine respect and support.
• Challenge Directly: Be honest and clear about what needs improvement.
• Even Better If: Offer a suggestion that supports growth and mutual trust. “I know you’re deeply committed to excellence. It would be even better if you delegated more so the team can learn from you.”

I hope this helps; do share it with anyone having to dole out feedback this time of year. Just one more speaking engagement to go to round out the year!

Simone Heng
#author #loneliness #humanconnection #keynotespeaker
-
This week's theme in my workshops (and, by extension, my posts to you here) is assessing data collection tools (like surveys) for inclusion and access.

Most of my workshops start at the same place: most participants have designed at least one survey in a current or past job, or in their education. And then it takes three hours and some meaningful collective learning to realize that planning a survey is much more than just writing a list of questions. It is an opportunity to connect with your community directly, hear their stories, and understand their experiences and expressions of engagement.

In this post, I want to share 5 "red flag" behaviors I often see during the survey design phase:

● When the only questions included ask for positive feedback. We all love hearing good things, but only asking for positive feedback closes off some real growth opportunities.
Example: A question like "What did you love most about our event?" assumes your respondent loved the event, and offers no room for a different experience.

● When questions are overloaded with complicated words or jargon that only a few will know. You know your mission inside and out, but your community might not understand the same terms you do. Speak their language. Think of your survey as a conversation.
Example: A question like "How would you rate the efficacy of our donor stewardship activities?" assumes everyone understands what "stewardship" entails.

● When every possible question about every possible aspect of the mission is asked, because "why not". Surveys that run longer than 10-12 minutes, without context, can feel like asking for too much. Be mindful of the respondents and of what the data collection actually needs. Every question should have a purpose.

● When questions undermine anonymity. Our communities are diverse, and our surveys should hold a safe space for those communities. Balancing truly useful demographic questions against accessibility means not harming anyone's anonymity, which makes the experience of collecting data easier and more meaningful.
Example: A survey asking about racial and ethnic identity in a group with a 99% homogeneous population, making the 1% racially diverse respondents nervous about a possible breach of anonymity.

● When questions do not offer an opt-out and everything is marked required. Some questions may feel too personal or uncomfortable to answer, and our surveys must create space for that. Give respondents room to skip a question if they need to.
Example: A survey that requires donors to disclose their income range without offering a way to skip the question if they're uncomfortable sharing that information.

Stay tuned for a soon-to-come post on what we can do differently. Have any other such behaviors? Share them here.

In the meantime, try some of these resources (all designed to do good with data): https://lnkd.in/gUK-6M_Y

#nonprofits #community
-
“Train-the-trainers” (TTT) is one of the most common methods used to scale up improvement & change capability across organisations, yet we often fail to set it up for success.

A recent article, drawing on teacher professional development & transfer-of-training research, argues TTT should always be based on an “offer-and-use” model:

OFFER: what the programme provides: facilitator expertise, session design, practice opportunities, feedback, follow-up support & evaluation.

USE: what participants do with those opportunities: what they notice, how they make sense of it, how much they engage, what they learn, & whether they apply it in real work.

How to design TTT that works & sticks:

1. Design for real-world use: Clarify the practical outcome: what trainers should do differently in their next sessions & what that should improve for the organisation. Plan beyond the classroom with post-course support so people can apply learning. Space learning over time rather than delivering it in one intensive block, because spacing & follow-ups support sustained use.

2. Use strong facilitators: Select facilitators who know the topic, how adults learn, how groups work, & how to give useful feedback. Ensure they teach “how to make this stick at work” (apply & sustain practices), not only “how to deliver a session.”

3. Make practice central: Build the programme around realistic rehearsal: deliver, get feedback, & practise again until skills become automatic. Use participants’ real scenarios (especially change situations) to strengthen transfer. Include safe practice for difficult moments (challenge, unexpected questions) & treat mistakes as learning. Build peer learning so participants learn with & from each other, not just the facilitator.

4. Prepare participants to succeed: Assess what participants already know & can do, then tailor the learning. Build confidence to use skills at work (confidence predicts application). Help each person create a simple, specific plan for when & how they will use the approaches in their next training sessions.

5. Ensure workplace transfer support: Enable quick application (opportunities to deliver training soon after the course), plus time & resources to do it well. Provide ongoing support (feedback, coaching, & encouragement) from leaders, peers, &/or the wider organisation.

6. Evaluate what matters: Go beyond satisfaction scores: assess whether trainers changed their practice & whether this improved outcomes for learners & the organisation. Use findings to improve the next iteration as a continuous improvement cycle, not a one-off event.

https://lnkd.in/eJ-Xrxwm

By Prof. Dr. Susanne Wisshak & colleagues, sourced via John Whitfield MBA
-
A problem with the Kirkpatrick taxonomy (not a model, not a theory) of evaluating instruction is that by its very design it is evaluation by autopsy: we may know a program didn't work, but not what went wrong or how to fix it.

Practitioners looking for other ideas might want to take a look at Robert Brinkerhoff, who, viewing training as a process rather than an event, said: "Evaluating a training program is like evaluating the wedding instead of the marriage." His success case method is a wonderful substitute for, or if you must, supplement to, Kirkpatrick.

And consider, too, Daniel Stufflebeam's CIPP model, which looks at an entire program from context to inputs to organizational support to outcomes and on to transferability.

As a practitioner, are you trying to prove results, or drive improvement? More: https://lnkd.in/eFWkR-5J
-
A lot of trainers run a great exercise… and then waste the learning moment that follows.

The debrief is where performance improvement actually happens. But too often we get generic reflections: “Yeah, that was good” or “Interesting exercise.” None of that helps anyone perform better back on the job.

A simple tool I use in almost every session, face-to-face or virtual, is the Feedback Grid. It structures the debrief so delegates can evaluate the outcomes of an exercise, not just how it felt. Here’s exactly how to use it straight after an activity:

1. Set up the 4 quadrants before the exercise
Worked Well (+) | Needs Change (Δ) | Questions (?) | New Ideas (💡)
By having it visible from the start, delegates know there will be a structured review, not a free-for-all discussion.

2. Immediately after the exercise, ask individuals to add notes
Give everyone 2–3 minutes to jot down their thoughts in each category. This stops dominant voices from setting the tone and gives you a broader view of what actually happened. In a virtual room, this is as simple as shared online sticky notes. Face-to-face, use flipcharts or a whiteboard.

3. Analyse the activity, not the activity’s “vibe”
This is where most trainers go wrong. We’re not asking whether they “liked” the exercise. We’re capturing what the exercise showed about their skills, behaviours, and decision-making. Examples might include:
Worked Well: “Clearer roles helped us move faster.”
Needs Change: “We didn’t communicate early enough.”
Questions: “How do we apply this under time pressure?”
New Ideas: “Create a decision checklist before starting.”
These are performance insights, not opinions.

4. Turn the grid into next-step actions
Once patterns emerge, summarise 2–3 practical actions they can take into the workplace. This is where the ROI sits. The exercise becomes a rehearsal, and the grid becomes the bridge to real work.

5. Keep the pace tight
A structured debrief shouldn’t drag. Five to eight minutes is enough to turn a simple exercise into a meaningful learning moment.

When used properly, the Feedback Grid transforms exercises from “fun activities” into performance diagnostics. That’s the whole point of training: to improve what people do, not what they think about the training.

What do you use for this?

Follow me at Sean McPheat for more L&D content and then hit the 🔔 button to stay updated on my future posts.
♻️ Save for later and repost to help others.
📄 Download a high-res PDF of this & 250 other infographics at: https://lnkd.in/eWPjAjV7
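For anyone running the Feedback Grid virtually, the four-quadrant structure is easy to capture as data once the sticky notes are collected. A minimal sketch (the `build_grid` helper and the sample notes are illustrative, not part of the original method):

```python
from collections import defaultdict

# The four quadrants of the Feedback Grid
QUADRANTS = ("Worked Well", "Needs Change", "Questions", "New Ideas")

def build_grid(notes):
    """Group (quadrant, note) pairs into the four-quadrant grid,
    rejecting anything filed outside the agreed categories."""
    grid = defaultdict(list)
    for quadrant, text in notes:
        if quadrant not in QUADRANTS:
            raise ValueError(f"Unknown quadrant: {quadrant}")
        grid[quadrant].append(text)
    return grid

# Sample delegate notes, echoing the examples in the post
notes = [
    ("Worked Well", "Clearer roles helped us move faster"),
    ("Needs Change", "We didn't communicate early enough"),
    ("Questions", "How do we apply this under time pressure?"),
    ("New Ideas", "Create a decision checklist before starting"),
    ("Worked Well", "Timeboxing kept energy high"),
]
grid = build_grid(notes)
print(len(grid["Worked Well"]))  # 2
```

Keeping the quadrant names fixed makes it straightforward to compare grids across sessions and spot which "Needs Change" themes recur.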