80% of feedback never changes behavior. Not because people don't care, but because of how it's delivered.

Your style and tone make a difference. The feedback you give can spark change or trigger resistance. It's not about being "nice" or "tough." It's about being strategic.

Here are 5 approaches that turn tough conversations into growth opportunities:

1. COIN Method: for when performance needs a reset.
Most people jump straight to criticism, but starting with context creates safety. "In yesterday's meeting…" feels specific. "You always…" feels like an attack. The magic is in the Next step: don't just point out problems. Co-create solutions.

2. SBI Model: for when you're recognizing wins or addressing gaps.
Vague praise like "Great job" doesn't teach. Specific feedback does. "When you asked that clarifying question, the client leaned in…" That's something they can actually repeat.

3. STAR/AR Method: for when someone's ready to level up.
Most feedback looks backward. This one builds forward. Review what happened, then explore alternatives. You're not just fixing mistakes. You're expanding capacity.

4. DESC Script: for when you need to set boundaries.
Boundaries don't push people away. They build trust. The key is Express: own your experience without blame. "I feel…" lands. "You make me feel…" doesn't. That's how accountability shifts.

5. GROW Model: for when someone needs guidance, not answers.
Old-school feedback says "Here's what to do." GROW says "Let's uncover it together." The power move? Stay curious longer. Ask "What else?" at least 3 times. The best ideas usually come last.

One more truth: timing beats technique. Give feedback within 48 hours, while memory is fresh. Don't fire off complaints in the moment, and don't wait for the once-a-year performance review. Find the sweet spot where perspective is clear and the moment still matters. That's when feedback creates growth.
What Are the Best Feedback Models?
Explore top LinkedIn content from expert professionals.
Summary
Feedback models are structured approaches that help people share constructive input in a way that builds trust, clarity, and positive change. The best feedback models use clear examples, focus on actions, and encourage ongoing improvement rather than criticism.
- Use specific examples: Describe the situation and actions clearly so your feedback is easy to understand and relate to.
- Ask questions: Invite the other person to share their perspective and suggest ways to improve together.
- Follow up promptly: Give feedback soon after the event and check in later to support progress and build accountability.
The sandwich method is dead.

Your team knows when you're cushioning. They see through the compliment-criticism-compliment formula. Their brains leave your office half happy, half confused. And worse: they stop trusting you.

True feedback is clear and honest. Here are 5 steps to provide clear feedback:
- Be direct about what needs improvement.
- Focus on actions, not personal traits.
- Use specific examples to illustrate your point.
- Encourage questions to clarify understanding.
- Offer support for improvement.

Try these 5 far more effective models to give clear feedback:

The SBI Model:
→ Situation: Describe what happened.
→ Behavior: Focus on actions, not thoughts.
→ Impact: Share the effect on the team or project.

The Start-Stop-Continue Model:
→ Start: Suggest new actions to take.
→ Stop: Identify what's not working.
→ Continue: Praise what is going well.

The Radical Candor Framework:
→ Care Personally: Show empathy.
→ Challenge Directly: Be honest and clear.

The Feedforward Model:
→ Focus on the future.
→ Ask how to improve next time.

The CLEAR Model:
→ Clarify: Define the issue.
→ Listen: Hear their side.
→ Explore: Find solutions together.
→ Agree: Set next steps.
→ Review: Follow up to check progress.

Each one builds confidence, accountability, and stronger performance conversations.

👉 What feedback have you been avoiding because you don't know how to say it clearly AND kindly?
-
Deep Dive into Reasoning Large Language Models

A very enlightening survey authored by a team of researchers specializing in computer vision and NLP. It underscores that pretraining, while fundamental, only sets the stage for LLM capabilities. The paper then highlights post-training mechanisms (fine-tuning, reinforcement learning, and test-time scaling) as the real game-changer for aligning LLMs with complex real-world needs.

It offers:
◼️ A structured taxonomy of post-training techniques
◼️ Guidance on challenges such as hallucinations, catastrophic forgetting, reward hacking, and ethics
◼️ Future directions in model alignment and scalable adaptation

In essence, it's a playbook for making LLMs truly robust and user-centric.

Key Takeaways

Fine-Tuning Beyond Vanilla Models
While raw pretrained LLMs capture broad linguistic patterns, they may lack domain expertise or the ability to follow instructions precisely. Targeted fine-tuning methods, like Instruction Tuning and Chain-of-Thought Tuning, unlock more specialized, high-accuracy performance for tasks ranging from creative writing to medical diagnostics.

Reinforcement Learning for Alignment
The authors show how RL-based methods (e.g., RLHF, DPO, GRPO) turn human or AI feedback into structured reward signals, nudging LLMs toward higher-quality, less toxic, or more logically sound outputs. This structured approach helps mitigate "hallucinations" and ensures models better reflect human values or domain-specific best practices.

⭐ Interesting Insights
◾ Reward Modeling Is Key: Rather than using absolute numerical scores, ranking-based feedback (e.g., pairwise preferences or partial ordering of responses) often gives LLMs a crisper, more nuanced way to learn from human annotations.
◾ Process vs. Outcome Rewards: It's not just about the final answer; rewarding each step in a chain-of-thought fosters transparency and better "explainability."
◾ Multi-Stage Training: The paper discusses iterative techniques that combine RL, supervised fine-tuning, and model distillation. This multi-stage approach lets a single strong "teacher" model pass on its refined skills to smaller, more efficient architectures, democratizing advanced capabilities without requiring massive compute.
◾ Public Repository: The authors maintain a GitHub repo tracking the rapid developments in LLM post-training, great for staying up to date on the latest papers and benchmarks.

Source: https://lnkd.in/gTKW4Jdh

#GenAI #LLM #AI RealAIzation
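The ranking-based feedback insight above has a mechanical core worth seeing: an annotator's best-to-worst ranking of n responses expands into n*(n-1)/2 pairwise preferences that a reward model can train on. Here is a minimal sketch of that expansion; the function name and data shapes are illustrative, not from the paper.

```python
from itertools import combinations

def ranking_to_pairs(ranked_responses):
    """Expand a best-to-worst ranking into (preferred, rejected) pairs.

    Because combinations() preserves input order, every earlier
    (higher-ranked) response is paired as "preferred" against every
    later (lower-ranked) one.
    """
    return list(combinations(ranked_responses, 2))

# An annotator ranked three candidate answers, best first:
pairs = ranking_to_pairs(["answer A", "answer C", "answer B"])
# → [('answer A', 'answer C'), ('answer A', 'answer B'), ('answer C', 'answer B')]
```

This is why rankings are an efficient annotation format: one pass over n responses yields quadratically many training comparisons without asking humans for absolute scores.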
-
The 4 Most Effective Feedback Models

Yesterday I did a virtual keynote with a Middle Eastern governmental organisation on effective feedback. Feedback is essential to trust and connection; done well, it can strengthen connections further. Here is some of what I shared that you may find useful.

1. SBI + EBI Model (Situation–Behavior–Impact–Even Better If)
• Situation: Describe when and where the behavior occurred. "In yesterday's client call…"
• Behavior: Describe exactly what the person did. "…you took the lead on explaining our new proposal."
• Impact: Explain the result or effect. "The client seemed more confident about our expertise."
• Even Better If: Offer a constructive suggestion for improvement. "It would be even better if you paused to invite questions earlier, to boost engagement."

2. BOOST + EBI Model (Balanced–Observed–Objective–Specific–Timely–Even Better If)
• Balanced: Acknowledge both positives and areas for growth.
• Observed: Refer to things you personally witnessed.
• Objective: Remove personal bias.
• Specific: Provide concrete examples.
• Timely: Deliver feedback soon after the event.
• Even Better If: Conclude with one actionable recommendation. "Your presentation was well-paced. It would be even better if you used fewer slides to keep attention high."

3. COIN + EBI Model (Context–Observation–Impact–Next Steps–Even Better If)
• Context: Set the scene for when/where.
• Observation: Describe specific behavior.
• Impact: Share the effect on results, people, or outcomes.
• Next Steps: Co-create solutions together.
• Even Better If: Add a stretch goal or aspirational suggestion. "Your report was clear and data-driven. It would be even better if you added a short executive summary for quick reference."

4. Radical Candor + EBI (Care Personally–Challenge Directly–Even Better If)
• Care Personally: Show genuine respect and support.
• Challenge Directly: Be honest and clear about what needs improvement.
• Even Better If: Offer a suggestion that supports growth and mutual trust. "I know you're deeply committed to excellence. It would be even better if you delegated more so the team can learn from you."

I hope this helps; do share it with anyone having to dole out feedback this time of year. Just one more speaking engagement to go to round out the year!

Simone Heng
#author #loneliness #humanconnection #keynotespeaker
-
Stop giving ineffective feedback. Here are 4 powerful models to use:

1. For 1-on-1 meetings: The S.B.I. Model
↳ Situation: Set the context
↳ Behavior: Describe specific actions
↳ Impact: Explain the consequences

2. For performance reviews: The GROW Model
↳ Goal: Set clear objectives
↳ Reality: Assess the current situation
↳ Options: Explore possibilities
↳ Will: Commit to action

3. For team settings: 360-Degree Feedback
↳ Gather input from all directions
↳ Focus on specific competencies
↳ Provide a holistic view

4. For every situation: The Feedback Sandwich
↳ Positive start: Open with encouragement
↳ Constructive core: Address areas for improvement
↳ Positive end: Close with reinforcement

Remember: effective feedback is:
- Timely
- Regular
- Balanced
- Actionable
- Specific
- Empathetic

Which model will you try in your next feedback session? Share your thoughts below! 👇
-
Leaders: stop winging feedback. Use frameworks that drive growth.

Giving feedback isn't easy, but winged feedback often leads nowhere. Without structure, your words might confuse, demotivate, or even disengage your team.

Here are 4 feedback frameworks that create clarity, build trust, and drive growth (and 1 to avoid):

1) 3Cs: Celebrations, Challenges, Commitments 🏅
→ Celebrate what's working well.
→ Address challenges with honesty.
→ End with commitments for improvement.

2) Situation-Behavior-Impact (SBI) 💡
→ Describe *specific* situations.
→ Focus on observed behavior.
→ Explain its impact on the team or its goals.

3) Radical Candor 🗣️
→ Care personally while challenging directly.
→ Show empathy but stay honest.

4) GROW Model: Goal, Reality, Options, Will ⬆️
→ Set goals for feedback.
→ Discuss the current reality.
→ Explore options for growth.
→ Commit together on action steps.

❌ 5) DO NOT USE: the Feedback Sandwich ❌
→ Start with something positive.
→ Address areas needing growth.
→ Close with another positive.
‼️ This outdated model tends to backfire as people feel manipulated.

Structured feedback isn't just about improving performance. It builds trust, fosters open communication, and creates an environment for continuous learning.

❓ Which framework do you use to give feedback?
-
If you are wondering how RLHF works, and how we can teach large language models to be helpful, harmless, and honest, read along 👇

The key isn't just in scaling up model size; it's in aligning models with human intent. The InstructGPT paper (2022) introduced a three-step process called Reinforcement Learning from Human Feedback (RLHF). Even today, it remains the foundation of how we build instruction-following models like ChatGPT. Let me walk you through the workflow in plain terms, based on the now-famous diagram below 👇

1. Supervised Fine-Tuning (SFT)
→ Start by showing the model examples of great answers to real prompts, written by humans.
→ These examples help the model learn how to respond: clear, direct, and grounded.
→ Think of this as training a junior writer by giving them a stack of perfect first drafts.
→ Even with a small dataset (13k samples), this creates a solid instruction-following base.

2. Reward Model (RM)
→ Next, we collect several outputs for the same prompt and ask humans to rank them from best to worst.
→ We then train a separate model, the reward model, to predict those rankings.
→ Now we've turned human preferences into a numerical score the model can optimize for.
→ This is the real magic: turning subjective feedback into something that can guide learning.

3. Reinforcement Learning (PPO)
→ Now the model generates new answers, gets scored by the reward model, and adjusts its behavior to maximize reward.
→ We use Proximal Policy Optimization (PPO), an RL algorithm that gently nudges the model in the right direction without making it forget what it already knows.
→ A "KL penalty" keeps it from straying too far, like a seatbelt keeping it grounded.

Why it works❓
✅ A small 1.3B model trained with this pipeline outperformed GPT-3 (175B) in human evaluations.
✅ It generalized to unseen domains with little extra supervision.
✅ And it required orders of magnitude less data than pre-training.

What this means for builders❓
→ Bigger isn't always better. Better feedback leads to better behavior.
→ Pairwise comparisons are often more scalable than manual ratings.
→ RLHF lets us teach models values, not just vocabulary.

If you're building AI systems, aligning them with human preferences isn't just a safety concern; it's a product strategy.
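The reward-model and KL-penalty steps above come down to two small formulas, shown here as a toy numeric sketch in plain Python (no real model; all names and the beta value are illustrative). The reward model trains on a pairwise log-sigmoid (Bradley–Terry style) loss over ranked answers, and during PPO the optimized reward is the RM score minus a KL penalty against the SFT model.

```python
import math

def pairwise_rm_loss(score_chosen: float, score_rejected: float) -> float:
    """Reward-model training loss: -log(sigmoid(r_chosen - r_rejected)).

    Small when the RM already scores the human-preferred answer higher,
    large when the ranking is inverted."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def ppo_reward(rm_score: float, logp_policy: float,
               logp_sft: float, beta: float = 0.1) -> float:
    """RLHF objective per sample: RM score minus a KL penalty that keeps
    the policy close to the SFT model (the "seatbelt" in the post)."""
    return rm_score - beta * (logp_policy - logp_sft)

# The loss shrinks as the chosen/rejected margin grows:
print(pairwise_rm_loss(2.0, 0.0))   # ≈ 0.127
print(pairwise_rm_loss(0.0, 2.0))   # ≈ 2.127
# Drifting away from the SFT model eats into the reward:
print(ppo_reward(1.0, -1.0, -2.0))  # 0.9
```

Note how ranking only requires the annotator to say which answer is better, yet the loss still produces a smooth, absolute score the PPO stage can maximize.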
-
98% of employees disengage when given little or no feedback. But giving it well? That's where it gets tricky.

Feedback often feels heavy, for the person giving it and for the person receiving it. That's why it can feel so easy to avoid. But it doesn't have to be that way. The CLEAR framework has helped me approach these moments differently:

C – Connect First
↳ Start with the relationship, not the issue.
↳ Trust makes the conversation easier.

L – Listen Fully
↳ Ask how they feel things have been going.
↳ You might learn something that shifts everything.

E – Explain What You Noticed
↳ Share observations, not opinions.
↳ Specifics help people understand.

A – Ask for Their Perspective
↳ Invite them in with curiosity.
↳ There's often more to the story.

R – Request a Path Forward
↳ Work together on what comes next.
↳ This turns feedback into partnership.

The best feedback conversations often don't feel like feedback at all. They feel like two people figuring something out together. That's been my experience, at least.

Every leader finds their own way with feedback. This framework is just one path. What matters is that your people know you're in their corner.

What framework do you use to give feedback?

(98% stat source: Workleap)
-
What if you could improve LLMs using direct customer feedback? 🤔

Most alignment methods like RLHF or DPO require multiple outputs for the same prompt and improve the model by learning which one is preferred. 👀 In real-world use cases, you often only have the option to collect one-dimensional feedback, e.g., whether a response was good or bad. This made applying techniques like DPO and RLHF to real-world data hard. 💬

KTO simplifies this by using binary feedback on individual outputs. 👍🏻👎🏻 This is less complex and can be used directly on real-world data. 🌍

How to use KTO:
1️⃣ Collect binary feedback data
2️⃣ Create a balanced (50/50) dataset of triplets (prompt, response, feedback)
3️⃣ Apply KTO to a fine-tuned LLM using the dataset from 2️⃣
4️⃣ Evaluate outputs from the KTO model vs. the old model using an LLM as a judge or humans

Insights:
👩🔬 Contextual AI ran 56 experiments using DPO, KTO, and PPO (RLHF)
🚀 KTO matches the performance of DPO & PPO
🦙 KTO showed better performance on bigger models (Llama 1)
💡 Simplifies data requirements compared to DPO & RLHF
⚖️ Needs a balanced binarized dataset (50% good, 50% bad examples)

Report: https://lnkd.in/eDZW6R5G
Github: https://lnkd.in/evzsj8uv

KTO will soon be available in Hugging Face TRL. 🤗 Huge kudos to Douwe Kiela and the team from Contextual AI for developing this new alignment method. 🚀
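Step 2️⃣ of the recipe above, building the balanced 50/50 dataset, can be sketched in a few lines. This is only one simple strategy (downsample the majority label); the triplet field layout and function name are illustrative, not from the KTO paper or TRL.

```python
import random

def balance_feedback(triplets, seed=0):
    """Downsample the majority label so the (prompt, response, feedback)
    triplets end up 50% thumbs-up / 50% thumbs-down, as KTO expects.

    `feedback` is a bool in position 2 of each triplet (True = good)."""
    good = [t for t in triplets if t[2]]
    bad = [t for t in triplets if not t[2]]
    n = min(len(good), len(bad))          # size of the smaller class
    rng = random.Random(seed)             # seeded for reproducibility
    balanced = rng.sample(good, n) + rng.sample(bad, n)
    rng.shuffle(balanced)
    return balanced

# Three positive ratings, one negative -> balanced set keeps one of each:
data = [("p1", "r1", True), ("p2", "r2", True),
        ("p3", "r3", True), ("p4", "r4", False)]
print(len(balance_feedback(data)))  # 2
```

Downsampling throws data away; with very skewed real-world feedback you might instead weight the loss per class, but a hard 50/50 split is the simplest way to meet the balance requirement noted in the insights.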
-
Most feedback doesn't fail because it's harsh. It fails because it's vague.

Over the years, I've seen the same pattern again and again. Leaders think they're giving feedback. Teams think they're being criticized. And in reality, behavior doesn't change.

Why? Because instead of structure, there's emotion. Instead of clarity, "you know what I meant." Instead of growth, a defensive reaction.

At some point, I realized a simple truth: feedback isn't a conversation. It's a tool. And like any tool, it only works if:
— you know when to use it
— which one to choose
— and why you're using it

That's why I put together 5 feedback frameworks that strong leaders actually use, not in theory but in real teams:
— when behavior needs correction
— when strengths need reinforcement
— when a team is stuck
— when someone is blocked in their growth
— when truth matters more than comfort

👉 All 5 are broken down in the infographic below.

💬 Which one do you use most, consciously or not?

Natan Mohart