Customer Feedback Management Systems

Explore top LinkedIn content from expert professionals.

  • View profile for Luke Yun

    Founder @ Decisive Machines | AI Researcher @ Harvard Medical School

    33,108 followers

    Stanford researchers just introduced a new way to optimize AI models using text-based feedback instead of traditional backpropagation!

    Deep learning has long relied on numerical gradients to fine-tune neural networks. But optimizing generative AI systems has been much harder because they interact using natural language, not numbers. 𝗧𝗲𝘅𝘁𝗚𝗿𝗮𝗱 𝗶𝘀 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝘁𝗼 𝗯𝗮𝗰𝗸𝗽𝗿𝗼𝗽𝗮𝗴𝗮𝘁𝗲 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗺𝗼𝗱𝗲𝗹 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸, 𝗲𝗻𝗮𝗯𝗹𝗶𝗻𝗴 𝗔𝗜 𝘁𝗼 𝗶𝘁𝗲𝗿𝗮𝘁𝗶𝘃𝗲𝗹𝘆 𝗿𝗲𝗳𝗶𝗻𝗲 𝗶𝘁𝘀 𝗼𝘂𝘁𝗽𝘂𝘁𝘀 𝗮𝗰𝗿𝗼𝘀𝘀 𝗱𝗶𝘃𝗲𝗿𝘀𝗲 𝘁𝗮𝘀𝗸𝘀.

    1. Improved AI performance on PhD-level science Q&A, raising accuracy from 51.0% to 55.0% on GPQA and from 91.2% to 95.1% on MMLU physics.
    2. Optimized medical treatment plans, outperforming human-designed radiotherapy plans by better balancing tumor targeting and organ protection.
    3. Enhanced AI-driven drug discovery by iteratively refining molecular structures, generating high-affinity compounds faster than traditional methods.
    4. Boosted complex AI agents like Chameleon, increasing multimodal reasoning accuracy by 7.7% through iterative feedback refinement.

    𝗧𝗵𝗲 𝘂𝘀𝗲 𝗼𝗳 "𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗴𝗿𝗮𝗱𝗶𝗲𝗻𝘁𝘀" 𝗶𝗻𝘀𝘁𝗲𝗮𝗱 𝗼𝗳 𝗻𝘂𝗺𝗲𝗿𝗶𝗰𝗮𝗹 𝗴𝗿𝗮𝗱𝗶𝗲𝗻𝘁𝘀 𝗶𝘀 𝗽𝗿𝗲𝘁𝘁𝘆 𝗱𝗮𝗿𝗻 𝗰𝗼𝗼𝗹. The framework treats LLM feedback as “textual gradients,” which are collected from every use of a variable in the system. By aggregating critiques from different contexts and iteratively updating variables (using a process analogous to numerical gradient descent), the method smooths out individual inconsistencies.

    𝗜'𝗺 𝗰𝘂𝗿𝗶𝗼𝘂𝘀 𝗮𝗯𝗼𝘂𝘁 𝗵𝗼𝘄 𝗳𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗶𝗻𝗴 𝗺𝗲𝘁𝗵𝗼𝗱𝘀 𝘁𝗼 𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲 𝗮𝗻𝗱 𝗰𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻 𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗴𝗿𝗮𝗱𝗶𝗲𝗻𝘁𝘀, beyond formalizing the propagation and update process via equations, could enhance robustness. Perhaps training secondary models to evaluate the quality and consistency of textual gradients, or an ensemble approach that generates multiple textual gradients using different LLMs or multiple prompts? Just throwing some ideas out there; this stuff is pretty cool.

    Here's the awesome work: https://lnkd.in/gX8ABsdM

    Congrats to Mert Yuksekgonul, Federico Bianchi, Joseph Boen, James Zou, and co!

    I post my takes on the latest developments in health AI – 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘄𝗶𝘁𝗵 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
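
A minimal sketch of that textual-gradient loop in plain Python, to make the mechanics concrete. The `llm` helper is a hypothetical stand-in for any chat-completion client and the prompts are illustrative; the actual TextGrad package wraps this same pattern in a PyTorch-style autograd API.

```python
# Sketch of TextGrad-style optimization: critiques ("textual gradients")
# are collected from every use of a variable, aggregated, and applied as
# an update -- analogous to a numerical gradient-descent step.

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def textual_gradient(variable: str, context: str, output: str) -> str:
    # "Backward pass": ask a critic model how the variable should change,
    # given one specific use (context) and the output it produced there.
    return llm(
        f"Variable under optimization (a prompt):\n{variable}\n\n"
        f"Context it was used in:\n{context}\n\n"
        f"Output it produced:\n{output}\n\n"
        "Give concise, actionable criticism of the variable."
    )

def apply_gradients(variable: str, critiques: list[str]) -> str:
    # "Update step": aggregate critiques from all contexts, then rewrite.
    joined = "\n- ".join(critiques)
    return llm(
        f"Current variable:\n{variable}\n\n"
        f"Aggregated feedback from all uses:\n- {joined}\n\n"
        "Rewrite the variable to address the feedback. Return only the new text."
    )

def optimize(variable: str, tasks: list[str], steps: int = 3) -> str:
    for _ in range(steps):
        # Forward pass: use the variable on every task; collect one critique each.
        critiques = [
            textual_gradient(variable, task, llm(f"{variable}\n\nTask: {task}"))
            for task in tasks
        ]
        # Aggregating across contexts smooths out one-off, inconsistent critiques.
        variable = apply_gradients(variable, critiques)
    return variable
```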

  • View profile for Oliver Aust

    Follow to become a top 1% communicator | Founder of Speak Like a CEO Academy | Bestselling 4x Author | Host of Speak Like a CEO podcast | I help leaders communicate with clarity, confidence and impact when it matters

    130,136 followers

    Leaders: Stop winging feedback. Use frameworks that drive growth.

    Giving feedback isn’t easy - but winged feedback often leads nowhere. Without structure, your words might confuse, demotivate, or even disengage your team.

    Here are 4 feedback frameworks that create clarity, build trust, and drive growth (and 1 to avoid):

    1) 3Cs: Celebrations, Challenges, Commitments 🏅
    → Celebrate what’s working well.
    → Address challenges with honesty.
    → End with commitments for improvement.

    2) Situation-Behavior-Impact (SBI) 💡
    → Describe *specific* situations.
    → Focus on observed behavior.
    → Explain its impact on team or goals.

    3) Radical Candor 🗣️
    → Care personally while challenging directly.
    → Show empathy but stay honest.

    4) GROW Model: Goal, Reality, Options, Will ⬆️
    → Set goals for feedback.
    → Discuss current reality.
    → Explore options for growth.
    → Commit together on action steps.

    ❌ 5) DO NOT USE: Feedback Sandwich ❌
    → Start with something positive.
    → Address areas needing growth.
    → Close with another positive.
    ‼️ This outdated model tends to backfire as people feel manipulated.

    Structured feedback isn’t just about improving performance. It builds trust, fosters open communication, and creates an environment for continuous learning.

    ❓ Which framework do you use to give feedback?

    ♻ Share this post to help your network become top 1% communicators.
    📌 Follow me Oliver Aust for more leadership insights.

  • View profile for Mike Rizzo

    Certifying the future of GTM professionals. Community-led Founder & CEO @ MarketingOps.com and MO Pros® - where 4,000+ Marketing Operations, GTM Ops, and Revenue Ops professionals architect revenue growth.

    19,753 followers

    What’s the biggest challenge you face in reporting? Drop a comment below.

    There's a better way to build your reports and dashboards... Your stakeholders are going to love you for it. Treat reporting like a product and ensure you get feedback on it too! More on the subject of this post below.

    Most MOps teams struggle with reporting—ad-hoc requests, inconsistent metrics, and dashboards that don’t align with business priorities. A structured reporting framework solves this by bringing clarity, alignment, and actionability to marketing operations.

    Why does a reporting framework matter? Because without a reporting framework, teams waste time tracking irrelevant metrics or presenting data that doesn’t resonate with decision-makers.

    Here's what Michael Hartmann helped pull together in the B2B Marketing Metrics Framework:
    ∙ Operational Metrics – Tracks system health, data flow, and process efficiency.
    ∙ Campaign Metrics – Measures the performance of specific initiatives.
    ∙ Funnel Metrics – Shows how buyers move through the journey.
    ∙ Contribution Metrics – Ties marketing efforts to business outcomes.
    ∙ Narrative Metrics – Helps non-marketers understand data with storytelling.

    If you're looking to understand the HOW of this type of framework, start here:
    ∙ Understand Stakeholder Needs – Identify which metrics matter to different teams.
    ∙ Map Data Sources – Ensure integration between CRMs, automation tools, and BI platforms.
    ∙ Standardize Metrics – Define KPIs across departments to eliminate confusion.
    ∙ Design for Action – Focus dashboards on insights and next steps.
    ∙ Implement Feedback Loops – Adjust reports based on stakeholder input.

    Also, here are some common mistakes you should avoid:
    ∙ Overloading reports – More data isn’t always better. Keep it relevant.
    ∙ Ignoring stakeholder alignment – What sales needs is different from what executives need.
    ∙ Neglecting data hygiene – Bad data leads to bad decisions.

    A strong reporting framework turns marketing ops from a tactical function into a strategic enabler of business growth.
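
As a rough illustration of "standardize metrics" in practice (not part of Hartmann's framework itself), the five layers can live in one shared configuration that every dashboard and review pulls from; the KPI names and audiences below are placeholders:

```python
# Illustrative encoding of the five metric layers as shared configuration.
# KPIs and audiences are placeholders -- define your own with stakeholders.
REPORTING_FRAMEWORK = {
    "operational":  {"audience": "mops",       "kpis": ["sync error rate", "lead processing time"]},
    "campaign":     {"audience": "marketing",  "kpis": ["CTR", "cost per MQL"]},
    "funnel":       {"audience": "sales",      "kpis": ["stage conversion rate", "funnel velocity"]},
    "contribution": {"audience": "executives", "kpis": ["sourced pipeline", "influenced revenue"]},
    "narrative":    {"audience": "company",    "kpis": ["quarter-over-quarter story metrics"]},
}

def kpis_for(audience: str) -> list[str]:
    """Stakeholder-first lookup: start from who is asking, not what we track."""
    return [kpi for layer in REPORTING_FRAMEWORK.values()
            if layer["audience"] == audience
            for kpi in layer["kpis"]]

print(kpis_for("executives"))  # ['sourced pipeline', 'influenced revenue']
```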

  • 𝗗𝗶𝗱 𝘆𝗼𝘂 𝗽𝗹𝗮𝗰𝗲 𝗮 "𝗖𝗮𝗻𝗮𝗿𝘆" 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗣𝗿𝗼𝗰𝗲𝘀𝘀 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻? The sort of early warning detection system that monitors your automated processes and sings when irregularities occur?

    Why a 🐤 𝗰𝗮𝗻𝗮𝗿𝘆, you ask? Around 1911, miners started to take canary birds into coal mines to detect the accumulation of toxic gases. These birds would sense even the smallest traces and emissions, chirping erratically and thereby giving miners early warning to immediately evacuate the mine.

    Just as the canaries once did in the mines, 𝗮 𝗱𝗶𝗴𝗶𝘁𝗮𝗹 "𝗰𝗮𝗻𝗮𝗿𝘆" can play a vital role in monitoring the health of your automated workflows, signalling potential issues before they escalate and, perhaps, cause harm at scale.

    But how do you implement a digital canary into your process automation? 𝗜𝗻𝗰𝗼𝗿𝗽𝗼𝗿𝗮𝘁𝗲 𝗶𝘁 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝘀𝘁𝗮𝗿𝘁: build it into your design, using code, reconciliation reports, and validation rules to establish effective in-process control checks and monitoring mechanisms, plus visual dashboards to analyse red flags.

    Here are 5 examples of how to get early alerts in your process automation, even if your automation bots don't know how to sing:
    ▪️𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴 𝗖𝗵𝗲𝗰𝗸𝘀: Implement automated checks at various stages of the process to ensure accuracy and completeness and to catch volume variations.
    ▪️𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 𝗘𝗿𝗿𝗼𝗿 𝗩𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻: Monitor integration and break points such as APIs for errors or failures to maintain seamless data flow across systems.
    ▪️𝗗𝗮𝘁𝗮 𝗜𝗻𝘁𝗲𝗴𝗿𝗶𝘁𝘆 𝗦𝗰𝗮𝗻𝘀: Validate for duplicate records or inconsistencies to maintain data integrity and remove manual overrides or corrections.
    ▪️𝗨𝘀𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸: Analyse insights from user feedback to check for usability issues and recurring problems, and detect sentiment drops with NLP / AI.
    ▪️𝗖𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲 𝗖𝗼𝗰𝗸𝗽𝗶𝘁: Create a centralised dashboard to monitor compliance metrics, detect red flags, and catch deviations from policies.

    By integrating digital canaries into your process automation strategy, you not only enhance your ability to detect and respond to issues rapidly but also promote a culture of self-monitoring and continuous improvement.

    So, did you already place a digital "canary" in your process design and automations? If not, maybe it's time to consider adding this early warning system to your automation approach, ensuring the health and resilience of your tasks, data & process performance.

    What early warning systems have worked best for you?

    #processautomation #intelligentautomation #rpa #processexcellence
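
A minimal sketch of one such canary in Python: a processing check that compares today's volume against the recent baseline and "chirps" on a statistical outlier. The `send_alert` hook and the threshold are illustrative placeholders; wire them to your Slack channel, email, or dashboard.

```python
# Volume-variance canary: alert when today's processed record count
# drifts too far from the recent baseline (a simple z-score test).
from statistics import mean, stdev

def send_alert(message: str) -> None:
    """Hypothetical notifier; replace with your alerting integration."""
    print(f"CANARY ALERT: {message}")

def volume_canary(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Return True (and alert) if today's volume is an outlier vs. history."""
    if len(history) < 5:
        return False  # not enough baseline yet to judge
    mu, sigma = mean(history), stdev(history)
    drifted = (today != mu) if sigma == 0 else abs(today - mu) / sigma > z_threshold
    if drifted:
        send_alert(f"Processed volume {today} deviates from baseline {mu:.0f} +/- {sigma:.0f}")
    return drifted

# Example: a steady ~1,000 records/day, then a sudden drop worth investigating.
volume_canary(history=[980, 1010, 995, 1023, 1001, 990], today=240)
```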

  • A lot of teams building AI agents and chatbots start with an assumption like "improving production quality" == "review customer feedback". NVIDIA just published a new paper that shows why this approach alone is flawed.

    The paper is titled "Adaptive Data Flywheel: Applying MAPE Control Loops to AI Agent Improvement." I love the premise, and the data flywheel model is 100% right. 🙌 Link to the paper in the comments.

    But the paper also points to why a much more robust monitoring loop is needed for production systems:

    * "The evaluation of 495 feedback samples showed that routing errors and rephrasal errors combined to make up only 8.45% of total failure cases" (a small minority of error cases!)
    * "These failure points were identified through analysis of 495 negative feedback samples collected over 3 months" (that's a long time!)
    * "The system received feedback from 495 employees out of 30,000 users which shows difficulties in obtaining large-scale feedback data. The insufficient number of participants in the study creates sampling bias which reduces the generalizability of the obtained results." (small sample size!)

    The paper concludes that more is needed: Automated Error Attribution and Continuous Learning. This is where it helps to use LLM judges and categorization techniques on log samples, and why we've been so bullish on that technique. They give a lot more insight into what's happening and make it easy to filter down to find errors, even when customers leave no feedback. And they provide great signal for continuous improvement.
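
A minimal sketch of what automated error attribution with an LLM judge can look like, assuming a hypothetical `llm_judge` call and an illustrative error taxonomy (the category names are examples, not the paper's):

```python
# Sample production logs, classify each session into an error category
# with an LLM judge, and aggregate -- surfacing failure modes even when
# customers leave no explicit feedback.
from collections import Counter

CATEGORIES = ["routing_error", "rephrasal_error", "retrieval_miss",
              "hallucination", "no_error"]

def llm_judge(conversation: str) -> str:
    """Hypothetical judge call; must return one label from CATEGORIES."""
    raise NotImplementedError

def attribute_errors(log_samples: list[str]) -> Counter:
    labels: Counter = Counter()
    for convo in log_samples:
        label = llm_judge(convo)
        labels[label if label in CATEGORIES else "unparsed"] += 1
    return labels

# Usage idea: run nightly over a random sample of logged sessions and
# trend each category; spikes feed the Analyze/Plan stages of a MAPE loop.
```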

  • View profile for Karen Kim

    CEO @ Human Managed, the AI Service Platform for Cyber, Risk, and Digital Ops.

    5,892 followers

    User Feedback Loops: the missing piece in AI success?

    AI is only as good as the data it learns from -- but what happens after deployment? Many businesses focus on building AI products but miss a critical step: ensuring their outputs continue to improve with real-world use. Without a structured feedback loop, AI risks stagnating, delivering outdated insights, or losing relevance quickly.

    Instead of treating AI as a one-and-done solution, companies need workflows that continuously refine and adapt based on actual usage. That means capturing how users interact with AI outputs, where it succeeds, and where it fails.

    At Human Managed, we’ve embedded real-time feedback loops into our products, allowing customers to rate and review AI-generated intelligence. Users can flag insights as:
    🔘 Irrelevant
    🔘 Inaccurate
    🔘 Not Useful
    🔘 Others

    Every input is fed back into our system to fine-tune recommendations, improve accuracy, and enhance relevance over time. This is more than a quality check -- it’s a competitive advantage.

    - for CEOs & Product Leaders: AI-powered services that evolve with user behavior create stickier, high-retention experiences.
    - for Data Leaders: Dynamic feedback loops ensure AI systems stay aligned with shifting business realities.
    - for Cybersecurity & Compliance Teams: User validation enhances AI-driven threat detection, reducing false positives and improving response accuracy.

    An AI model that never learns from its users is already outdated. The best AI isn’t just trained -- it continuously evolves.
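
A minimal sketch of how such flag events can be captured and aggregated so low-rated insight types get routed back into tuning; the field names and insight types are illustrative, not Human Managed's actual schema:

```python
# Structured feedback loop: each rating on an AI-generated insight is a
# typed event; aggregating flags per insight type yields the tuning signal.
from collections import defaultdict
from dataclasses import dataclass

FLAGS = {"irrelevant", "inaccurate", "not_useful", "others"}

@dataclass
class FeedbackEvent:
    insight_id: str
    insight_type: str  # e.g. "threat_detection", "risk_summary" (illustrative)
    flag: str          # one of FLAGS

def flag_rates(events: list[FeedbackEvent]) -> dict[str, dict[str, int]]:
    """Count flags per insight type -- the signal used to refine the models."""
    counts: defaultdict = defaultdict(lambda: defaultdict(int))
    for e in events:
        if e.flag in FLAGS:
            counts[e.insight_type][e.flag] += 1
    return {k: dict(v) for k, v in counts.items()}

events = [FeedbackEvent("i1", "threat_detection", "inaccurate"),
          FeedbackEvent("i2", "threat_detection", "inaccurate"),
          FeedbackEvent("i3", "risk_summary", "not_useful")]
print(flag_rates(events))
# {'threat_detection': {'inaccurate': 2}, 'risk_summary': {'not_useful': 1}}
```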

  • View profile for Nathan Lambert

    Building Open Language Models @ Allen Institute for AI

    33,975 followers

    First draft online version of The RLHF Book is DONE. Recently I've been creating the advanced discussion chapters on everything from Constitutional AI to evaluation and character training, but I also sneak in consistent improvements to the RL-specific chapter. https://rlhfbook.com/

    RLHF has a long future ahead of it, and this will do a lot to make it more accessible to the next generation.

    What's next: getting a physical copy in your hands (may not be exactly 1-to-1, we'll see) and minor fixes at a slower cadence (thanks to many GitHub contributors; some of you will get a copy from me).

    Here are all the chapters:
    1. Introduction: Overview of RLHF and what this book provides.
    2. Seminal (Recent) Works: Key models and papers in the history of RLHF techniques.
    3. Definitions: Mathematical definitions for RL, language modeling, and other ML techniques leveraged in this book.
    4. RLHF Training Overview: How the training objective for RLHF is designed and basics of understanding it.
    5. What are preferences?: Why human preference data is needed to fuel and understand RLHF.
    6. Preference Data: How preference data is collected for RLHF.
    7. Reward Modeling: Training reward models from preference data that act as an optimization target for RL training (or for use in data filtering).
    8. Regularization: Tools to constrain these optimization tools to effective regions of the parameter space.
    9. Instruction Tuning: Adapting language models to the question-answer format.
    10. Rejection Sampling: A basic technique for using a reward model with instruction tuning to align models.
    11. Policy Gradients: The core RL techniques used to optimize reward models (and other signals) throughout RLHF.
    12. Direct Alignment Algorithms: Algorithms that optimize the RLHF objective directly from pairwise preference data rather than learning a reward model first.
    13. Constitutional AI and AI Feedback: How AI feedback data and specific models designed to simulate human preference ratings work.
    14. Reasoning and Reinforcement Finetuning: The role of new RL training methods for inference-time scaling with respect to post-training and RLHF.
    15. Synthetic Data: The shift away from human to synthetic data and how distilling from other models is used.
    16. Evaluation: The ever-evolving role of evaluation (and prompting) in language models.
    17. Over-optimization: Qualitative observations of why RLHF goes wrong and why over-optimization is inevitable with a soft optimization target in reward models.
    18. Style and Information: How RLHF is often underestimated in its role in improving the user experience of models due to the crucial role that style plays in information sharing.
    19. Product, UX, Character: How RLHF is shifting in its applicability as major AI laboratories use it to subtly match their models to their products.
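
For readers new to the field, the optimization target that chapters 4, 8, and 11 build toward is the KL-regularized RLHF objective, shown here in its canonical form (the book's exact notation may differ):

```latex
% Maximize the learned reward r_phi while penalizing divergence from the
% reference policy pi_ref; beta trades reward against regularization.
\max_{\pi_\theta} \;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\left[ r_\phi(x, y) \right]
\;-\; \beta \, \mathbb{D}_{\mathrm{KL}}\!\left[ \pi_\theta(y \mid x) \,\Vert\, \pi_{\mathrm{ref}}(y \mid x) \right]
```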

  • View profile for Brandon Redlinger

    Fractional VP of Marketing for B2B SaaS + AI | Get weekly AI tips, tricks & secrets for marketers at stackandscale.ai (subscribe for free).

    30,584 followers

    I built a Clay workflow to monitor brand mentions across social media and turn them into GTM signals my team can act on.

    Most teams either ignore social chatter, drown in it, or react when their investor sends them something they saw. None of these work. They lack a strategy and a system to enable social listening.

    My Clay workbook listens across Reddit, LinkedIn, and Twitter/X, analyzes sentiment, summarizes the context, and drops a clean signal straight into Slack.

    Here’s how the workflow works:
    – Pull brand mentions from Reddit, LinkedIn company mentions, and Twitter/X keywords
    – Visit the source URL to extract the actual post text
    – Analyze sentiment and assign a score so you know if it is positive, neutral, or risky
    – Generate a short summary instead of dumping raw text
    – Send everything into a dedicated Slack channel in near real time

    What I love most about this is how many use cases it unlocked. If sentiment is positive, you get instant feedback on what messaging resonates. If sentiment is negative, you catch brand risk early before it spreads. If buyers are talking about a problem you solve, you spot pipeline signals hiding in public conversations.

    And because this lives in Clay, you control everything: keywords, sources, frequency, models, and even costs. This replaces expensive social listening tools and gives GTM teams something better... a living feedback loop tied to action.

    If you want the full walkthrough and Clay template, it's in this week's Stack & Scale episode. Or comment "Clay workflow" AND connect with me (I've run out of InMail already), and I'll send the resources directly.

    Happy (social) listening!
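
For teams who want the same loop outside Clay, here is a minimal sketch in Python. `fetch_mentions` and `llm` are hypothetical placeholders (each platform needs its own API client) and the webhook URL is a dummy; Slack incoming webhooks do accept a JSON POST with a "text" field as shown.

```python
# Listen -> analyze -> alert: pull mentions, score sentiment, summarize,
# and drop a clean signal into a dedicated Slack channel.
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder URL

def fetch_mentions(keyword: str) -> list[dict]:
    """Hypothetical: return [{'url': ..., 'text': ...}, ...] per platform."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError

def process_mention(mention: dict) -> dict:
    # Mirror the workflow's analyze/summarize steps.
    sentiment = llm(f"Classify sentiment as positive/neutral/negative:\n{mention['text']}")
    summary = llm(f"Summarize this post in one sentence:\n{mention['text']}")
    return {"url": mention["url"], "sentiment": sentiment.strip().lower(),
            "summary": summary.strip()}

def post_to_slack(signal: dict) -> None:
    payload = json.dumps({"text": f"[{signal['sentiment']}] {signal['summary']}\n{signal['url']}"})
    req = urllib.request.Request(SLACK_WEBHOOK, data=payload.encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    # Fill in the placeholders above before running.
    for mention in fetch_mentions("your-brand"):
        post_to_slack(process_mention(mention))
```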

  • View profile for Jonathan Shroyer

    Gaming at iQor | Foresite Inventor | 3X Exit Founder, 20X Investor Return | Keynote Speaker, 100+ stages

    22,075 followers

    Most teams drown in feedback and starve for insight. I’ve felt that pain across CX, SaaS, retail—and especially in gaming, where Discord, reviews, and LiveOps telemetry never sleep. The unlock wasn’t “more data.” It was AI turning feedback → insight → action in hours, not weeks.

    Here’s what changed for me:

    Ingest everything, once. Tickets, app reviews, Discord threads, calls, streams—normalized and de-duplicated with PII handled by default.

    Enrich automatically. LLMs tag topics, intent, and aspect-level sentiment (what players love/hate about this feature in this build).

    Act where work happens. Copilots draft Jira issues with evidence, propose fixes, and close the loop with customers—human-in-the-loop for quality.

    Measure what matters. Not just CSAT. In gaming: retention, ARPDAU, event participation. In other industries: conversion, refund rate, cost-to-serve.

    Gaming example: a balance tweak drops; AI cross-references sentiment from Spanish/Portuguese Discord channels with session logs and flags a difficulty spike for new players on Android. Product gets a one-pager with root cause, repro steps, and a recommended hotfix—before social blows up. That’s the difference between a rocky patch and a win.

    This isn’t just for studios. Healthcare, fintech, DTC, SaaS—same playbook, different telemetry.

    I put my approach into a 2025 AI Feedback Playbook: architecture, workflows, guardrails, and a 30/60/90 rollout you can start tomorrow. If you lead Product, CX, Support, or LiveOps, it’s built for you.

    👉 I’d love your take—what’s the hardest part of your feedback loop right now? Link in comments. 💬

    #AI #CustomerExperience #Gaming #LiveOps #ProductManagement #VoiceOfCustomer #LLM #Leadership #CXOps
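
A minimal sketch of the "enrich" and "act" steps under stated assumptions: `llm_json` is a hypothetical LLM call that returns parsed JSON, the tag schema is illustrative, and drafts go to a human reviewer rather than straight into Jira.

```python
# Enrich: tag topic, intent, and aspect-level sentiment for one feedback
# item. Act: turn negative aspects into a tracker-ready draft with evidence.

def llm_json(prompt: str) -> dict:
    """Hypothetical LLM call returning parsed JSON; replace with your client."""
    raise NotImplementedError

def enrich(feedback_text: str) -> dict:
    return llm_json(
        'Tag this feedback. Return JSON with keys "topic", "intent", and '
        '"aspects" (a list of {"aspect": ..., "sentiment": ...}).\n\n'
        + feedback_text
    )

def draft_issue(tags: dict, evidence: list[str]) -> dict:
    # Human-in-the-loop: this draft is proposed, not auto-filed.
    negatives = [a["aspect"] for a in tags["aspects"] if a["sentiment"] == "negative"]
    return {
        "title": f"[feedback] {tags['topic']}: negative on {', '.join(negatives) or 'n/a'}",
        "body": "Evidence:\n" + "\n".join(f"- {line}" for line in evidence),
        "labels": ["voice-of-customer", tags["intent"]],
    }
```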

  • View profile for Peter Kang

    Acquiring & growing specialized agencies ($500k-$1.5M EBITDA), Co-founder of Barrel Holdings, Author of The Holdco Guide

    14,020 followers

    A loyal, multi-year client ends a retainer with barely a goodbye email. Projects hit deadlines, budgets held, and yet the relationship still slipped away...

    In agency land, client churn rarely arrives as a dramatic flare-up. More often it is a quiet drift: Slack threads go cold, the next-quarter brief never shows, and the renewal line stays blank. The danger is that it feels painless until you add up the lost lifetime value, the scramble to backfill revenue, and the referrals that were never even requested.

    Silent churn hides in the gap between delivery and relationship management. Whenever “no news” is mistaken for “all good,” the countdown has already started.

    Let's apply a systems approach as we would across our Barrel Holdings agencies.

    The silent-churn autopsy:
    - No quarterly business reviews (QBRs) or formal check-ins
    - Value delivered wasn’t documented or celebrated
    - Leadership lacked a dashboard for account health
    - Post-project follow-ups never happened
    - Referral and expansion opportunities quietly died on the vine

    1. Map the breakdown:
    - Missing QBR rhythm, feedback loops, health scorecards
    - No early-warning indicators or escalation paths
    - No structured post-delivery cadence to drive referrals

    2. Re-ground the team in core fundamentals:
    - Communicate exceptionally: relationships need rituals
    - Surface value: delivered work must be made visible
    - Define “healthy” clearly: simple, shared success metrics
    - Learn fast: lost clients become internal case studies, not mysteries

    3. Fix the operational gaps:
    - Launch quarterly client feedback surveys (explore NPS + open prompts)
    - Add project debriefs/AARs as a mandatory close-out step
    - Assign strategic sponsors to top-tier accounts and track health scores in a live dashboard
    - Standardize a QBR template: goals, wins, upcoming risks, growth ideas

    4. Reinforce with structure, rhythm, visibility, incentives, feedback:
    - Every key account has an owner responsible for retention insights
    - QBRs and health-score reviews run every quarter, no skips
    - Account dashboards shared in weekly leadership meetings
    - Retention metrics baked into performance reviews and shout-outs
    - Client survey results drive immediate tweaks to delivery SOPs

    5. Watch the ripple effects:
    - AMs may need coaching to lead strategic conversations
    - PMs tie delivery metrics to client value, not just deadlines
    - Strong retention fuels referrals and upsells, compounding growth

    Success looks like:
    - 100% of top-tier clients receive a QBR every quarter
    - Live health scores flag at-risk accounts before contracts lapse (see the sketch after this post)
    - Churn rate drops, referral revenue climbs
    - Relationship health becomes a line item in every leadership review

    Silent churn ends when relationship stewardship is systemized, not left to chance.

    ==

    🟢 Find this useful? Subscribe to AgencyHabits for weekly systems-thinking insights. The full Agency Systems Playbook drops in May—subscribers get first access.
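
A minimal sketch of such a live health score, using the signals this post names (QBR cadence, survey results, engagement); the weights and thresholds are illustrative, not a Barrel Holdings standard:

```python
# Roll weighted account signals into one score; flag at-risk accounts early.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    days_since_last_qbr: int
    nps: int                      # latest survey score, -100..100
    days_since_last_message: int  # Slack/email engagement

def health_score(s: AccountSignals) -> float:
    score = 100.0
    if s.days_since_last_qbr > 90:       # skipped a quarterly review
        score -= 30
    if s.nps < 0:                        # detractor territory
        score -= 25
    if s.days_since_last_message > 21:   # threads going cold
        score -= 20
    return max(score, 0.0)

def at_risk(s: AccountSignals, threshold: float = 60.0) -> bool:
    return health_score(s) < threshold

print(at_risk(AccountSignals(days_since_last_qbr=120, nps=10,
                             days_since_last_message=30)))  # True -> review now
```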
