AI handled 75% of customer chats at Klarna… and they still brought humans back. Why? Because speed isn’t the same as quality. Because customers noticed the difference, and it wasn’t good.

Speed? Great. Empathy? Missing. Trust? Slipping.

After a year of leaning heavily on AI, they’re rehiring human support agents. Real people. Not because AI failed, but because it wasn’t enough. AI can answer your question. But only a human can make you feel heard.

Klarna is now hiring in rural areas and among student communities—betting on empathy, not just efficiency. This should be a wake-up call. You can automate tasks. But relationships? They still need people.

This is why the future isn’t human vs AI. It’s human with AI. And the companies that get that balance right? They’ll win customer loyalty, and talent, faster than any chatbot ever could.
Using AI in Customer Support
-
This Stanford study examined how six major AI companies (Anthropic, OpenAI, Google, Meta, Microsoft, and Amazon) handle user data from chatbot conversations. Here are the main privacy concerns:

👀 All six companies use chat data for training by default, though some allow opt-out
👀 Data retention is often indefinite, with personal information stored long-term
👀 Cross-platform data merging occurs at multi-product companies (Google, Meta, Microsoft, Amazon)
👀 Children's data is handled inconsistently, with most companies not adequately protecting minors
👀 Limited transparency in privacy policies, which are complex, hard to understand, and often missing crucial details about actual practices

Practical takeaways for acceptable use policies and training for nonprofits using generative AI:

✅ Assume anything you share will be used for training: sensitive information, uploaded files, health details, biometric data, etc.
✅ Opt out when possible: proactively disable data collection for training (Meta is the one provider where you cannot)
✅ Information cascades through ecosystems: your inputs can lead to inferences that affect ads, recommendations, and potentially insurance or other third parties
✅ Children's data deserves special concern: age verification and consent protections are inconsistent

Some questions to consider in acceptable use policies and to incorporate into any training:

❓ What types of sensitive information might your nonprofit staff share with generative AI?
❓ Does your nonprofit specifically define what counts as “sensitive information” (beyond PII) that should not be shared with generative AI? Is this incorporated into training?
❓ Are you working with children, people with health conditions, or others whose data could be particularly harmful if leaked or misused?
❓ What would be the consequences if sensitive information or strategic organizational data ended up training AI models? How might this affect trust, compliance, or your mission?
How is this communicated in training and policy? Across the board, the Stanford research finds that developers’ privacy policies lack essential information about their practices. The researchers recommend that policymakers and developers address the data privacy challenges posed by LLM-powered chatbots through comprehensive federal privacy regulation, affirmative opt-in for model training, and filtering personal information from chat inputs by default. “We need to promote innovation in privacy-preserving AI, so that user privacy isn’t an afterthought.” How are you advocating for privacy-preserving AI? How are you educating your staff to navigate this challenge? https://lnkd.in/g3RmbEwD
-
AI in Customer Support isn’t new. But I’ve been rethinking how we actually use it. Our Customer Support team is moving past basic "faster replies" and learning to implement Claude as a core part of our workflow. The goal? Shifting from reactive firefighting to structured, scalable systems. It’s a work in progress, but here is the blueprint we’re using to turn Claude into a true CX reasoning engine:

1️⃣ It’s not about speed. It’s about structure.
Yes, you can draft replies faster. But the real value comes from setting it up properly:
→ align it with your tone and guidelines
→ connect it to your knowledge base
→ define clear boundaries (what it can and can’t say)
→ train it to understand context, not just keywords
That’s how you get consistent, reliable output across the team.

2️⃣ It helps move Support from reactive → proactive
Used well, it’s not just answering tickets. It’s helping you:
→ detect sentiment and urgency
→ identify recurring friction points
→ surface gaps in self-service
→ spot early churn signals
That’s where Support starts influencing the whole customer experience.

3️⃣ It fits into your existing workflows (it doesn’t replace them)
The most effective setups I’ve seen are simple:
→ Claude + Zendesk → ticket analysis
→ Claude + Zapier → automated workflows
→ Claude + Gong → call review
→ Claude + Intercom → inbox support
→ Claude + n8n → workflow automation
→ Claude + Notion → knowledge management
No complex rebuilds. Just better use of what you already have.

4️⃣ The quality of output = the quality of input
Small things make a big difference:
→ assign a role (support agent, CX lead, analyst)
→ provide context (customer, goal, constraints)
→ iterate with examples (good vs bad responses)
Without this, you get generic answers. With it, you get something your team can actually use.

From a leadership perspective, this isn’t about “adding AI.” It’s about designing how your Support team operates at scale. Because the goal isn’t to answer more tickets.
It’s to build a system where fewer things break, and when they do, the experience still feels consistent. If you’re already using AI in Support, what’s actually working for you? 👇
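The "role, context, boundaries" setup described above can be sketched as a small prompt-builder. This is a minimal illustration in Python; the function name, fields, and wording are hypothetical, and it only assembles the payload rather than calling any actual Claude API:

```python
# Sketch: give the model a role, tone, boundaries, and knowledge-base context
# instead of a bare question. All names and fields here are illustrative.

def build_support_prompt(ticket_text, customer_tier, kb_excerpts, tone="friendly, concise"):
    """Assemble a system prompt plus user message for a ticket-drafting call."""
    system = (
        f"You are a senior support agent. Tone: {tone}.\n"
        "Answer only from the knowledge-base excerpts below. "
        "If the answer is not covered, say so and escalate; never guess.\n\n"
        "Knowledge base:\n" + "\n".join(f"- {k}" for k in kb_excerpts)
    )
    user = f"Customer tier: {customer_tier}\nTicket:\n{ticket_text}"
    return {"system": system, "messages": [{"role": "user", "content": user}]}

payload = build_support_prompt(
    "My export keeps timing out.",
    customer_tier="enterprise",
    kb_excerpts=["Exports over 10k rows should use the async endpoint."],
)
print("escalate" in payload["system"])  # the boundary rule travels with every request
```

The point is that tone, boundaries, and context live in one reusable function, so every agent on the team sends the model the same structure.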
-
AI is getting more personal — and it’s changing how global brands connect with consumers. We’re entering an era where AI doesn’t just automate — it individualizes. From product design to marketing, personalization is becoming the new standard of brand experience.

🟣 Nike uses AI to tailor product recommendations and predict purchasing behavior through its SNKRS and Nike App platforms — driving a reported 40% increase in engagement.
🟢 Coca-Cola leveraged generative AI for its “Create Real Magic” campaign, allowing fans to co-create digital art and content, reaching over 2 billion impressions globally.
🔵 Starbucks uses its “Deep Brew” AI engine to personalize offers and store operations, contributing to a 10–15% lift in loyalty engagement.
🔴 Netflix attributes over 80% of viewership to AI-driven recommendations — proving how deeply personalization drives retention.

What’s changing is not just the technology, but the intent: AI is no longer about scaling efficiency — it’s about scaling empathy. The brands that lead this shift are turning data into connection, algorithms into experience, and scale into trust.

#ArtificialIntelligence #Personalization #BrandInnovation #MarketingAI #CustomerExperience #GenerativeAI via @tingle.ai #DigitalTransformation #Ai
-
𝐑𝐞𝐚𝐥 𝐨𝐩𝐩𝐨𝐫𝐭𝐮𝐧𝐢𝐭𝐲 𝐟𝐨𝐫 𝐀𝐈 𝐚𝐮𝐭𝐨𝐦𝐚𝐭𝐢𝐧𝐠 𝐞𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞 𝐰𝐨𝐫𝐤𝐟𝐥𝐨𝐰𝐬

I have been meeting with many enterprise CXOs and AI advisory firms about AI adoption over the last few months. Almost all of them start the same way:
1. Map the current workflows.
2. Identify the manual steps.
3. Find where people are spending time.
4. Layer AI on top to automate or accelerate the work.
This is the default playbook. And it is not wrong. It is the safe way to test and show quick results, and a great entry point for AI.

𝐄𝐱𝐚𝐦𝐩𝐥𝐞: 𝐂𝐮𝐬𝐭𝐨𝐦𝐞𝐫 𝐬𝐮𝐩𝐩𝐨𝐫𝐭 𝐰𝐨𝐫𝐤𝐟𝐥𝐨𝐰
1. Customer calls in.
2. L1 agent picks up and follows a script.
3. Cannot resolve. Escalates to L2. L2 reads the notes, asks the customer to repeat the problem, checks the knowledge base. Maybe escalates to L3.
4. Resolution happens 3 handoffs and 48 hours later.

Most enterprise AI deployments in customer support follow the same default playbook:
1. Automate L1 with a voicebot.
2. Give L2 AI-assisted responses.
3. Give L3 a copilot.
Same tiers, same structure, just faster and cheaper.

𝐖𝐡𝐲 𝐝𝐨 𝐭𝐡𝐞𝐬𝐞 𝐰𝐨𝐫𝐤𝐟𝐥𝐨𝐰𝐬 𝐞𝐱𝐢𝐬𝐭 𝐢𝐧 𝐭𝐡𝐞 𝐟𝐢𝐫𝐬𝐭 𝐩𝐥𝐚𝐜𝐞? Most processes were designed around human limitations — quality, consistency, onboarding, training, error containment.

𝑩𝒖𝒕 𝒘𝒐𝒓𝒌𝒇𝒍𝒐𝒘𝒔 𝒂𝒓𝒆 𝒏𝒐𝒕 𝒕𝒉𝒆 𝒈𝒐𝒂𝒍. 𝑻𝒉𝒆𝒚 𝒂𝒓𝒆 𝒂 𝒎𝒆𝒂𝒏𝒔 𝒕𝒐 𝒕𝒉𝒆 𝒈𝒐𝒂𝒍. The goal was never "route through 3 tiers." If AI can access the full knowledge base, understand context, and maintain quality — why not give the customer or a single agent an AI tool that resolves it directly? Three tiers collapse into one.

𝐓𝐡𝐞 𝐫𝐞𝐚𝐥 𝐨𝐩𝐩𝐨𝐫𝐭𝐮𝐧𝐢𝐭𝐲 is to return to the original objective and move from multi-step process to single-step outcome as confidence builds. This is also where the biggest opening exists for new AI startups — not workflow automation, but outcome-based automation.

𝐈𝐌𝐏𝐎𝐑𝐓𝐀𝐍𝐓: Before you automate your current workflows, ask why they exist. The enterprises that will get the biggest AI wins are the ones redesigning toward outcomes — not just making existing steps faster.
-
This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era", addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI.

The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both individual and societal levels. It notes that existing laws are inadequate for the emerging challenges posed by AI systems, because they neither fully tackle the shortcomings of the FIPs framework nor concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development.

According to the paper, FIPs are outdated and not well-suited for modern data and AI complexities, because:
- They do not address the power imbalance between data collectors and individuals.
- They fail to enforce data minimization and purpose limitation effectively.
- They place too much responsibility on individuals for privacy management.
- They allow data collection by default, putting the onus on individuals to opt out.
- They focus on procedural rather than substantive protections.
- They struggle with the concepts of consent and legitimate interest, complicating privacy management.

The paper emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. It suggests three key strategies to mitigate the privacy harms of AI:

1.) Denormalize data collection by default: Shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms.

2.) Focus on the AI data supply chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.

3.) Flip the script on personal data management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by making it easier to manage and control their personal data in the context of AI.

By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
-
𝟔𝟔% 𝐨𝐟 𝐀𝐈 𝐮𝐬𝐞𝐫𝐬 𝐬𝐚𝐲 𝐝𝐚𝐭𝐚 𝐩𝐫𝐢𝐯𝐚𝐜𝐲 𝐢𝐬 𝐭𝐡𝐞𝐢𝐫 𝐭𝐨𝐩 𝐜𝐨𝐧𝐜𝐞𝐫𝐧. What does that tell us? Trust isn’t just a feature - it’s the foundation of AI’s future.

When breaches happen, the cost isn’t measured in fines or headlines alone - it’s measured in lost trust. I recently spoke with a healthcare executive who shared a haunting story: after a data breach, patients stopped using their app - not because they didn’t need the service, but because they no longer felt safe.

𝐓𝐡𝐢𝐬 𝐢𝐬𝐧’𝐭 𝐣𝐮𝐬𝐭 𝐚𝐛𝐨𝐮𝐭 𝐝𝐚𝐭𝐚. 𝐈𝐭’𝐬 𝐚𝐛𝐨𝐮𝐭 𝐩𝐞𝐨𝐩𝐥𝐞’𝐬 𝐥𝐢𝐯𝐞𝐬 - 𝐭𝐫𝐮𝐬𝐭 𝐛𝐫𝐨𝐤𝐞𝐧, 𝐜𝐨𝐧𝐟𝐢𝐝𝐞𝐧𝐜𝐞 𝐬𝐡𝐚𝐭𝐭𝐞𝐫𝐞𝐝. Consider the October 2023 incident at 23andMe: unauthorized access exposed the genetic and personal information of 6.9 million users. Imagine seeing your most private data compromised.

At Deloitte, we’ve helped organizations turn privacy challenges into opportunities by embedding trust into their AI strategies. For example, we recently partnered with a global financial institution to design a privacy-by-design framework that not only met regulatory requirements but also restored customer confidence. The result? A 15% increase in customer engagement within six months.

𝐇𝐨𝐰 𝐜𝐚𝐧 𝐥𝐞𝐚𝐝𝐞𝐫𝐬 𝐫𝐞𝐛𝐮𝐢𝐥𝐝 𝐭𝐫𝐮𝐬𝐭 𝐰𝐡𝐞𝐧 𝐢𝐭’𝐬 𝐥𝐨𝐬𝐭?

✔️ 𝐓𝐮𝐫𝐧 𝐏𝐫𝐢𝐯𝐚𝐜𝐲 𝐢𝐧𝐭𝐨 𝐄𝐦𝐩𝐨𝐰𝐞𝐫𝐦𝐞𝐧𝐭: Privacy isn’t just about compliance. It’s about empowering customers to own their data. When people feel in control, they trust more.

✔️ 𝐏𝐫𝐨𝐚𝐜𝐭𝐢𝐯𝐞𝐥𝐲 𝐏𝐫𝐨𝐭𝐞𝐜𝐭 𝐏𝐫𝐢𝐯𝐚𝐜𝐲: AI can do more than process data: it can safeguard it. Predictive privacy models can spot risks before they become problems, demonstrating your commitment to trust and innovation.

✔️ 𝐋𝐞𝐚𝐝 𝐰𝐢𝐭𝐡 𝐄𝐭𝐡𝐢𝐜𝐬, 𝐍𝐨𝐭 𝐉𝐮𝐬𝐭 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞: Collaborate with peers, regulators, and even competitors to set new privacy standards. Customers notice when you lead the charge for their protection.

✔️ 𝐃𝐞𝐬𝐢𝐠𝐧 𝐟𝐨𝐫 𝐀𝐧𝐨𝐧𝐲𝐦𝐢𝐭𝐲: Techniques like differential privacy keep sensitive data safe while enabling innovation. Your customers shouldn’t have to trade their privacy for progress.
Trust is fragile, but it’s also resilient when leaders take responsibility. AI without trust isn’t just limited - it’s destined to fail. 𝐇𝐨𝐰 𝐰𝐨𝐮𝐥𝐝 𝐲𝐨𝐮 𝐫𝐞𝐠𝐚𝐢𝐧 𝐭𝐫𝐮𝐬𝐭 𝐢𝐧 𝐭𝐡𝐢𝐬 𝐬𝐢𝐭𝐮𝐚𝐭𝐢𝐨𝐧? 𝐋𝐞𝐭’𝐬 𝐬𝐡𝐚𝐫𝐞 𝐚𝐧𝐝 𝐢𝐧𝐬𝐩𝐢𝐫𝐞 𝐞𝐚𝐜𝐡 𝐨𝐭𝐡𝐞𝐫 👇 #AI #DataPrivacy #Leadership #CustomerTrust #Ethics
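For readers curious what "techniques like differential privacy" look like in practice, here is a minimal sketch of the Laplace mechanism, the textbook building block behind it. The epsilon value and example query are illustrative only; a production system should use a vetted DP library rather than hand-rolled noise:

```python
# Sketch: the Laplace mechanism. A query result gets calibrated random noise
# so no individual's presence in the data can be confidently inferred.
import math
import random

def private_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return true_count plus Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                       # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace variate.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon means more noise: stronger privacy, less accuracy.
print(private_count(1_000, epsilon=0.5))
```

The design trade-off is exactly the one the post describes: the noisy answer stays useful in aggregate while protecting any single customer's record.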
-
I was asked recently to compile an overview of the partnerships landscape in payments, and speed of onboarding was something I came back to a few times as I unpacked some of the biggest drivers of success. Speed of activating MIDs serves as a competitive differentiator for providers. Payments companies that can onboard merchants, platforms, and partners swiftly are emerging as winners in the market.

So which providers made my shortlist after speaking to merchants and partners over the past 12 months on this subject? 👇

1. Stripe 👉 Why? Stripe’s API-first approach, pre-built onboarding flows (e.g. Stripe Connect), and modular compliance tools allow platforms to onboard merchants in minutes.
2. Adyen 👉 Why? Adyen offers a unified payments platform that integrates KYC and onboarding into a single, global process, reducing friction for enterprise merchants.
3. PayPal (including Braintree & Hyperwallet) 👉 Why? PayPal’s legacy in fast consumer onboarding extends to merchants via Braintree and Hyperwallet, providing quick payouts and simplified compliance.
4. Checkout.com 👉 Why? Checkout.com has invested heavily in onboarding efficiency, leveraging AI and automation for faster KYC and compliance reviews.
5. Rapyd 👉 Why? Rapyd’s fintech-as-a-service model enables businesses to onboard merchants in multiple jurisdictions quickly, handling local compliance seamlessly.
6. Square 👉 Why? Square’s ecosystem is built for quick and easy merchant sign-ups, with minimal manual verification for lower-risk merchants.
7. Cashflows 👉 Why? Much like Square, Cashflows has built an automated boarding tool for low-risk merchants, allowing them to activate MIDs fast for their direct customer and partner network.
8. Nomupay 👉 Why? Newer entrants to the market such as Nomupay have the advantage of knowing how important fast onboarding is when building their technology stacks out ready for launch.
9. Mollie 👉 Why? A prominent player in the digital agency space, Mollie works closely with large volumes of retail merchants that expect tight turnarounds within short dev sprint windows.

How are these providers achieving fast onboarding?

➡️ Automation & AI in KYC. Real-time identity verification, AI-driven fraud detection, and automated compliance checks reduce delays.
➡️ API-first & low-code integration. Pre-built flows (e.g., Stripe Connect, Adyen’s onboarding API) help platforms integrate payments faster.
➡️ Pre-vetted risk tiers. Many providers segment merchants into different risk levels, allowing lower-risk merchants to onboard almost instantly.
➡️ Embedded compliance. Instead of a one-size-fits-all compliance process, leading providers integrate regulatory requirements dynamically based on merchant location and business type.

Payments partnerships are evolving fast, and speed of onboarding is no longer a nice-to-have; it’s a make-or-break factor. The race for faster, frictionless onboarding will only intensify in 2025.
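The pre-vetted risk-tier pattern boils down to a routing decision at sign-up. Here is a hypothetical sketch; the categories, volume threshold, and path names are invented for illustration and do not reflect any provider's actual rules:

```python
# Sketch: routing a new merchant to an onboarding path by risk tier.
# All thresholds and category lists below are illustrative assumptions.

LOW_RISK_CATEGORIES = {"retail", "software", "food_delivery"}

def onboarding_path(category: str, expected_monthly_volume: float, sanctions_hit: bool) -> str:
    """Route a merchant to instant activation, automated KYC, or manual review."""
    if sanctions_hit:
        return "manual_review"        # compliance flags always force human review
    if category in LOW_RISK_CATEGORIES and expected_monthly_volume < 50_000:
        return "instant_activation"   # low-risk tier: activate the MID immediately
    return "automated_kyc"            # everything else: automated checks first

print(onboarding_path("retail", 10_000, False))   # instant_activation
print(onboarding_path("retail", 100_000, False))  # automated_kyc
```

The win is that the common low-risk case skips the queue entirely, while anything flagged still gets the full review.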
-
How To Handle Sensitive Information in Your Next AI Project

It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintaining trust and complying with privacy regulations. Here are 5 best practices to follow:

1. Identify and Classify Sensitive Data
Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as GDPR or the California Consumer Privacy Act.

2. Minimize Data Exposure
Only share the necessary information with AI endpoints. For PII such as names, addresses, or social security numbers, consider redacting this information before making API calls, especially if the data could be linked to sensitive applications like healthcare or financial services.

3. Avoid Sharing Highly Sensitive Information
Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

4. Implement Data Anonymization
When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

5. Regularly Review and Update Privacy Practices
Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.
Remember, safeguarding sensitive information is not just about compliance — it's about earning and keeping the trust of your users.
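Practice 2 (redact PII before making API calls) can be sketched with a few regex substitutions. A minimal illustration only, assuming simple well-formed inputs; the patterns below catch obvious formats and a production system should use a vetted PII-detection library instead:

```python
# Sketch: replacing obvious PII patterns before text leaves your systems.
# These regexes are illustrative, not an exhaustive or robust PII detector.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Substitute each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "Contact jane@example.com, SSN 123-45-6789, card 4111 1111 1111 1111."
print(redact(msg))  # each match becomes [EMAIL], [SSN], or [CARD]
```

Run the redaction as close to the data source as possible, so raw identifiers never reach the AI endpoint or its logs.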
-
Customer Experience Trends for 2025: The Rise of AI Agents

Sam Altman predicts we’re only a few thousand days away from Artificial General Intelligence (AGI)—intelligence comparable to a human co-worker, capable of excelling across fields. He envisions AI evolving through five stages: from conversational chatbots (Level 1) to systems that can act autonomously (Level 3) and even perform the work of entire organizations (Level 5).

Today, we’re moving into Level 2 with reasoning models like OpenAI’s o1, which can tackle complex problems. But the real game-changer lies ahead: **AI agents**—autonomous systems that do more than answer questions; they take action. Imagine planning a road trip in Australia. Instead of juggling websites for flights, hotels, and activities, your AI agent could handle everything seamlessly—and even reschedule your dinner if you're running late.

Big players are already moving fast: Microsoft, Salesforce, Google, and Nvidia are all working on tools to build and deploy AI agents. This evolution could redefine customer experience (CX), eliminating friction and creating hyper-personalized interactions.

For companies, the challenge is clear: don’t view AI agents as just a productivity booster. The key question is, **“How can this technology make our customers happier?”** Those who focus solely on efficiency will miss the bigger picture. The potential for AI agents to transform CX is massive. But the winners will be those who use it not just to save time or cut costs but to deliver moments of delight.