Otter.ai Enterprise Data Privacy Risks


Summary

Otter.ai Enterprise Data Privacy Risks refer to the potential legal, ethical, and data protection challenges that arise when organizations use Otter.ai's AI-powered meeting transcription tools, especially concerning the collection, storage, and use of sensitive business and personal information without proper consent or safeguards. These risks are especially significant for industries handling confidential or regulated data, as Otter.ai may use recorded conversations to train its AI, sometimes without explicit approval from all participants.

  • Review consent policies: Always ensure that every meeting participant is informed and has agreed to any recording or transcription, especially in regulated sectors or regions with strict privacy laws.
  • Audit tool settings: Regularly check Otter.ai’s default settings and privacy controls to understand how your data is being used and whether recordings might be shared or used for AI training.
  • Establish clear guidelines: Create and communicate a company-wide policy that addresses which tools can be used, when recordings are allowed, and how sensitive data should be managed to avoid legal or ethical breaches.
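The "audit tool settings" step above can be sketched as a simple checklist script. This is an illustrative sketch only: the setting names below are hypothetical, not Otter.ai's actual admin options, so map them to whatever your vendor's console or API actually exposes.

```python
# Hypothetical audit of a meeting-tool workspace's privacy settings
# against company policy. Setting names are illustrative only -- check
# your vendor's real admin console or API for the actual options.

POLICY = {
    "ai_training_on_recordings": False,  # recordings must not train vendor AI
    "notify_all_participants": True,     # every participant gets a consent prompt
    "auto_share_summaries": False,       # no automatic emailing of summaries
}

def audit(settings: dict) -> list[str]:
    """Return the settings that violate POLICY."""
    return [key for key, required in POLICY.items()
            if settings.get(key) != required]

# Example: a workspace still running on permissive defaults
workspace = {
    "ai_training_on_recordings": True,
    "notify_all_participants": False,
    "auto_share_summaries": True,
}
print(audit(workspace))
```

Running this against a compliant workspace returns an empty list; anything it prints is a setting to fix or escalate.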
Summarized by AI based on LinkedIn member posts
  • View profile for Soribel F.

    I Help Organizations Build AI Governance That Meets Regulatory Standards | AI Governance Compliance Advisor | CIPM, CIPP/E/US | LinkedIn Learning Instructor | Keynote Speaker | ex-Meta, ex-Microsoft | CFR Term-Member

    16,153 followers

    Your hospital's executive assistant just recorded your entire patient consultation meeting on Otter.

    ✅ PHI discussed? Check. ✅ Treatment plans mentioned? Check. ✅ Patient names dropped? Check. 🛑 Consent asked? Nope.

    And now that recording is sitting in Otter's cloud, training their #AI model on your patients' protected #health information.

    "But Soribel, we're a HOSPITAL! We would NEVER violate HIPAA!"

    Yeah...except you just did. And you didn't even know it.

    👼 That cute little AI notetaker everyone's bringing to meetings? The one that auto-sends summaries to everyone's email? That's not a productivity tool in healthcare. That's a HIPAA violation factory.

    Here's what's happening RIGHT NOW at your hospital: Your exec assistant is using Fireflies to record leadership meetings where patient cases get discussed. Your research team is using Fathom to capture collaboration sessions with external universities. Your care coordinators are using Otter to document care transition planning calls.

    And NONE of them asked: "Where does this PHI go?" "Can patients request deletion?" "Did we even get consent?"

    "Ok Soribel, but it's just meeting notes! We're trying to be efficient!"

    Cool. Efficiency doesn't matter when OCR comes knocking. Remember those HIPAA fines? Yeah. They start at $100 per violation. PER DAY. And go up to $1.5 million annually per violation category. One meeting with 10 patients discussed = 10 violations. Do that weekly for a year = 520 violations. You do the math.

    My friends, meeting recorders are the new shadow IT. Except the legal risk is 10x higher because it's not just YOUR data. It's PATIENT data. You need a meeting recording policy. Like, yesterday.

    Here's what it needs to include:
    🛑 NEVER record meetings with PHI unless absolutely necessary
    ✅ Approved tools ONLY (with signed BAAs - Business Associate Agreements)
    📋 Consent requirements (from ALL participants, including patients if applicable)
    🗄️ Data retention limits (how long before auto-delete)
    🔐 Access controls (who can view recordings and transcripts)

    Without this? Your team is winging it with protected health information.

    The AI vendors selling you these tools? They won't tell you this. They're too busy bragging about their "healthcare customers." But if you're letting staff use AI meeting recorders without a clear policy on PHI handling, you're one patient complaint away from an expensive wake-up call.

    At a $2 billion hospital system I spoke with recently, they couldn't even tell me which meeting recording tools their 12,000+ employees were using. They just knew "it was happening." That's not AI governance. That's AI chaos. Clarity = speed. Confusion = HIPAA violations.

    Need help auditing which meeting recorders are being used in your hospital and building a policy that doesn't kill productivity? DM me. Mandatory pic for the algo. #AIGovernance #AIStrategy #HealthcareAI #HIPAA #HealthIT #algorithmsarepersonal
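The post's back-of-the-envelope exposure math can be checked directly. This is an illustrative calculation using only the figures quoted in the post; real OCR penalties are tiered by culpability, adjusted for inflation, and (as the post notes) continuing violations can accrue per day, so treat this as a floor, not a forecast.

```python
# Illustrative HIPAA exposure math using the figures quoted in the post.
# Real penalties are tiered by culpability; continuing violations can
# also accrue per day, so this is a minimum, not an estimate.

fine_per_violation = 100       # lowest tier quoted: $100 per violation
patients_per_meeting = 10      # each patient discussed = one violation
meetings_per_year = 52         # one recorded meeting per week

violations = patients_per_meeting * meetings_per_year
print(violations)                      # 520 violations per year

minimum_exposure = violations * fine_per_violation
print(f"${minimum_exposure:,}")        # $52,000 at the lowest tier

annual_cap = 1_500_000                 # quoted annual cap per violation category
print(min(minimum_exposure, annual_cap))
```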

  • View profile for James Kavanagh

    Founder & CEO, AI Career Pro | Creator of the AI Governance Practitioner Program | Led Governance and Engineering Teams at Microsoft & Amazon

    9,805 followers

    Are you risking your company's IP and customer personal data for the convenience of meeting transcription? Convenience is great, but not at the cost of accidentally donating your crown-jewel knowledge and customer personal data to someone else's AI lab.

    AI-powered meeting transcription services are becoming increasingly popular - they offer so much convenience, sometimes even for free. I spent a few days combing through the actual Privacy Policies and Terms of Service for four popular AI notetakers—Otter.ai, Read.ai, Fireflies.ai, and tl;dv—to see whether they train their models on your conversations. I have no association with any of them, but what I found is worrying.

    Here's the short version:
    🔹 Otter.ai – On by default. Otter trains its speech-recognition models on 'de-identified' audio and text of your conversations. They claim that personal identifiers are stripped, but your confidential data still fuels their AI unless you negotiate a restriction.
    🔹 Read.ai – Your choice. By default your data is not used. If you opt in to its Customer Experience Program, your transcripts can help improve the product.
    🔹 Fireflies.ai – Aggregated-only. They forbid training on identifiable content, limiting themselves to anonymised usage statistics. No individual transcript feeds their AI.
    🔹 tl;dv – Never. They explicitly prohibit using customer recordings for model training. Transcript snippets sent to their AI engine are anonymised, sharded, and not retained.

    Why it matters: Even "de-identified" data can leak competitive IP or sensitive customer information if models are ever breached or repurposed. Business recordings can contain personal data, meaning you're still on the hook for consent, minimisation, and transfer safeguards. Your management, board and clients may assume you've locked this down; finding out later is awkward at best, non-compliant at worst.

    By the way - true anonymisation of data is exceptionally difficult, especially in complex data like speech. Claims that only 'de-identified' data is used for training need to be scrutinised. Not one of the products reviewed provided any meaningful technical information about how they achieve this.

    What to do next:
    1. Read the legal docs—marketing pages are full of assurances, but they don't tell the full story. Read the privacy policies and terms of service.
    2. Decide your red line: zero training, aggregated-only, or opt-in?
    3. Configure or negotiate: most vendors offer enterprise DPAs or private-cloud options if you ask.
    4. Review the consent flows: it's not just your rights—your guests' data is in play too. Have you asked the meeting participants if they are happy to hand their personal data and IP to a third party?

    Convenience is great, but not at the cost of accidentally donating your crown-jewel knowledge to someone else's AI lab. I write about Doing AI Governance for real at ethos-ai.org. Subscribe for free analysis and guidance: https://ethos-ai.org #AIGovernance
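The four training postures summarized in this post, plus the "decide your red line" step, can be captured in a small lookup. This sketch encodes only what the post reports (vendor terms change, so verify against each vendor's current Privacy Policy and ToS before relying on it); the posture labels and red-line set are illustrative choices, not vendor terminology.

```python
# Vendor training postures as summarized in the post above. May be
# outdated -- always re-check each vendor's current legal documents.
VENDOR_TRAINING = {
    "Otter.ai":     "on_by_default",    # trains on 'de-identified' content
    "Read.ai":      "opt_in",           # only via its Customer Experience Program
    "Fireflies.ai": "aggregated_only",  # anonymised usage stats, no transcripts
    "tl;dv":        "never",            # prohibits training on recordings
}

# Step 2, "decide your red line" -- this example accepts any posture
# that does not train on your content by default.
ACCEPTABLE = {"never", "aggregated_only", "opt_in"}

def needs_review(vendors: dict) -> list[str]:
    """Vendors whose posture fails the red line: negotiate a DPA or opt out."""
    return [name for name, posture in vendors.items()
            if posture not in ACCEPTABLE]

print(needs_review(VENDOR_TRAINING))
```

With this particular red line, only the on-by-default vendor is flagged; tightening `ACCEPTABLE` to `{"never"}` would flag three of the four.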

  • View profile for Brett Colvin

    CEO and Co-founder at Goodlawyer | Building the most desirable way to practice law.

    35,324 followers

    What happens when a company shifts its legal duties onto its users? Otter AI is about to find out.

    Otter is being sued for secretly recording conversations on Zoom, Google Meet, and Microsoft Teams without consent, and using them to train its AI.

    Here's what Séamus M., an AI and data privacy lawyer at Goodlawyer, had to say: "In California, the law is straightforward: you require a GENUINE 'yes' to tape private conversations. Hiding that 'yes' in 30 pages of legalese isn't consent. It's an affront to user rights."

    Consent from account holders is one thing – but what about everyone else in the meeting? A representative of Otter shared the following: "Our Terms of Service make clear that users are responsible for obtaining any necessary permissions before [initiating recordings]."

    Is Otter dodging responsibility by shifting its legal obligations onto its account holders? The lawsuit argues yes. And here's why this matters, even if you've never used Otter: this case will ripple far beyond one company, and far beyond California. It's about how an entire tech ecosystem balances innovation with responsibility.

    Back to Séamus: "It will determine the manner in which a notoriously avant-garde tech ecosystem reconciles innovation with responsibility."

    This is exactly the kind of issue lawyers help businesses navigate — innovation with integrity. Do you think "buried in the Terms of Service" counts as real consent?

  • View profile for Luiza Jarovsky, PhD

    Co-founder of the AI, Tech & Privacy Academy (1,400+ participants), Author of Luiza’s Newsletter (94,000+ subscribers), Mother of 3

    131,281 followers

    🚨 BREAKING: An extremely important lawsuit at the intersection of PRIVACY and AI was filed against Otter over its AI meeting assistant's lack of CONSENT from meeting participants. If you use meeting assistants, read this:

    Otter, the AI company being sued, offers an AI-powered service that, like many in this business niche, can transcribe and record the content of private conversations between its users and meeting participants (who are often NOT users and do not know that they are being recorded). Various privacy laws in the U.S. and beyond require that, in such cases, consent from meeting participants is obtained.

    The lawsuit specifically mentions:
    - The Electronic Communications Privacy Act;
    - The Computer Fraud and Abuse Act;
    - The California Invasion of Privacy Act;
    - California's Comprehensive Computer Data and Fraud Access Act;
    - The California common law torts of intrusion upon seclusion and conversion;
    - The California Unfair Competition Law.

    As more and more people use AI agents, AI meeting assistants, and all sorts of AI-powered tools to "improve productivity," privacy aspects are often forgotten (in yet another manifestation of AI exceptionalism). In this case, according to the lawsuit, the company has explicitly stated that it trains its AI models on recordings and transcriptions made using its meeting assistant.

    The main allegation is that Otter obtains consent only from its account holders but not from other meeting participants. It asks users to make sure other participants consent, shifting the privacy responsibility. As many of you know, this practice is common, and various AI companies shift the privacy responsibility to users, who often ignore (or don't know) what national and state laws actually require.

    So if you use meeting assistants, you should know that it's UNETHICAL and in many places also ILLEGAL to record or transcribe meeting participants without obtaining their consent. Additionally, it's important to keep in mind that AI companies might use this data (which often contains personal information) to train AI, and there could be leaks and other privacy risks involved.

    👉 Link to the lawsuit below.
    👉 Never miss my curations and analyses on AI's legal and ethical challenges: join my newsletter's 74,000+ subscribers.
    👉 To learn more about the intersection of privacy and AI (and many other topics), join the 24th cohort of my AI Governance Training in October.

  • View profile for Cecilia Ziniti

    CEO & Co-Founder, GC AI | General Counsel and CLO | Host of CZ & Friends Podcast

    23,521 followers

    👀 So, is opt-out + relying on your users to get others' consent enough? Big AI + privacy case to watch: Otter.ai faces a proposed class action in California federal court.

    Important read for product counsel, because the allegations go deep on the user experience of the Otter software - the complaint includes screenshots of the look and wording of the consent screens and even compares Otter to other providers' consent mechanics.

    The complaint alleges that Otter Notetaker, the AI-powered meeting assistant, records and accesses private conversations *without* consent from all meeting participants… and then uses those recordings to train its models. The allegation is that in two-party consent states like California, taking and using recordings without consent is a wiretap.

    That raises the question: who's accountable for consent and data use, the vendor or the business user? Can terms of service be meaningful notice?

    What could go wrong? See an example below - the meeting recorder stayed in the meeting _after_ its owner left the call. Spoiler: that conversation killed their deal.

    Some key quotes from the complaint:
    💬 "When default account settings are used, Otter does not send a pre-meeting invitation or notification to obtain consent from meeting participants. Instead, Otter accountholders must toggle this setting 'On' for it to apply to pre-scheduled meetings."
    💬 "In effect, Otter tries to shift responsibility, outsourcing its legal obligations to its accountholders, rather than seeking permission and consent from the individuals Otter records, as required by law."

    What do you think? Are vendors doing enough on consent, or will the burden keep falling on legal teams?

    PS - interesting from a product counsel perspective
    PPS: we designed GC AI with opt-in and confidentiality from the start. We have zero data retention with our model providers, and information inputted remains privileged. #legalAI #GCAI #otterai #otterlawsuit #productcounsel
