Your hospital's executive assistant just recorded your entire patient consultation meeting on Otter. ✅ PHI discussed? Check. ✅ Treatment plans mentioned? Check. ✅ Patient names dropped? Check. 🛑 Consent asked? Nope.

And now that recording is sitting in Otter's cloud, training their #AI model on your patients' protected #health information.

"But Soribel, we're a HOSPITAL! We would NEVER violate HIPAA!" Yeah...except you just did. And you didn't even know it.

👼 That cute little AI notetaker everyone's bringing to meetings? The one that auto-sends summaries to everyone's email? That's not a productivity tool in healthcare. That's a HIPAA violation factory.

Here's what's happening RIGHT NOW at your hospital: Your exec assistant is using Fireflies to record leadership meetings where patient cases get discussed. Your research team is using Fathom to capture collaboration sessions with external universities. Your care coordinators are using Otter to document care transition planning calls.

And NONE of them asked: "Where does this PHI go?" "Can patients request deletion?" "Did we even get consent?"

"Ok Soribel, but it's just meeting notes! We're trying to be efficient!" Cool. Efficiency doesn't matter when OCR (the HHS Office for Civil Rights) comes knocking.

Remember those HIPAA fines? They start at $100 per violation, accruing per day for continuing violations, and go up to $1.5 million annually per violation category. One meeting with 10 patients discussed = 10 violations. Do that weekly for a year = 520 violations. You do the math.

My friends, meeting recorders are the new shadow IT. Except the legal risk is 10x higher, because it's not just YOUR data. It's PATIENT data.

You need a meeting recording policy. Like, yesterday.
Here's what it needs to include:

🛑 NEVER record meetings with PHI unless absolutely necessary
✅ Approved tools ONLY (with signed BAAs - Business Associate Agreements)
📋 Consent requirements (from ALL participants, including patients if applicable)
🗄️ Data retention limits (how long before auto-delete)
🔐 Access controls (who can view recordings and transcripts)

Without this? Your team is winging it with protected health information.

The AI vendors selling you these tools? They won't tell you this. They're too busy bragging about their "healthcare customers." But if you're letting staff use AI meeting recorders without a clear policy on PHI handling, you're one patient complaint away from an expensive wake-up call.

At a $2 billion hospital system I spoke with recently, they couldn't even tell me which meeting recording tools their 12,000+ employees were using. They just knew "it was happening." That's not AI governance. That's AI chaos.

Clarity = speed. Confusion = HIPAA violations.

Need help auditing which meeting recorders are being used in your hospital and building a policy that doesn't kill productivity? DM me.

#AIGovernance #AIStrategy #HealthcareAI #HIPAA #HealthIT #algorithmsarepersonal
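The "you do the math" arithmetic above can be sketched as a quick back-of-the-envelope calculation. A minimal sketch, using the post's round penalty numbers (actual HIPAA penalty tiers are more nuanced, and the function name here is illustrative, not from any official tool):

```python
# Illustrative HIPAA exposure math from the post above.
# Penalty figures are the post's round numbers, not legal advice.
MIN_PENALTY_PER_VIOLATION = 100      # lowest penalty tier, per violation
ANNUAL_CAP_PER_CATEGORY = 1_500_000  # annual cap per violation category

def annual_violations(patients_per_meeting: int, meetings_per_week: int) -> int:
    """Each patient discussed in a recorded meeting counts as one violation."""
    return patients_per_meeting * meetings_per_week * 52

violations = annual_violations(patients_per_meeting=10, meetings_per_week=1)
exposure = min(violations * MIN_PENALTY_PER_VIOLATION, ANNUAL_CAP_PER_CATEGORY)
print(violations, exposure)  # 520 violations, $52,000 at the minimum tier
```

Even at the lowest tier, one weekly meeting produces a five-figure annual exposure; willful-neglect tiers scale that toward the $1.5 million cap.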
Managing Data Privacy Risks in Meeting Software
Explore top LinkedIn content from expert professionals.
Summary
Managing data privacy risks in meeting software means identifying and addressing the ways confidential information can be exposed or mishandled when meetings are recorded or transcribed using digital tools, especially those powered by AI. This involves understanding how these tools store, share, and use data, and ensuring compliance with privacy laws to protect sensitive information from unauthorized access.
- Review tool policies: Always check the privacy settings and terms of service for any meeting software to make sure your data isn’t being used for AI training or stored without your consent.
- Establish clear guidelines: Set specific rules about when meetings can be recorded, which tools are approved, and what types of information can be shared or transcribed.
- Ensure participant consent: Make it a habit to inform and get agreement from everyone involved before recording or transcribing a meeting, especially if personal or sensitive data is discussed.
Are you risking your company’s IP and customer personal data for the convenience of meeting transcription? Convenience is great, but not at the cost of accidentally donating your crown-jewel knowledge and customer personal data to someone else’s AI lab.

AI-powered meeting transcription services are becoming increasingly popular - they offer so much convenience, sometimes even for free. I spent a few days combing through the actual Privacy Policies and Terms of Service for four popular AI notetakers—Otter.ai, Read.ai, Fireflies.ai, and tl;dv—to see whether they train their models on your conversations. I have no association with any of them, but what I found is worrying.

Here’s the short version:
🔹 Otter.ai – On by default. Otter trains its speech-recognition models on 'de-identified' audio and text of your conversations. They claim that personal identifiers are stripped, but your confidential data still fuels their AI unless you negotiate a restriction.
🔹 Read.ai – Your choice. By default your data is not used. If you opt in to its Customer Experience Program, your transcripts can help improve the product.
🔹 Fireflies.ai – Aggregated-only. They forbid training on identifiable content, limiting themselves to anonymised usage statistics. No individual transcript feeds their AI.
🔹 tl;dv – Never. They explicitly prohibit using customer recordings for model training. Transcript snippets sent to their AI engine are anonymised, sharded, and not retained.

Why it matters:
- Even “de-identified” data can leak competitive IP or sensitive customer information if models are ever breached or repurposed.
- Business recordings can contain personal data, meaning you’re still on the hook for consent, minimisation, and transfer safeguards.
- Your management, board and clients may assume you’ve locked this down; finding out later is awkward at best, non-compliant at worst.

By the way - true anonymisation of data is exceptionally difficult, especially in complex data like speech.
Claims that only 'deidentified' data is used for training needs to be scrutinised. Not one of the products reviewed provided any meaningful technical information about how they achieve this. What to do next: 1. Read the legal docs—marketing pages are full of assurances, but they don’t tell the full story. Read the privacy policies and terms of service. 2. Decide your red line: zero training, aggregated-only, or opt-in? 3. Configure or negotiate: most vendors offer enterprise DPAs or private-cloud options if you ask. 4. Review the consent flows: it’s not just your rights—your guests’ data is in play too. Have you asked the meeting participants if they are happy to hand their personal data and IP to a third party? Convenience is great, but not at the cost of accidentally donating your crown-jewel knowledge to someone else’s AI lab. I write about Doing AI Governance for real at ethos-ai.org. Subscribe for free analysis and guidance: https://ethos-ai.org #AIGovernance
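The "decide your red line" step lends itself to a simple, auditable check. A minimal sketch, encoding the four vendors' training stances exactly as the review above summarizes them (a point-in-time snapshot; the labels and the allowlist are illustrative, so re-verify each policy before relying on this):

```python
# The four vendors' model-training stances as summarized in the review above
# (snapshot at time of review - verify against current policies before use).
VENDOR_TRAINING_POLICY = {
    "Otter.ai":     "default_on_deidentified",  # trains unless you negotiate out
    "Read.ai":      "opt_in",                   # not used unless you opt in
    "Fireflies.ai": "aggregated_only",          # usage stats only, no transcripts
    "tl;dv":        "never",                    # training explicitly prohibited
}

# Example red line: no transcript-derived training at all.
ACCEPTABLE = {"never", "aggregated_only"}

approved = [v for v, policy in VENDOR_TRAINING_POLICY.items() if policy in ACCEPTABLE]
print(approved)  # ['Fireflies.ai', 'tl;dv']
```

Keeping the stances as data rather than prose makes it trivial to re-run the check whenever a vendor updates its terms.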
-
Shadow AI is the unauthorized use of AI by employees. Shadow AI is no longer just about chatbots. It now includes AI transcription tools, meeting assistants, note-takers, coding copilots, and data analysis platforms used outside approved systems. Employees rely on these tools to move faster - recording calls, summarizing meetings, drafting content, and processing internal data. The efficiency gains are real. So is the risk. A recent survey found that 43% of AI users admit to sharing sensitive company information with AI tools without employer knowledge. This is not fringe behavior - it is happening across organizations.

Where the Risk Shows Up

Unauthorized transcription and recording tools can create immediate exposure. Employees may record and transcribe calls without proper consent or approval. Even when recording is lawful, companies often lack control over where transcripts are stored or how long they are retained.

Confidential and proprietary data is also at risk. Employees may input product plans, pricing, source code, or strategy into AI tools. Once submitted, that information may be stored, reused, or accessed outside the company’s control.

Customer and employee data presents another layer of exposure. AI tools frequently process personal data, and unapproved use can create privacy, contractual, and regulatory risk.

There is also decision-making risk. AI-generated summaries or analyses may be incomplete or inaccurate. If relied on without validation, they can affect business outcomes.

Finally, uncontrolled records are a growing concern. Transcripts, summaries, and AI-generated outputs often sit outside official systems. In litigation or investigations, these “shadow records” can surface unexpectedly and create inconsistencies.

A Simple Example

An employee uses a free AI transcription tool to record a customer call and generate a summary. The tool stores the transcript externally with unclear retention terms.
The employee then uses a chatbot to refine follow-up messaging, including pricing details. The company now faces layered risk: recording issues, uncontrolled sensitive data, and records outside its governance framework.

What Companies Should Do

Companies should not try to eliminate AI use - they should control it. That starts with identifying what tools employees are already using and why. Organizations should provide approved, secure alternatives that align with business needs and include clear data protections and retention controls. Policies must define what data can be used with AI tools and when recording or transcription is permitted. Employees should be trained on practical risks, not just policies. Most importantly, companies need clear accountability over who can approve tools and how AI use is monitored.

The Bottom Line

If companies do not control how these tools are used, they lose control over their data, their records, and ultimately their risk.
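"Identifying what tools employees are already using" often starts with the OAuth grants employees have given third-party apps against the company calendar or mail. A minimal sketch, assuming a hypothetical export of grant records (the field names, domains, and allowlist here are invented for illustration, not any real admin API):

```python
# Flag unapproved AI tools from a (hypothetical) OAuth-grant export.
# Grant records and the allowlist are illustrative, not a real API schema.
APPROVED_AI_TOOLS = {"approved-notetaker.example.com"}

grants = [
    {"user": "alice", "app_domain": "freetranscriber.example.net",
     "scopes": ["calendar.read", "meetings.record"]},
    {"user": "bob", "app_domain": "approved-notetaker.example.com",
     "scopes": ["calendar.read"]},
]

# Anything granted access that is not on the allowlist is a shadow-AI lead.
shadow = [g for g in grants if g["app_domain"] not in APPROVED_AI_TOOLS]
for g in shadow:
    print(f"unapproved AI tool for {g['user']}: {g['app_domain']}")
```

Real workspace admin consoles expose equivalent grant inventories; the point is that the audit is a mechanical filter once you have written down an allowlist.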
-
🎙️🤖 Business and internal meetings are increasingly dynamic, information-dense and fast-paced. Many employees struggle to take accurate notes while actively participating in discussions, which makes recording conversations or using #AI based voice transcription tools an increasingly tempting solution. These technologies promise efficiency and better knowledge retention - but they also raise significant GDPR compliance risks, as recently highlighted by European data protection authorities.

The Datu valsts inspekcija/Data State Inspectorate of Latvia has clarified that meeting recordings almost always involve personal data, since they capture identifiable individuals’ statements, opinions and behaviour. Where an employee independently decides to record a meeting and determines why and how the recording will be used, that employee effectively becomes a data controller and must respect data protection principles.

The DPA distinguishes three common scenarios. If a recording is made solely for strictly private use - for example as a personal memory aid - and is not shared or reused, the GDPR may not apply under the household exemption. Covert recording may be justified only in exceptional circumstances, such as documenting harassment or other unlawful conduct, and only where it is necessary, proportionate and the only realistic way to obtain evidence. Even then, only the minimum necessary material should be disclosed and the seriousness of the wrongdoing must outweigh the privacy rights of those recorded. By contrast, recordings made for work-related purposes, such as preparing minutes or sharing information internally, always fall under the GDPR and require transparency - secret recording is not permissible.

A complementary perspective comes from the Agencia Española de Protección de Datos (AEPD), which has analysed the growing use of AI voice transcription systems in professional contexts.
As a general rule, a person’s voice constitutes personal data where it can identify an individual directly or indirectly, particularly when combined with metadata such as IP addresses, call logs, application usage data or contextual information. The AEPD highlights that transcription services often involve multiple processing purposes: producing transcripts (e.g. meeting minutes) and, in many cases, reusing voice data to retrain AI models. Where providers reuse such data for system development, they typically act as independent controllers with their own legal basis. Organisations deploying transcription tools must exercise due diligence in selecting providers and ensure full compliance with Article 28 GDPR.

For organisations, this creates a concrete governance and risk-management issue. Addressing it requires clear internal rules covering when recordings are permitted, how AI tools may be used, transparency towards participants, provider due diligence, secondary data uses and employees’ rights. #gdpr #rodo
-
Most leaders ask the wrong question when evaluating AI meeting assistants. They ask: “How accurate is it?” When the question they should be asking is: “How safe is it?”

Accuracy matters, but if your AI notetaker is training on your private 1:1s, roadmap discussions, or performance reviews, the risk isn’t worth the convenience. That’s why we built the AI Meeting Assistant Security Checklist: a practical guide to help you audit your current (or future) tools and protect your company’s most sensitive conversations.

Inside, you’ll learn:
• The must-have compliance frameworks (SOC 2, HIPAA, ISO 27001)
• Data retention and deletion policies to demand
• Admin controls and usage rules to enforce company-wide security

Security isn’t a nice-to-have. It’s step one. Download the checklist here: https://lnkd.in/gPgEfmUZ
-
The AI notetaker on your last call? It might already be a compliance problem. Here's what's happening right now, and what you need to know.

Regulators in the EU, UK, and US are all moving at the same time on AI meeting tools. And the pace is faster than most teams realise.

In the EU, Article 50 of the AI Act requires organisations to clearly label AI-generated content so people know they're not reading something a human wrote. That enforcement window opens around August 2026. Which sounds far away until you realise your organisation's templates, workflows, and vendor agreements haven't changed yet.

In the UK, the ICO is clear. If your transcription tool identifies who's speaking, it's processing biometric data. That means you need a lawful basis. "We didn't think about it" is not a lawful basis.

In the US, it's messier. One-party consent covers most states, but California and Illinois require everyone's agreement before you hit record. And there's a wave of class action litigation building around AI tools that identify speakers, arguing those voiceprints are biometric data under Illinois' BIPA law. These cases are not hypothetical. They're in court now.

What does this mean for you as an assistant? You are often the person who sets up the meeting. Books the room. Sends the calendar invite. Turns on the recording. Which means you're also the person who needs to understand what that recording does once it exists.

You don't need to be a lawyer to manage this risk. You need to know the seven steps, and you need to brief your principal before they find out about this the hard way.

The tools haven't changed. The regulations around them have. Know the rules. Protect your executive. That's what strategic looks like.
-
🚨 Otter.ai is being sued in California, and the outcome could change how AI meeting tools handle privacy.

A class-action lawsuit claims that Otter’s “Notetaker” bot recorded and transcribed Zoom meetings without consent, even when some participants were not Otter users. The recordings were also said to be used to train its speech recognition models, raising questions under federal wiretap laws and California’s all-party consent rules.

Here is what is at stake:
📜 The complaint cites several laws, including ECPA, CIPA, and CFAA, as well as state privacy and competition claims.
👥 Otter has more than 25 million users and has transcribed over a billion meetings, so the potential impact is huge.
💰 Under California law, damages could reach $5,000 per consumer.

The bigger picture goes beyond one company:
1. Consent is not just a box to tick. Courts will not accept fine print if people did not truly agree.
2. Silence is risky. If participants do not know they are being recorded, trust is lost.
3. Using recordings for AI training is not harmless. Even de-identified data can often be traced back.
4. The financial and reputational risks are serious. A class action on this scale could be devastating.
5. This is part of a wider trend. Apple recently paid $95 million over Siri recordings, and more lawsuits are on the horizon.

⚖️ The takeaway: AI companies need to put consent and transparency first. That means clear notifications, documented agreement from all parties, and privacy built into the design.

👉 Do you think this case will finally push AI vendors to put consent before convenience?
-
“𝗘𝗮𝗰𝗵 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁 𝘄𝗶𝗹𝗹 𝗱𝗲𝗰𝗶𝗱𝗲 — 𝗮𝗻𝗱 𝗮𝗰𝘁 — 𝗼𝗻 𝗶𝘁𝘀 𝗼𝘄𝗻.”

That is how SoftBank 𝗳𝗼𝘂𝗻𝗱𝗲𝗿 Masayoshi Son described a future where AI agents don’t just recommend… they 𝘰𝘱𝘦𝘳𝘢𝘁𝘦: coordinating across HR, finance, sales, and even intervening in systems without waiting for humans. We’re moving in that direction. But here’s the uncomfortable truth:

𝗬𝗼𝘂𝗿 “𝗔𝗜 𝗻𝗼𝘁𝗲-𝘁𝗮𝗸𝗲𝗿” 𝗶𝘀 𝗮𝗹𝗿𝗲𝗮𝗱𝘆 𝗮 𝗯𝗮𝗯𝘆 𝗮𝗴𝗲𝗻𝘁. Not because it’s intelligent - but because it is connected.

𝗧𝗵𝗲 𝗿𝗶𝘀𝗸𝘀 𝘄𝗲’𝗿𝗲 𝗻𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗔𝗜 𝗻𝗼𝘁𝗲𝘁𝗮𝗸𝗲𝗿𝘀 𝗮𝗻𝗱 𝗼𝘁𝗵𝗲𝗿 𝘁𝗼𝗼𝗹𝘀 𝗮𝗿𝗲:

1. 𝗖𝗼𝗻𝘀𝗲𝗻𝘁 𝘁𝗵𝗲𝗮𝘁𝗲𝗿
A browser extension can auto-record meetings, sometimes with only a small in-product notification. That is like wiretapping in the name of productivity (laws vary by jurisdiction).

2. 𝗠𝗲𝗲𝘁𝗶𝗻𝗴 𝗹𝗶𝗻𝗸 𝗮𝗰𝗰𝗲𝘀𝘀 = 𝗽𝗲𝗿𝗶𝗺𝗲𝘁𝗲𝗿 𝗰𝗼𝗹𝗹𝗮𝗽𝘀𝗲
If your agent auto-joins based on calendar rules (including “join all events with a web conference link”), you’ve turned a URL into a data-exfil channel.

3. 𝗔 𝗡𝗲𝘄 𝗼𝘃𝗲𝗿𝗻𝗶𝗴𝗵𝘁 𝗱𝗮𝘁𝗮 𝗴𝗼𝗹𝗱𝗺𝗶𝗻𝗲
Voice → text + full transcripts + summaries create a single, searchable vault of strategy, pricing, M&A chatter, customer escalations, legal… everything. Guess what: this is exactly what attackers want.

4. 𝗢𝗔𝘂𝘁𝗵 𝗯𝗹𝗮𝘀𝘁 𝗿𝗮𝗱𝗶𝘂𝘀
Calendar-connected tools are only as safe as the tokens + scopes you granted. If an identity is compromised (and eventually one will be), the attacker doesn’t need to “hack the meeting.” They just log in and export history. You are done.

5. 𝗗𝗮𝘁𝗮 𝘀𝗽𝗿𝗮𝘄𝗹 (𝗦𝗹𝗮𝗰𝗸/𝗗𝗿𝗶𝘃𝗲/𝗖𝗥𝗠𝘀/𝗡𝗼𝘁𝗶𝗼𝗻)
Those meeting recaps you receive in your Slack channels, and the transcripts/recordings auto-synced into Google Drive folders, are today’s convenience - until one permissive channel or folder becomes your breach.

6. “𝗦𝗵𝗮𝗱𝗼𝘄 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲” 𝗱𝗲𝗹𝘂𝘀𝗶𝗼𝗻
Teams routinely pipe transcripts into other LLMs/workflows for summarization and task extraction. Now your meeting data is traveling to places your security team never approved. One day, that data will pop up somewhere completely unexpected.

7. 𝗜𝗻𝘁𝗲𝗴𝗿𝗶𝘁𝘆 𝗿𝗶𝘀𝗸 (𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗰𝗼𝗻𝗳𝗶𝗱𝗲𝗻𝘁𝗶𝗮𝗹𝗶𝘁𝘆)
Most of the time we simply trust that an AI’s action items are set in stone. But bad summaries → bad decisions. Hallucinated action items, missed nuance, and “false certainty” become operational debt.

To be clear: many of these platforms take security seriously (SOC 2, GDPR, etc.). But 𝗰𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲 𝗱𝗼𝗲𝘀𝗻’𝘁 𝘀𝗼𝗹𝘃𝗲 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲-𝗯𝘆-𝗱𝗲𝗳𝗮𝘂𝗹𝘁. A piece of paper saying something is secure does not mean it is impenetrable.

𝗠𝘆 𝗵𝗼𝘁 𝘁𝗮𝗸𝗲: The first “AI agent incident” most companies face won’t be a rogue super-agent… It’ll be a 𝗺𝗲𝗲𝘁𝗶𝗻𝗴 𝘁𝗿𝗮𝗻𝘀𝗰𝗿𝗶𝗽𝘁 in the wrong place. It is already happening.
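The "join all events with a web conference link" rule criticized above can be replaced with a deny-by-default guard. A minimal sketch under invented assumptions: the event schema (`attendees`, `confidential`) and the internal domain are hypothetical, not any real calendar API:

```python
# Safer auto-join rule than "join every event with a meeting link".
# Event fields and domain are illustrative; adapt to your calendar API's schema.
INTERNAL_DOMAIN = "example.com"

def bot_may_join(event: dict) -> bool:
    """Only auto-join when every attendee is internal and the event
    is not flagged confidential; everything else requires a human decision."""
    attendees_internal = all(
        a.endswith("@" + INTERNAL_DOMAIN) for a in event["attendees"]
    )
    return attendees_internal and not event.get("confidential", False)

print(bot_may_join({"attendees": ["a@example.com", "b@example.com"]}))          # True
print(bot_may_join({"attendees": ["a@example.com", "x@partner.example.org"]}))  # False
```

The design choice is the default direction: the bot must qualify to join, rather than everyone else having to remember to exclude it.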
-
Ten years ago, I remember discussing how smartphones made it easier for employees to record conversations. That post seems ancient compared to today’s technology—like that iPod I saw in a museum (and pictured here). Now employers need to worry about devices like Plaud—sleek call recorders and AI note-takers—and Ray‑Ban Meta glasses, which record audio and video by tap or voice, making recording simpler than ever. And then there are meeting bots that auto-join Zoom or Teams calls and generate searchable transcripts, which makes this topic worth revisiting.

Do employers just throw up their hands? No. Instead, employers should set ground rules before recordings unexpectedly become central in disputes.

Connecticut consent law overview

Connecticut allows recording in-person conversations with one-party consent, but prohibits third-party recording without a participant’s consent. Telephonic and in-person recordings are treated differently. And employers have another law to follow regarding electronic monitoring of employees. For example, employers cannot monitor restrooms, locker rooms, or lounges; must give written notice before electronic monitoring; and need consent from all parties to record contract negotiations. (There are some exceptions to ask your counsel about.) Video platforms blur these categories, so the best suggestion is to always get express consent from all participants before recording or using AI transcription.

What should employers do?

Here are a few suggestions to update your existing policies:
- Adopt a modern, specific policy: ban secret recordings where legal, allow clear legal exceptions, and avoid broad language that could restrict protected labor activity.
- Cross-reference your electronic monitoring notice so employees know when the company records.
- Set expectations for important meetings. Establish guidelines on when employees can record conversations and whether consent is needed.
- Train managers to stay calm when encountering AI devices. Use a script: “Company policy prohibits recording meetings without consent; we are not recording and ask that you do not record.”
- And make sure that for multi-state calls, you’re following the strictest consent rule and confirming consent in advance.

Ultimately, clear ground rules, meeting standards, and some manager training go a long way. You don’t necessarily need to ban smart glasses or AI notes to manage risks in 2026, but reminding employees about what is and is not allowed is a good first step.
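The "strictest consent rule" advice for multi-state calls reduces to a simple lookup: if any participant sits in an all-party-consent state, the whole call is treated as all-party. A minimal sketch with a deliberately partial state list (consent law is far more nuanced than a lookup table, so treat this as illustration and confirm with counsel):

```python
# "Strictest rule wins" check for multi-state calls.
# Partial, illustrative list of all-party (two-party) consent states.
ALL_PARTY_CONSENT_STATES = {"CA", "IL", "FL", "PA", "WA"}

def consent_required(participant_states: set[str]) -> str:
    """Return the consent regime the whole call must follow."""
    if participant_states & ALL_PARTY_CONSENT_STATES:
        return "all-party"  # one all-party state on the call binds everyone
    return "one-party"

print(consent_required({"CT", "NY"}))  # one-party
print(consent_required({"CT", "CA"}))  # all-party
```

In practice this is the rule a meeting organizer applies before hitting record: check where participants are joining from, and default to asking everyone when in doubt.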