If you are an organisation using AI or an AI developer, the Australian privacy regulator has just published some vital information about AI and your privacy obligations. Here is a summary of the new guides for businesses published today by the Office of the Australian Information Commissioner, which articulate how Australian privacy law applies to AI and set out the regulator's expectations. The first guide aims to help businesses comply with their privacy obligations when using commercially available AI products and to select an appropriate product. The second provides privacy guidance to developers using personal information to train generative AI models.

GUIDE ONE: Guidance on privacy and the use of commercially available AI products

Top five takeaways
- Privacy obligations will apply to any personal information input into an AI system, as well as to the output data generated by AI (where it contains personal information).
- Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI.
- If AI systems are used to generate or infer personal information, including images, this is a collection of personal information and must comply with APP 3 (which deals with the collection of personal information).
- If personal information is being input into an AI system, APP 6 requires entities to only use or disclose the information for the primary purpose for which it was collected.
- As a matter of best practice, the OAIC recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools.

GUIDE TWO: Guidance on privacy and developing and training generative AI models

Top five takeaways
- Developers must take reasonable steps to ensure accuracy in generative AI models.
- Just because data is publicly available or otherwise accessible does not mean it can legally be used to train or fine-tune generative AI models or systems.
- Developers must take particular care with sensitive information, which generally requires consent to be collected.
- Where developers are seeking to use personal information that they already hold for the purpose of training an AI model, and this was not a primary purpose of collection, they need to carefully consider their privacy obligations.
- Where a developer cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, they should seek consent for that use and/or offer individuals a meaningful and informed ability to opt out of such a use to avoid regulatory risk.

https://lnkd.in/gX_FrtS9
Copilot Data Privacy Guidelines for Businesses
Summary
Copilot data privacy guidelines for businesses help organizations manage and protect sensitive information when using AI tools like Microsoft Copilot, ensuring compliance with privacy regulations and reducing the risk of accidental data exposure. These guidelines outline best practices for handling personal and confidential data, especially in environments where AI processes or summarizes business content.
- Set sharing controls: Limit who can create and share Copilot agent links to prevent sensitive data from reaching unintended audiences and reduce accidental leaks.
- Update privacy policies: Regularly review and revise your privacy documents to clearly explain how AI tools are used and how personal information is handled within your company.
- Audit and classify data: Check which employees are using Copilot, classify sensitive files, and use data loss prevention tools to block AI from processing confidential content in summaries.
Did you know you can stop sensitive data from being summarized by Microsoft 365 Copilot without blocking access to the files themselves?

Here's the scenario: your organization just rolled out Microsoft 365 Copilot to help employees work smarter. Everyone loves it, until someone realizes Copilot could summarize documents labeled "Personal" or "Highly Confidential" in its responses. That's a compliance nightmare waiting to happen.

So, what can you do? Use Microsoft Purview Data Loss Prevention (DLP) with the new Microsoft 365 Copilot policy location. Create a DLP policy that says (sketched after this post):
- Condition: content contains sensitivity labels "Personal" or "Highly Confidential"
- Action: prevent Copilot from processing the content

Now, those files can still appear in citations (because users already have permission), but Copilot won't use the content in summaries.

✅ Why this matters:
- Protects sensitive data without breaking productivity
- Works across Microsoft 365 apps and Copilot Chat
- Supports alerts, notifications, and simulation mode

💡 Pro Tip: updates to DLP policies can take up to 4 hours to reflect in Copilot. Plan ahead for rollouts!

#Microsoft365 #Copilot #DataLossPrevention #MicrosoftPurview #Compliance #Security #AI #InformationProtection #M365Security #DataGovernance
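To make this concrete, here is a minimal Security & Compliance PowerShell sketch of the policy described above. The policy and rule names are placeholders, the `-Location` value is an assumption to verify against the Purview documentation for the Microsoft 365 Copilot policy location, and the Copilot-blocking action is left as a comment rather than guessed:

```powershell
# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module).
Connect-IPPSSession

# Create the policy in simulation mode first; switch -Mode to Enable after review.
# ASSUMPTION: the -Location parameter/value shown is illustrative; confirm the
# exact identifier for the Copilot policy location in the Purview docs.
New-DlpCompliancePolicy -Name "Block Copilot Summaries - Labeled Content" `
    -Mode TestWithoutNotifications `
    -Location "M365Copilot"

# Rule: match content carrying the Personal or Highly Confidential labels.
# The "prevent Copilot from processing" action parameter is not shown here to
# avoid guessing; add it per the Purview documentation (or set it in the portal).
New-DlpComplianceRule -Policy "Block Copilot Summaries - Labeled Content" `
    -Name "Prevent Copilot processing of labeled content" `
    -ContentContainsSensitiveInformation @{
        operator = "Or"
        groups   = @(
            @{
                operator = "Or"
                name     = "LabeledContent"
                labels   = @(
                    @{ name = "Personal"; type = "Sensitivity" },
                    @{ name = "Highly Confidential"; type = "Sensitivity" }
                )
            }
        )
    }
```

Starting in simulation mode (TestWithoutNotifications) matches the rollout advice above; allow for the up-to-4-hour propagation window before judging results.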
-
Microsoft is adding a new tenant-level control in the Microsoft 365 admin center that lets IT choose who can create org-wide sharing links for Copilot agents built in Copilot Studio. This matters because agent links can spread fast and expose sensitive prompts and data outside the right audience. With this control, you can lock down sharing before issues happen, align access with policy, and reduce the chance of accidental data leaks.

You'll find it under Copilot > Settings > Data access > Agents. Admins get three clear choices: allow all users to create sharing links, block everyone, or allow only specific users and security groups (role-based). By default, nothing changes until you set it.

Practical moves: map the "allowed" list to existing security groups, run a small pilot, and audit current agents for risky sharing. Pair the change with a simple user guide on when to share, who to share with, and how to request access, then document exceptions so the help desk isn't guessing.

Rollout starts mid-September 2025, with worldwide completion expected by late September. Use the runway to review your AI governance plan, choose your default (deny by default is safest), and update change-management notes. Also make sure your identity hygiene is solid, since this ties into your Azure Active Directory setup.

Bottom line: this is a quick, high-impact win for keeping Copilot agents controlled and your data where it belongs.

#Microsoft365 #Copilot #Security #ChangeYourPassword

Follow me for practical Microsoft 365 admin tips and real-world Copilot governance updates.
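The admin-center toggle itself is point-and-click, but the practical moves above start with knowing which security groups could back the allow list. A minimal sketch with the Microsoft Graph PowerShell SDK; nothing here is tied to the Copilot setting itself, it just enumerates candidates:

```powershell
# Requires the Microsoft.Graph.Groups module.
Connect-MgGraph -Scopes "Group.Read.All"

# Security-enabled, non-mail-enabled groups are candidates for the
# "specific users and security groups" option. Review names and owners
# before wiring any of them into the new control.
Get-MgGroup -Filter "securityEnabled eq true and mailEnabled eq false" -All |
    Select-Object DisplayName, Id, Description |
    Sort-Object DisplayName
```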
-
Microsoft just sent GitHub Copilot users a polite email. Buried in the friendly tone: starting April 24, your code inputs, outputs, snippets, and context will be used to train AI models. Unless you opt out. Not opt in. Opt out.

For most developers, this is a minor annoyance. For regulated industries, it is a five-alarm fire nobody is hearing yet. If your dev team at a bank or credit union is using Copilot, every code snippet touching core banking logic, API integrations, or internal workflows could end up as training data for models you do not control. Think about that for a second. Your proprietary business logic, potentially feeding a model that serves your competitors.

And "aligns with established industry practices" is doing a lot of heavy lifting in that email. Translation: everyone else is doing it, so you should be fine with it too.

Here is what concerns me most. How many IT leaders at financial institutions even know their developers are using Copilot? How many compliance officers have AI-powered coding tools on their vendor risk register? If the answer is "I'm not sure," you have a governance gap that no exam prep can fix after the fact.

Three things to do before April 24:
1. Audit whether anyone in your organization is using GitHub Copilot.
2. Review and update your GitHub account settings to opt out if you have not already.
3. Add AI-powered developer tools to your vendor management and acceptable use policies. Today.

The tool itself is not the problem. Copilot is genuinely useful. The problem is defaulting into data sharing in environments where data governance is not optional. Your regulators will not accept "we didn't know it was turned on" as an answer.
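For step 1, GitHub's documented Copilot billing API can tell you who actually holds a seat in your organization. A minimal PowerShell sketch, assuming a token with access to org Copilot billing; "your-org" is a placeholder, and only the first page of results is handled for brevity:

```powershell
# List Copilot seat assignments for a GitHub organization.
$org     = "your-org"
$headers = @{
    Authorization          = "Bearer $($env:GITHUB_TOKEN)"
    Accept                 = "application/vnd.github+json"
    "X-GitHub-Api-Version" = "2022-11-28"
}

$resp = Invoke-RestMethod -Headers $headers `
    -Uri "https://api.github.com/orgs/$org/copilot/billing/seats?per_page=100"

# One row per seat: who has Copilot and when they last used it.
$resp.seats | ForEach-Object {
    [pscustomobject]@{
        User         = $_.assignee.login
        LastActivity = $_.last_activity_at
    }
}
```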
-
A Quick Plan for CISOs to Address AI, Fast.

As a CISO or CEO, you have to stay on top of new ideas, risks, and opportunities to grow and protect the business. As we all keep hearing and seeing, LLM/AI usage is increasing every day. This past week my inbox has been full of one question: how do I actually protect my company's data when using AI tools?

Over the last 9 years I have been working on, involved with, and creating LLM/AI cyber and business programs, and as a CISO I have been slowly integrating ideas about AI/cyber operations, data protection, and business. Here are five AI privacy practices that I have found really work and that I recommend to clients, partners, and peers. I group them into three clear areas: Mindset, Mechanics, and Maintenance.

1. Mindset: Build AI Privacy Into the Culture
Privacy isn't just a checklist, it's a behavior.

Practice #1: Treat AI like a junior employee with no NDA. Before you drop anything into ChatGPT, Copilot, or any other AI tool, stop and ask: would I tell this to a freelancer I just hired five minutes ago? That's about the level of control you have once your data is in a cloud-based AI system. This simple mental filter keeps teams from oversharing sensitive client or company info.

Practice #2: Train people before they use the tool, not after. Too many companies slap a "responsible AI use" policy into the employee handbook and call it a day. That's no good. Instead, run short, focused training on how to use AI responsibly, especially around data privacy.

2. Mechanics: Make Privacy Part of the System

Practice #3: Use privacy-friendly AI tools or self-host when possible. Do your research. For highly sensitive work, explore open-source LLMs or self-hosted solutions like private GPTs or on-prem language models. It's a heavier lift, but you control the environment.

Practice #4: Classify your data before using AI. Have a clear, documented data classification policy. Label what's confidential, internal, public, or restricted, and give guidance on what can and can't be included in AI tools. Some organizations embed DLP tools into browser extensions or email clients to prevent slip-ups.

3. Maintenance: Keep It Tight Over Time

Practice #5: Audit AI usage regularly. People get busy. Policies get ignored. That's why you need a regular cadence (quarterly is a good place to start) where you review logs, audit prompts, and check who's using what; one way to pull that data is sketched below.

AI is evolving fast, and privacy expectations are only getting tighter. What other ways are you using LLM/AI in your organization?
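For Practice #5, Microsoft Graph's usage reports are one way to see who's using what in a Microsoft 365 shop. A sketch, with a loud caveat: the Copilot usage report function shown is, to my knowledge, a beta endpoint, so verify its name and availability against the Graph reports documentation before relying on it:

```powershell
# Quarterly audit sketch: export per-user Copilot usage from Microsoft Graph.
# ASSUMPTION: the report function below is a beta endpoint; confirm it in the
# Graph reports documentation for your tenant before automating anything.
Connect-MgGraph -Scopes "Reports.Read.All"

Invoke-MgGraphRequest -Method GET -OutputFilePath "copilot-usage.csv" -Uri `
    'https://graph.microsoft.com/beta/reports/getMicrosoft365CopilotUsageUserDetail(period=''D30'')?$format=text/csv'

# Who is using Copilot, in which apps, and how recently.
Import-Csv "copilot-usage.csv" | Select-Object -First 10
```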
-
Your data is already talking. You just don't hear it yet.

Microsoft Copilot is powerful. But without control, it becomes dangerous. One prompt can expose sensitive information. Most companies enable Copilot quickly. Very few secure it first.

Here's how to lock it down:

1. Audit Copilot access
→ Check who actually needs access
→ Remove access from unnecessary roles

2. Restrict sensitive file permissions
→ Review SharePoint and OneDrive sharing (see the sketch after this list)
→ Limit access to confidential folders

3. Clean outdated shared folders
→ Delete unused legacy documents
→ Remove old public sharing links

4. Apply data loss prevention policies
→ Block copying sensitive company data
→ Set alerts for risky data movement

5. Monitor prompts and activity
→ Track how employees use Copilot
→ Identify unusual or risky queries early

6. Train employees before rollout
→ Teach safe prompting habits
→ Explain what should never be shared

Copilot reads what users can access. Not what they should access. Security is not an IT task. It is a leadership decision. Before scaling AI inside your company, fix what AI can already see.

🔄 Repost for more like this
Follow for posts on CyberSecurity, Data Governance & AI, Modern Workspace
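For steps 2 and 3, a minimal SharePoint Online PowerShell sketch that surfaces sites with permissive sharing so you know where to start reviewing; "contoso" is a placeholder tenant name:

```powershell
# Requires the Microsoft.Online.SharePoint.PowerShell module and SPO admin rights.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Sites where sharing is not disabled deserve a manual review before Copilot
# rollout; sorting by last modification helps spot stale, forgotten sites.
Get-SPOSite -Limit All |
    Where-Object { $_.SharingCapability -ne "Disabled" } |
    Select-Object Url, SharingCapability, LastContentModifiedDate |
    Sort-Object LastContentModifiedDate
```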
-
🔍 Ever seen Copilot in Microsoft 365 return results from personal or sensitive data that you didn't want it to access? Learn how to safeguard your confidential information by leveraging the "BlockContentAnalysisServices" parameter within Microsoft 365 sensitivity labels.

- Without protection: 🚨 Copilot can potentially expose confidential data by accessing and displaying document content when this parameter is not enabled.
- With protection: ✅ Enabling this parameter blocks Copilot from analyzing document content, ensuring the security of your sensitive data.

Note that enabling this parameter could also impact other Microsoft 365 services that rely on content analysis, such as Data Loss Prevention (DLP), auto-labeling, and other connected experiences. This feature is crucial for organizations handling highly confidential data, ensuring compliance with strict regulations. Activating this parameter prevents AI tools like Copilot from accessing or analyzing sensitive content within your documents.

For detailed guidance on configuring this setting, check out the official Microsoft documentation: https://lnkd.in/eFQZCNqQ
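Advanced label settings like this one are applied through Security & Compliance PowerShell. A minimal sketch, assuming an existing label; "Highly Confidential" is a placeholder name, so substitute your own label name or GUID:

```powershell
# Security & Compliance PowerShell (ExchangeOnlineManagement module).
Connect-IPPSSession

# Add BlockContentAnalysisServices to an existing sensitivity label's
# advanced settings. Remember the caveat above: this can also affect
# DLP, auto-labeling, and other content-analysis services.
Set-Label -Identity "Highly Confidential" -AdvancedSettings @{
    BlockContentAnalysisServices = "True"
}

# Verify the setting was applied.
Get-Label -Identity "Highly Confidential" | Format-List Name, Settings
```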
-
When it comes to generative AI, organizations are extremely careful about security, compliance, and auditing. For Microsoft 365 Copilot, we provide several layers of control with existing capabilities within Microsoft 365. There are 3 pillars:

1. How Microsoft 365 Copilot works with sensitivity labels and encryption
- Copilot works together with Microsoft Purview sensitivity labels and encryption to provide an extra layer of protection.
- When the sensitivity label applies encryption, the user must have the EXTRACT and VIEW usage rights for Copilot to summarize the data.
- Items encrypted by the Azure Rights Management service without a sensitivity label still require EXTRACT or VIEW usage rights for the user for Copilot to summarize the data.
More details: https://lnkd.in/eDQFTGSZ

2. Oversharing controls organizations can use with Microsoft 365 Copilot
- Microsoft 365 includes controls to help you prevent oversharing data through Copilot: Restricted SharePoint Search and built-in SPO controls, SharePoint Advanced Management (included with the M365 Copilot license), and Microsoft Purview Information Protection (sensitivity labels and DLP).
More details: https://lnkd.in/e-4SCn2D

3. Where Copilot usage data is stored and how organizations can audit it
- Copilot usage data is stored in several places. Organizations can use the tools provided with Microsoft 365 to discover, audit, and apply retention policies.
- Use Microsoft Purview audit logs to identify how, when, and where Copilot interactions occurred and which items were accessed, including any sensitivity labels on those items (see the sketch after this list).
- Use Microsoft Purview eDiscovery to search for keywords in Copilot prompts and responses that might be inappropriate. Organizations can also include this info in an eDiscovery case to review, export, or put this data on hold for an ongoing legal investigation.
- Use Microsoft Purview Communication Compliance to detect and alert on inappropriate or risky Copilot prompts and responses, like personal data or highly confidential information.
- Use Microsoft Purview retention policies to keep a copy of deleted Copilot conversations so they're available to eDiscovery.
- If an organization has a compliance requirement to delete data after a specific period of time, it can use retention policies to automatically delete Copilot prompts and responses.
More details: https://lnkd.in/ezu7KZTm
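As an example of the audit-log pillar, a minimal Exchange Online PowerShell sketch that pulls recent Copilot interactions from the unified audit log; the AppIdentity field name inside AuditData is illustrative, so inspect the raw JSON in your own tenant:

```powershell
# Requires Exchange Online PowerShell and auditing enabled in the tenant.
Connect-ExchangeOnline

# Last 7 days of Copilot interaction records.
$results = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -RecordType CopilotInteraction -ResultSize 5000

# Each entry's AuditData is JSON: who prompted Copilot, when, and from where.
$results | ForEach-Object {
    $data = $_.AuditData | ConvertFrom-Json
    [pscustomobject]@{
        When = $_.CreationDate
        User = $_.UserIds
        App  = $data.AppIdentity   # illustrative field name; inspect AuditData
    }
}
```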
-
From my discussions with enterprise customers about their AI transformation journey, one key theme I see emerging is Copilot/agent governance! What used to be an IT-only problem now takes a critical seat in all boardrooms. Trust is paramount in getting your AI transformation right within your enterprise. Common questions raised are:

📃 Oversharing: Are we sure agents and Copilot are not exposing content that users are not supposed to see?
🈺 Business-critical sites: Is all our business-critical content excluded from organization-wide view in Copilot and Search, so that only authorized users can view it by visiting those sites?

This is where Microsoft 365 Copilot's native governance capabilities, powered by #SharePoint Advanced Management, come in handy. Check it out here: https://aka.ms/LearnSAM.

One capability that I want to highlight today is the Restricted Content Discovery (RCD) policy. Imagine you have dedicated #SharePoint sites for your clients, say Client-A and Client-B. For users who happen to work with both clients, you don't want Copilot/agents to intermingle the responses from those sites. Sounds like a complex problem to solve, right? Look no further: the RCD policy for #SharePoint sites is the solution. You simply enable it with one click or one PowerShell cmdlet (sketched below) for those Client-A and Client-B sites, and voila, those sites are excluded from organization-wide view, be it through a prompt in Copilot/agents or Enterprise Search. When users are within those sites and using site-level search or #SharePoint agents for those sites, they get the full benefit of AI, and it just works!

Reap the full power of AI while not over-exposing your business-critical content. Check out the quick video below that showcases the outcome in Copilot responses, before and after the RCD policy is applied to the HR site in a Contoso tenant. Learn more here: https://lnkd.in/ebMmk2vR
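For reference, the "one PowerShell cmdlet" route looks roughly like the sketch below. The parameter name is my best reading of the SharePoint Advanced Management documentation for Restricted Content Discovery, so verify it before running; "contoso" and the site URLs are placeholders:

```powershell
# ASSUMPTION: -RestrictContentOrgWideSearch is the RCD switch per the SAM docs;
# confirm against the current documentation before running in production.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Exclude both client sites from org-wide Copilot and Enterprise Search results.
"https://contoso.sharepoint.com/sites/Client-A",
"https://contoso.sharepoint.com/sites/Client-B" | ForEach-Object {
    Set-SPOSite -Identity $_ -RestrictContentOrgWideSearch $true
}
```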
-
OWASP LLM02:2025 is Sensitive Information Disclosure. Second on the list. Not because it's rare. Because it keeps happening in ways nobody expected.

Microsoft Copilot recently surfaced confidential content from users' Drafts and Sent Items in AI-generated summaries, despite sensitivity labels and data loss prevention (DLP) policies being in place. Microsoft clarified Copilot did not bypass access controls. That is actually the whole problem: it did not need to. It inherited the user's permissions and then applied its capabilities to data the user had long forgotten existed.

Your AI assistant isn't breaking in. It's using the master key you handed it to look under the floorboards of rooms you haven't entered in years.

→ LLMs surface what they can access, not just what they should share.
→ Netskope's 2026 report found 47% of enterprise AI usage runs through personal accounts, invisible to your security team.
→ The average organization now sees data policy violations from gen AI double year over year.

The fix is not mysterious: least-privilege access to data sources, output filtering before responses leave the model, and treating every prompt as an input channel that can be logged, stored, and reused. Your model inherits your permissions. Scope them like it will.

Are you auditing your AI's "master key" today, or waiting for it to open the wrong door? Let's discuss your strategy in the comments. 👇

#ae #agenticenterprise #AISecurity #OWASP #CyberSecurity #DataGovernance #Microsoft365 #ShadowAI #CISO #TechTrends2026 #InfoSec #EnterpriseAI #DLP #ZeroTrust
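Of those fixes, output filtering is the most tool-agnostic. A minimal sketch of the idea in PowerShell, scanning a model response before it is released; the patterns and function name are illustrative and no substitute for a real DLP engine:

```powershell
# Minimal output-filter sketch: scan an LLM response before it leaves the app.
function Test-SensitiveOutput {
    param([string]$Text)

    # Illustrative patterns only; a production system should use a DLP engine.
    $patterns = @{
        'US SSN'      = '\b\d{3}-\d{2}-\d{4}\b'
        'Credit card' = '\b(?:\d[ -]?){13,16}\b'
        'Email'       = '\b[\w.+-]+@[\w-]+\.[\w.]+\b'
    }

    foreach ($name in $patterns.Keys) {
        if ($Text -match $patterns[$name]) {
            return "Blocked: response appears to contain $name."
        }
    }
    return $null   # $null means the response is safe to release
}

# Usage: block the response if any pattern matches, otherwise pass it through.
$response = "Sure! The employee's SSN is 123-45-6789."
$verdict  = Test-SensitiveOutput $response
if ($verdict) { Write-Warning $verdict } else { $response }
```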