Securing the Future: Tackling Information Security Challenges in the Age of AI

Generative Artificial Intelligence (Gen AI) technologies such as Copilot have revolutionized the way we work, offering unprecedented capabilities in content creation, data analysis, and automation. However, as with any transformative technology, these advancements come with significant information security concerns. With an effective information security strategy in place, however, these challenges and risks can be managed in a controlled manner.

Information Security Issues with Gen AI 

Data oversharing 

Gen AI systems like Copilot require vast amounts of data to function effectively. This data often includes sensitive and confidential information, which raises significant privacy concerns. Unauthorized access to such data can lead to data breaches, identity theft, and other malicious activities. You cannot rely on "security by obscurity" when using Gen AI tools: they will surface all the data a user has access to, so it is necessary to get control of information sharing and permissions.

To mitigate data oversharing, conduct regular access reviews. These ensure that only authorized personnel have access to sensitive data and that access rights are reviewed and updated as necessary.

  • Regular Audits: Conduct periodic audits to review who has access to sensitive data and whether their access is still justified. 

  • Role-Based Access Control (RBAC): Implement RBAC to ensure that users only have access to the data necessary for their roles. This minimizes the risk of unauthorized access. 

  • Access Revocation: Promptly revoke access for users who no longer require it, such as former employees or those who have changed roles within the organization. 
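The three practices above can be illustrated with a short sketch. This is a conceptual example only, not how Microsoft 365 implements RBAC; the role names, resources, and functions are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical role-to-permission mapping; the roles and resources
# below are illustrative, not taken from any specific product.
ROLE_PERMISSIONS = {
    "hr_manager": {"employee_records", "payroll_reports"},
    "sales_rep": {"customer_contacts", "price_lists"},
    "contractor": {"project_docs"},
}

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)

def can_access(user: User, resource: str) -> bool:
    # A user may access a resource only if one of their roles grants it
    return any(resource in ROLE_PERMISSIONS.get(role, set())
               for role in user.roles)

def revoke_role(user: User, role: str) -> None:
    # Promptly remove a role, e.g. when someone changes position
    user.roles.discard(role)

alice = User("Alice", {"hr_manager"})
print(can_access(alice, "payroll_reports"))  # True
revoke_role(alice, "hr_manager")             # role change or offboarding
print(can_access(alice, "payroll_reports"))  # False
```

A periodic audit then amounts to walking through every user's roles and asking whether each grant is still justified; the smaller and more role-driven the permission set, the less there is for a Gen AI tool to overshare.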

Intellectual Property Risks and Data Leaks

The use of Gen AI can inadvertently expose proprietary and confidential business information. AI models trained on proprietary data could potentially generate outputs that reveal intellectual property, trade secrets, or other sensitive information, thereby risking competitive advantage. 

Microsoft 365 Copilot should be seen as one of the safer tools to use, since the information is stored in your tenant and Microsoft does not use your data or prompts to train its AI models.

One of the most effective ways to manage information security within Gen AI systems is to use sensitivity labels. Sensitivity labels allow organizations to classify data based on its level of sensitivity and apply appropriate security controls.

Documents created with Copilot automatically inherit the sensitivity label of the source material; together with Data Loss Prevention policies, this provides good protection against unintentional leaks of sensitive information.
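Conceptually, inheritance of this kind can be sketched as picking the most restrictive label among the sources. The label names and their ordering below are illustrative assumptions, not a product API:

```python
# Hypothetical label ranking: higher number = more restrictive.
LABEL_RANK = {
    "Public": 0,
    "Internal": 1,
    "Confidential": 2,
    "Highly Confidential": 3,
}

def inherited_label(source_labels: list[str]) -> str:
    # A generated document takes the most restrictive label
    # among the documents it was produced from
    return max(source_labels, key=lambda lbl: LABEL_RANK[lbl])

sources = ["Internal", "Confidential", "Public"]
print(inherited_label(sources))  # Confidential
```

The point of the sketch is simply that protection follows the content: if any source is classified, the output is classified at least as strictly, and DLP policies can then act on that label.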

Data Integrity (obsolete insights) 

The integrity of the data used by Gen AI is critical. Inaccurate, outdated, or manipulated data can lead to incorrect outputs and decisions, which can have serious repercussions for businesses. Ensuring the accuracy and reliability of input data is a fundamental challenge when introducing Gen AI tools in your organization.

Retention labels help manage the lifecycle of data, ensuring that it is retained only for as long as necessary and disposed of securely. 

  • Retention Policies: Establish retention policies that define how long different types of data should be retained. These policies should be based on legal, regulatory, and business requirements. 

  • Automated Enforcement: Use automated systems to apply retention labels and enforce retention policies. This reduces the risk of non-compliance and ensures consistency. 

  • Secure Disposal: Implement secure data disposal methods to ensure that data is irretrievably destroyed when it is no longer needed. This prevents unauthorized access to obsolete data. 
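Automated enforcement of the policies above ultimately boils down to comparing a document's age against the period its retention label defines. The labels and periods in this sketch are hypothetical examples; real periods depend on your legal and business requirements:

```python
from datetime import date, timedelta

# Illustrative retention periods (in days) per retention label.
RETENTION_DAYS = {
    "financial_record": 7 * 365,  # e.g. a legal bookkeeping requirement
    "project_note": 2 * 365,      # e.g. an internal business decision
}

def is_expired(label: str, created: date, today: date) -> bool:
    # Once the retention period has lapsed, the document is due
    # for secure disposal
    return today - created > timedelta(days=RETENTION_DAYS[label])

print(is_expired("project_note", date(2020, 1, 1), date(2025, 1, 1)))      # True
print(is_expired("financial_record", date(2020, 1, 1), date(2025, 1, 1)))  # False
```

Running a check like this on a schedule is what keeps obsolete content from lingering where Copilot can surface it as if it were current.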

Another good approach is to archive old data to a location where Copilot and other AI tools do not have access.

Data Governance is key to a successful implementation of AI 

The integration of Gen AI technologies like Copilot into business processes offers immense opportunities for innovation and efficiency. However, it also necessitates a robust approach to information security and Data Governance. By implementing sensitivity labels, conducting regular access reviews, and applying retention labels, organizations can mitigate the information security risks associated with Gen AI. These measures ensure that data privacy, integrity, and regulatory compliance are maintained, thereby safeguarding the organization’s most valuable asset—its information. 


About the author: Stefan Agervald is a Digital Worklife Strategy Consultant at Nexer with nearly 20 years of experience in Information Management. He specialises in Microsoft 365, guiding clients on how to make the most of modern collaboration tools with a focus on information security and generative AI.
