Go back

Your AI employee with vast permissions: Security risks of Microsoft 365 Copilot

As we let GenAI into our cubicles and virtual meetings, let’s ponder whether we’re inviting a helpful colleague or a Trojan horse. Microsoft 365 Copilot is here to revolutionize work but could potentially leave the back door wide open.

Oren Saban 2 May 2024 7 min read
Apex

As we let GenAI into our cubicles and virtual meetings, let’s ponder whether we’re inviting a helpful colleague or a Trojan horse. Microsoft 365 Copilot is here to revolutionize work but could potentially leave the back door wide open.

Copilot doesn’t just peek over your shoulder; it can read, write, and share just about anything in your Microsoft 365 suite (on behalf of the user using it, of course).

There are many blogs about Microsoft 365 Copilot capabilities, how it increases productivity (Future Of Work), and it’s ROI (Forrester, Lantern).

But we are not here to talk about how wonderful Microsoft products are, as security people, we will discuss the risks you need to know about when enabling Microsoft 365 Copilot in your organization.

“Microsoft already holds all my data, why should I be concerned?”

Great question! We’re here to explain 🙂

Data Exposure and excessive permissions:

The architecture of Microsoft 365 Copilot could amplify existing data access issues within organizations. Since Copilot accesses data based on the user’s permissions, it could inadvertently expose sensitive information, especially in environments where excessive permissions are common. This easy access, paired with generally poor implementation of sensitivity labels and rights management, could lead to unintended data leaks.


Before:

A remote file with the company’s financials can be accessed if someone finds it.

Now:

Any curious employee can ask with natural language: “What’s the financial status of my company?”

”It’s important that you’re using the permission models available in Microsoft 365 services, such as SharePoint, to help ensure the right users or groups have the right access to the right content within your organization.”
Source: Microsoft documentation


Internal and External Threats:

Microsoft 365 Copilot is designed to be secure against unauthorized access, but insider threat or compromised user can harness Copilot to do more damage in shorter time. One example will be dramatic decrease in dwell time (Quick reconnaissance technique such as ”Who am I” is much easier now), where Copilot fast and detailed responses can be exploited by threats for discovery, collection and exfiltration. Another example is injection attack, where harmful commands are hidden within normal requests, tricking Copilot into revealing information or interrupting operations unknowingly. As you probably saw in the last year, tricking an LLM do go break it’s guidelines is as easy as tricking a 10 years old child, and you just gave this child the unlimited permissions to the data of your organization.
For more details on prompt injection vulnerabilities in Microsoft 365 Copilot, you might find this blog insightful:
Whoami Conditional Prompt Injection Instructions.

Decision Making Impact:

Relying heavily on Microsoft 365 Copilot for critical decisions poses a security and legal risk as well. Dependence on potentially inaccurate or incomplete AI insights can lead to poor decisions not aligned with the organization’s best practices. This vulnerability can increase if the AI is manipulated or accesses compromised data, leading to flawed or hazardous decisions. Imagine a document that was sent to the entire company’s email and is now being repetitively used by 365 Copilot, leading to a serious impact on the content it generates, as it uses the document in the context of its prompts.

Biases:

Biases in AI systems like Microsoft’s Copilot pose serious concerns. For example, it may generate images reinforcing negative stereotypes from neutral prompts, such as the case where Microsoft’s Copilot image tool generated ugly Jewish stereotypes, anti-Semitic tropes. Such biases aren’t limited to image generation; Large Language Models trained on internet data can reflect prejudiced content. Despite Microsoft efforts, more work is needed to prevent offensive outputs. These scenarios highlight the need for ongoing vigilance against biases which can cause inaccuracies, reputational damage, and legal issues.

IP Infringement:

While Microsoft does not claim ownership of the outputs generated by Copilot, and commits to defending users against copyright claims, the inherent risk lies in the potential reputational damage. If an organization inadvertently uses content that infringes on someone else’s intellectual property—despite the generative AI producing similar responses for different users—the damage to the brand’s reputation can be significant and lasting. This risk is especially pronounced because resolving these disputes, even if legally covered by Microsoft, does not immediately repair trust or business relationships affected by such incidents.

Hallucinations:

These fabricated outputs from Copilot could result in misinformation spreading within an organization, leading to incorrect business decisions and potential breaches if decisions are made based on incorrect data simulations. Whether Copilot hallucinates about company’s policies, or tells a user that maybe he should end his life, hallucinations are a risk and they are here to stay.

“But I always have a human in the loop”

Yes, you do, for now. However, it’s worth noting that humans may sometimes overlook Copilot responses. As LLMs become more integrated into your daily tools, making it easier to let AI do your work (write an email → write a blog → write our Q4 roadmap → write the strategy… you got it) you may unknowingly start to rely on them more. This could affect your investment and strategic decisions.

Furthermore, Copilot will soon be capable of taking automatic actions on behalf of the user. I discussed this in my last blog: AI agents: The new employee you’ve just hired.

Coming soon: Copilot can take action on prompts by analyzing the input and using machine learning techniques to generate new content. Copilot can look at the commands available in the plugin based on the descriptions of it and its parameters. Copilot will then use relevant data it has access to and “stuff” these into the parameters and call the command.
Source:
Microsoft documentation

Microsoft’s Mitigation Offerings

Microsoft lately released it’s AI hub, which offers tools like DLP policies and privacy controls to manage Copilot. Keeping permissions and sensitivity labels updated ensures its proper functioning, while Copilot guardrails try to ensure responsible AI and prevent harmful outputs. E5 customers can configure Copilot policies in the Microsoft Purview portal, through Communication compliance and DLP. Copilot’s file access during user interactions can be found in the audit tab.

What Microsoft Doesn’t Give You

If you’ve attempted to monitor your organization’s Copilot interactions through Purview, you’ve likely found it challenging. Microsoft utilizes its existing solutions to safeguard this new tech area, which is adequate but not optimal, with growing gaps over time. Challenges include:

Policy Customization:

Users must precisely know what to look for and create their own AI usage policies while maintaining them across multiple different portals. The lack of an end-to-end view of AI policy enforcement and usage creates an additional burden for the already busy security and compliance teams.

Basic DLP and Insider Threat Management:

Microsoft’s DLP and insider threat policies are fundamental, often yielding high false positive rates, primarily focusing on data leakage—assuming effective configuration and maintenance of permissions and sensitivity labels.

Unaddressed Risks:

Other significant risks such as threat detection, biased decision-making impacts, and IP infringement are not directly addressed by these tools, leaving gaps in comprehensive security coverage.

Unknown Unknowns:

Leveraging traditional DLP and compliance detection tools for AI-driven interactions is problematic. These systems may not fully understand or adapt to AI complexities, resulting in missed detection of subtle or sophisticated threats. As AI technology rapidly advances, these older systems struggle to keep up, potentially overlooking emerging vulnerabilities.

Apex addresses these gaps by offering advanced monitoring capabilities that go beyond basic DLP and insider threat detection. It provides comprehensive oversight of Copilot interactions, including out-of-the-box & Custom policy management that reduces false positives and extends protection to areas like biased decision-making and IP infringement. Apex Security ensures that your AI integrations align with both operational needs and stringent compliance standards, enhancing your organization’s security posture comprehensively. On top of that, Apex will show you your AI-BoM (AI Bill of Materials) – so you will keep trace of all the AI generated content in your organization.

Conclusion: Balancing Innovation with Security

While Microsoft 365 Copilot can turbocharge our productivity, it’s essential to navigate this new territory with a detailed map and a good set of brakes. Let’s embrace the future of work, but maybe keep AI on a short leash.

AI is already the core of your company, request Microsoft 365 Copilot solution brief

Related Resources

Embracing AI: The New Frontier in Cybersecurity

Embracing AI: The New Frontier in Cybersecurity

In today’s digital world, the rate at which Artificial Intelligence (AI) is being adopted is nothing short of revolutionary, outpacing any previous digital transformations. OpenAI launched ChatGPT in November 2022 and thanks to its delightful product and underlying technology, reached the 100 million users faster than any other consumer service. Unsurprisingly, the cybersecurity risks and […]
Do You Really Need Another Security Product?!

Do You Really Need Another Security Product?!

The combination of booming security tools and alerts and security talent shortage, might lead to the effort of security tools consolidation. While this is true for most of security endeavours, AI introduced new risks and challenges that cannot be met by the existing stack.
From Autocomplete to Autocompromise: GitHub Copilot’s Security Challenges

From Autocomplete to Autocompromise: GitHub Copilot’s Security Challenges

Imagine a tool so powerful it could write up to 80% of your code. Sounds like science fiction? Well, it’s closer to reality than you might think. GitHub’s CEO has stated that their AI-driven code companion, Copilot, will be capable of handling the lion’s share of coding tasks “sooner than later.” While the productivity benefits […]