Picture this. You switch on Copilot, your team starts asking questions, and within days it’s drafting emails, summarising meetings, and pulling data from your files.
It works well. Then someone asks it something it probably shouldn’t answer, and it does, because nothing in your environment told it not to.
That’s not a Copilot problem. It’s a data governance problem that was already there before Copilot arrived. AI didn’t create the gap; it just moved through it faster than any person ever could.
This is the conversation I find myself in with business leaders more and more in 2026. It’s rarely about whether AI is worth using; most are already well past that question. The harder question is what happens when AI scales and the controls underneath it haven’t.
Microsoft Purview and Microsoft Defender are the tools that close that gap, and for most Australian businesses, closing it before AI scales any further is exactly where attention should be right now.
AI Moves Fast Through Whatever You Already Have
Permissions, access, and why they matter more now
Most businesses have permission settings that were configured years ago and haven’t been reviewed since. Staff who’ve left still have access. Files that should be restricted are sitting in shared drives. Sensitive data isn’t labelled.
When a person accesses those files, it’s slow. They navigate folders, open things one at a time, and usually stay close to what they need. When AI accesses your environment, it doesn’t work that way. It can read across your entire Microsoft 365 environment quickly, and it will surface whatever it can reach.
The risk isn’t the AI going rogue. It’s the AI doing exactly what it was built to do, inside an environment that wasn’t ready for that level of access. This plays out across a number of common Copilot security misconceptions that businesses run into when they’re early in their AI rollout.
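To make “whatever it can reach” concrete, here’s a minimal sketch of the kind of check an IT team can run today: it calls the Microsoft Graph permissions endpoint for each file in a document library and flags anything shared with the whole organisation or via an anonymous link. The access token and drive ID are placeholders you’d supply for your own tenant, and a real review would walk every site and folder rather than a single library.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Files.Read.All>"  # placeholder: obtain via your usual auth flow
DRIVE_ID = "<drive-id>"                       # placeholder: the document library to review

headers = {"Authorization": f"Bearer {TOKEN}"}

def list_files(drive_id):
    """Yield items in the root of a drive (one level deep, for brevity)."""
    url = f"{GRAPH}/drives/{drive_id}/root/children"
    while url:
        page = requests.get(url, headers=headers).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # follow paging when the library is large

def overshared(drive_id, item_id):
    """Return permissions on a file that reach beyond named individuals."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions"
    perms = requests.get(url, headers=headers).json().get("value", [])
    flagged = []
    for perm in perms:
        link = perm.get("link") or {}
        # Links scoped to the whole organisation, or to anyone holding the URL,
        # reach far wider than a named individual and wider than most owners intend.
        if link.get("scope") in ("organization", "anonymous"):
            flagged.append((link.get("scope"), perm.get("roles")))
    return flagged

for item in list_files(DRIVE_ID):
    wide = overshared(DRIVE_ID, item["id"])
    if wide:
        print(f"{item['name']}: {wide}")
```

None of this requires Purview. It’s ordinary Graph access, which is part of the point: the exposure is already visible if someone goes looking for it.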
So what does a prepared environment look like? It starts with two tools most Microsoft 365 customers already have access to: Purview and Defender.
What Is Microsoft Purview?
The policy layer for your data
Microsoft Purview is Microsoft’s data security and compliance platform. Think of it as the rulebook for your data. It classifies what you have, labels it by sensitivity, sets rules around how it can be shared or processed, and flags activity that breaks those rules.
For businesses running AI, Purview’s most important job is stopping sensitive information from flowing where it shouldn’t. It can prevent personally identifiable information, financial records, or custom data categories from being included in AI prompts or shared outside your business.
That boundary matters a lot when AI is running across thousands of interactions a day.
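To be clear, Purview evaluates these policies inside Microsoft 365 itself; you don’t write this logic by hand. But if “a rule that stops sensitive data being included in a prompt” feels abstract, the toy sketch below shows the shape of it: a few simplified stand-ins for sensitive information types, checked before content is allowed to leave a boundary. Purview’s real classifiers use checksums, keyword evidence and confidence levels rather than bare patterns like these.

```python
import re

# Simplified stand-ins for built-in sensitive information types.
# Illustrative only; these will over- and under-match compared with real classifiers.
PATTERNS = {
    "credit_card_number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "au_tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def dlp_style_check(text: str) -> list[str]:
    """Return the names of any sensitive-information patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Summarise this email: card 4111 1111 1111 1111, reply to jane@example.com"
matches = dlp_style_check(prompt)
if matches:
    # In Purview this is where a DLP rule action applies: block, warn, or audit.
    print("Blocked by policy. Matched:", ", ".join(matches))
else:
    print("Prompt allowed")
```

The difference in production is that the policy lives centrally, applies across Exchange, SharePoint, Teams and Copilot interactions, and is enforced whether or not the person (or agent) sending the prompt ever thinks about it.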
What’s changed in 2026
Microsoft has been moving quickly on Purview’s AI capabilities. New features rolled out between March and May 2026 allow security teams to run AI-powered investigations directly from audit logs and data loss prevention (DLP) alerts. Instead of manually piecing together what happened after a suspicious event, your team can pull every related file and action into one view and run a deep analysis in one place.
Purview is also now integrated with the Microsoft 365 Admin Centre through the Copilot Control System, giving leaders a direct view of AI-related data risk without needing to jump between platforms.
One of the most significant changes rolling out this month is the extension of Insider Risk Management to AI agents, including Copilot Studio and Agent 365. For the first time, businesses can create policies that detect or block risky activities carried out by AI agents, not just people. Until now, governance policies were built around human behaviour.
That’s changed, and it matters because AI agents can act faster and at greater scale than any individual ever could.
There’s also a new capability in preview called the Data Security Posture Agent, which proactively surfaces credentials and sensitive data buried across your data estate before anyone else finds them. It’s essentially AI being used to find your own exposure before an attacker does.
If your Purview configuration hasn’t been reviewed since before AI was in the picture, that’s the first thing worth fixing. Getting Purview, Defender, and Microsoft Entra working together as a connected system is what gives you real visibility and control as AI scales, rather than three tools running in parallel without talking to each other.
Which brings us to Defender, and why it’s become a lot more relevant to business leaders in 2026.
What Is Microsoft Defender?
Threat detection built for an AI environment
Microsoft Defender is Microsoft’s threat protection platform. It monitors your endpoints (devices), cloud workloads, identities, and data for signs of attack or unusual behaviour. It’s been a core part of enterprise security for years, but in 2026 it’s been extended to cover AI-specific risks.
The most significant recent update is Defender’s ability to detect and respond to unusual behaviour from AI agents. AI agents are automated tools that act on behalf of users, and they’re becoming a standard part of many business workflows.
They’re useful, but they’re also a new attack surface. If an agent is manipulated through a malicious prompt, starts accessing data outside its intended scope, or behaves in a way it shouldn’t, Defender can identify and block that in near real-time.
This is what Microsoft described at RSA 2026 as an agentic AI security strategy. The idea is straightforward: if AI is going to act at speed and scale, your security needs to match it.
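Defender’s detection is built from identity signals, behavioural baselines and content analysis, not a lookup table, but the underlying idea of “outside its intended scope” is easy to picture. In the hypothetical sketch below, every agent has an explicit allowlist of the sites it is supposed to read, and anything else is flagged for review rather than silently allowed.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """A hypothetical, simplified scope policy for one AI agent."""
    name: str
    allowed_sites: set[str] = field(default_factory=set)

def review_access(policy: AgentPolicy, requested_site: str) -> str:
    # A real platform correlates identity, behaviour history and data sensitivity;
    # this only illustrates the principle of giving an agent an explicit boundary.
    if requested_site in policy.allowed_sites:
        return "allow"
    return f"flag: {policy.name} requested {requested_site}, outside its declared scope"

invoicing_agent = AgentPolicy("invoicing-agent", {"sites/Finance", "sites/Suppliers"})
print(review_access(invoicing_agent, "sites/Finance"))  # allow
print(review_access(invoicing_agent, "sites/HR"))       # flagged for investigation
```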
There’s also a practical action item this month. Microsoft is retiring older versions of the Defender for Endpoint mobile app on iOS and Android from 10 May 2026.
Any device still running an app version released before February 2026 will lose cloud connectivity and threat protection updates from that date. It’s worth confirming with your IT team that every device across your business is running a supported version before that deadline hits.
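If your devices are enrolled in Intune, one way to get a quick read on this, rather than chasing screenshots from each phone, is to pull the detected-apps inventory from Microsoft Graph and look at the Defender entries. The sketch below assumes an access token with the DeviceManagementManagedDevices.Read.All permission; deciding which versions fall before the February 2026 cut-off is still a manual check against Microsoft’s release notes.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-DeviceManagementManagedDevices.Read.All>"  # placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

# Tally Defender app versions reported across Intune-enrolled devices.
defender_versions = {}
url = f"{GRAPH}/deviceManagement/detectedApps"
while url:
    page = requests.get(url, headers=headers).json()
    for app in page.get("value", []):
        if "defender" in (app.get("displayName") or "").lower():
            version = app.get("version") or "unknown"
            defender_versions[version] = defender_versions.get(version, 0) + app.get("deviceCount", 0)
    url = page.get("@odata.nextLink")  # follow paging across the full inventory

for version, devices in sorted(defender_versions.items()):
    print(f"{version}: {devices} device(s)")
```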
Identity is still the front line
The April 2026 Defender update also reinforced something consistent across the last few years: identity-based attacks are still the most common way businesses are compromised. Credential theft, token hijacking, and AI-generated phishing are all increasing.
Defender now includes autonomous identity threat detection. It can evaluate high-volume identity alerts, separate real threats from noise, and give your security team clear verdicts to act on rather than a flood of notifications to sift through.
There’s also a predictive capability that anticipates where an attacker might move next and tightens controls before that path is taken. Automatic attack disruption status is now visible in the incident page Activities tab, so your team can see exactly what Defender stopped and why, without having to dig for it.
How Purview and Defender Work Together
These tools are strongest when they’re connected, not run as separate products.
Purview sets the rules. Defender watches for anything that breaks them. Microsoft Entra handles identity, confirming who is accessing what and whether that access looks right. When all three are integrated, you get a clear, connected picture: what data you have, who’s reaching it, through which tools, and whether something unusual is happening.
For businesses already using or planning to build AI agents, this architecture becomes even more important. The way Entra Agent ID ties into Purview controls is a practical example of how identity, governance, and threat detection need to work as one system rather than three separate conversations.
That connected approach is what separates businesses that adopt AI confidently from those that adopt it and quietly hope nothing goes wrong.
What This Means for Australian Business Leaders
The scale of AI adoption has changed the stakes
As of late April 2026, Microsoft reported more than 20 million paid Copilot users worldwide, with weekly engagement at a similar level to Outlook. AI isn’t a pilot anymore. It’s in daily workflows at scale, and the data governance and security decisions you make now directly affect how safely that plays out in your environment.
With Microsoft committing A$25 billion to AI infrastructure in Australia over the coming years, the pace of adoption is only going to increase. Australian businesses that have their governance in order will be better placed to move quickly. Those that don’t will carry more risk as that growth continues.
Privacy Act obligations don’t pause for AI
Under Australia’s Privacy Act, personal data must only be accessible to authorised people for authorised purposes. If your AI environment is surfacing data it shouldn’t, because permissions are too broad or sensitivity labels haven’t been applied, you may have a compliance issue before you’ve noticed anything is wrong.
Purview and Defender aren’t just good security practice. For businesses handling personal or sensitive data, they’re part of how you stay on the right side of your legal obligations.
Your data foundations matter
Many of the configuration gaps we see in Purview start well before the security settings. They start with data that hasn’t been properly structured, classified, or cleaned up inside Microsoft 365 in the first place.
If you’re not sure what your AI tools can currently reach, that’s the right place to start.
Four Questions Worth Asking Your IT Team This Week
You don’t need to be technical to start this conversation. Here are four questions that will quickly tell you whether your environment is keeping up:
1. Are our sensitivity labels and DLP policies current? If they haven’t been reviewed since before Copilot was active in your environment, they likely need a refresh.
2. Is Defender configured to monitor AI agent activity? Microsoft’s new agentic security capabilities are available now. Your team should know whether they’re active.
3. Are all mobile devices running the latest Defender for Endpoint app? Older versions on iOS and Android lose threat protection from 10 May 2026. This is a deadline worth checking before the end of this week.
4. Can I see AI-related data risk in our Admin Centre? Purview’s integration with the Copilot Control System should give you this view. If it’s not visible, something isn’t connected correctly.
Getting Started
If you’re already on Microsoft 365, you likely have access to much of what’s described here. Purview and Defender are included in many Microsoft 365 licences. For most businesses, the gap isn’t access. It’s configuration and knowing what to look for.
At CG TECH, we help Australian businesses review their Microsoft security posture, configure Purview and Defender for their AI environment, and build the governance foundations that support confident AI adoption.
If you’d like to understand where your business stands, reach out and let’s start with a conversation.
About the Author
Carlos Garcia is the Founder and Managing Director of CG TECH, where he leads enterprise digital transformation projects across Australia.
With deep experience in business process automation, Microsoft 365, and AI-powered workplace solutions, Carlos has helped businesses in government, healthcare, and enterprise sectors streamline workflows and improve efficiency.
He holds Microsoft certifications in Power Platform and Azure and regularly shares practical guidance on Copilot readiness, data strategy, and AI adoption.