Wednesday, March 26, 2025
Microsoft’s AI Agents Aim to Make Cybersecurity Teams’ Work Easier
If you peek behind the curtain at a network defender’s workflow, you might see hundreds—if not thousands—of emails marked as potential spam or phishing. It can take hours to sift through the messages to detect the most urgent threats. When a data breach occurs, figuring out what vital information was stolen is a critical—but often challenging—step for investigators.
Today, Microsoft announced a set of artificial intelligence agents aimed at making cybersecurity teams’ work a little easier. That could be good news for the many businesses large and small that use Microsoft 365 for their email, cloud storage, and other services.
Agentic AI is a buzzy new term for AI systems that can take actions on behalf of a human user. One step up from generative AI chatbots, AI agents promise to do actual work, such as executing code or performing web searches. OpenAI recently launched Deep Research mode for ChatGPT, which can conduct multi-step web searches to research complex topics or make shopping recommendations for major purchases. Google has been rolling out its own AI agents built off the latest version of Gemini.
A year ago, Microsoft launched Security Copilot, which introduced AI tools to its suite of security products: Purview, Defender, Sentinel, Intune, and Entra. Starting in April, users can opt in to having AI agents do specific tasks for them.
Microsoft says the agents can help streamline the work of security and IT teams, which are facing both a labor shortage and an overwhelming volume of threats.
Take phishing emails. Microsoft says it detected 30 billion phishing emails targeting customers in 2024. At a company level, security teams often have to individually evaluate every potential phishing email and block malicious senders.
A new phishing triage agent inside Defender scans messages flagged by employees to ensure that the most urgent threats are addressed first. Among the tasks the agent performs are reviewing messages for language that suggests a scam and checking for malicious links. The most dangerous emails go to the top of a user’s queue. Other messages might be deemed false positives, or simple spam.
From there, the IT team can review a detailed description of the steps the agent took. The AI agent will suggest next steps—such as blocking all inbound emails from a domain associated with cybercriminals—and the human user can click a button to instruct the agent to perform those tasks.
If an email was mistakenly marked as spam, there’s a field for the user to explain in natural language why that email should not have been flagged, helping train the AI to be more accurate going forward.
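The triage loop described above — score flagged messages, surface the most urgent, and treat zero-signal messages as likely false positives — can be sketched as a toy heuristic. Everything here (the phrase list, the suspicious-domain list, the scoring weights) is invented for illustration; it is not Microsoft's detection logic.

```python
# Illustrative sketch only: a toy phishing-triage queue. The heuristics
# and weights are made up for demonstration, not Microsoft's actual logic.
from dataclasses import dataclass, field

# Hypothetical signals an agent might check for.
SCAM_PHRASES = ("verify your account", "urgent action required", "password expired")
SUSPICIOUS_TLDS = (".xyz", ".zip")

@dataclass
class FlaggedEmail:
    sender: str
    subject: str
    body: str
    links: list = field(default_factory=list)

def triage_score(email: FlaggedEmail) -> int:
    """Heuristic urgency score: scam-style language and suspicious links raise it."""
    score = 0
    text = (email.subject + " " + email.body).lower()
    for phrase in SCAM_PHRASES:
        if phrase in text:
            score += 2
    for link in email.links:
        if any(tld in link.lower() for tld in SUSPICIOUS_TLDS):
            score += 3
    return score

def triage_queue(emails: list) -> list:
    """Sort flagged emails most-urgent-first; a score of 0 suggests a false positive."""
    return sorted(emails, key=triage_score, reverse=True)
```

In a real product the scoring would come from trained models rather than keyword lists, but the shape of the workflow is the same: rank what employees flag, put the dangerous messages at the top, and let a human confirm before anything is blocked.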
Another AI agent helps prevent data loss—for example, looking for suspicious activity that might indicate an insider threat—and in the event of a data breach, helps investigators understand what information was stolen, whether a trade secret or customer credit card numbers.
Other AI agents ensure new users and apps have the right security protocols in place, monitor for vulnerabilities, or analyze the evolving threat landscape a company faces. In each case, a user can look under the hood to see what steps the AI agent took in its investigation. The user can make corrections, or with the click of a button, tell the agent to complete the tasks it suggested, such as turning on multi-factor authentication for certain users, or running a software update.
So far, the tools work across Microsoft services such as Outlook, OneDrive, and Teams, though integrations with third-party tools such as Slack or Zoom could be offered down the line. The tools also don't take remediation steps without human approval. In the future, some of those tasks could be automated as well.
BY JENNIFER CONRAD @JENNIFERCONRAD