Exclusive: Arctic Wolf builds out agentic security
Mon, 11th May 2026
Arctic Wolf is sharpening its AI-led security operations strategy as cyber attacks become faster, more iterative and harder for conventional response models to contain.
Machine speed
Arctic Wolf protects more than 10,000 organisations globally and has built its strategy around the view that AI is changing both external threats and internal security operations. Its Aurora Platform analyses more than 9 trillion security events each week, giving the company visibility across threat activity and customer environments.
"Arctic Wolf's mission is to end cyber risk. That might sound ambitious, but it reflects how we think about the problem. Not as something to manage indefinitely, but something organisations should be able to systematically reduce over time," said Dan Schiappa, President, Technology and Services, Arctic Wolf.
In Australia, Arctic Wolf works with mid-market organisations including Arts Centre Melbourne, Parramatta Eels and Brighton Grammar School. It employs around 70 local staff and has expanded from managed detection and response into a broader security operations platform spanning detection and response, exposure management and threat intelligence.
"The focus isn't on handing over more tools; it's on providing a fully functioning capability that helps organisations reduce risk in a measurable way," added Schiappa.
The discussion comes as Anthropic continues expanding its enterprise AI footprint globally and across the Asia-Pacific region, including Australia, where adoption of generative AI platforms such as Claude has accelerated. The company has increasingly positioned itself around enterprise-grade and safety-focused AI systems, with cybersecurity emerging as a growing area of focus amid industry concerns about agentic AI capabilities.
In particular, Anthropic's Project Glasswing points to a broader shift in the threat landscape, rather than a narrow issue tied to one model or vendor. The key change is the shrinking time between research, testing and exploitation.
"What Project Glasswing really highlights is how quickly everything is shifting. If you think about how attacks have traditionally worked, there were always natural limits. Even well-resourced attackers had to spend time researching vulnerabilities, testing what would work and refining their approach as they went. That created friction in the system, and defenders, in many cases, relied on that time to catch up," added Schiappa.
"With models like Claude Mythos, you're looking at a scenario where parts of that process can happen almost instantly. AI can test multiple paths, learn what works, and adapt far faster than a human team could," added Schiappa.
Schiappa said Claude Mythos should not be treated as an isolated example. The UK government's AI Security Institute has been testing multiple models, he said, and found many capable of supporting similar activity. While zero-day exploits account for less than 5% of cyber attacks, adversarial AI could accelerate the full range of attack types, not just zero-days.
"So the story here isn't more AI in attacks. It's that the pace of the entire attack lifecycle is accelerating. Things are now happening at machine speed, versus human speed," added Schiappa.
Shadow AI
AI is also creating new risks inside organisations as employees adopt tools outside formal governance and security controls.
Attackers can now use AI to research targets, craft phishing emails and move through an environment faster and with less effort. They can generate more convincing lures, test multiple approaches and adjust based on what works.
"So it's not just that attacks are faster. They are more iterative. Instead of a single campaign, it becomes a continuous process of testing and refining," added Schiappa.
Employees are also using public or unapproved AI systems to speed up work. That can include entering sensitive data into external tools or relying on systems that sit outside security monitoring.
"From a security perspective, that creates blind spots. These tools often sit outside traditional monitoring, so organisations don't always know what data is being shared or how it's being used," added Schiappa.
Many Australian organisations are already facing this issue because AI adoption is moving faster than governance and visibility. Security teams are dealing with two connected pressures: accelerated external threats and an expanding internal attack surface.
"Everything that Anthropic is doing recently is showing us that AI adoption is moving quickly, and that organisations are ready to integrate it into day-to-day operations," added Schiappa.
"What's interesting is the shift toward more agentic systems. AI that doesn't just assist but can take action. That's a meaningful change from earlier generations of tools," added Schiappa.
Agentic SOC
Arctic Wolf's work with Anthropic forms part of its AI strategy, but its focus is not general-purpose AI. The company is applying models to security operations, where precision, customer context and reliability are critical.
"Working with Anthropic gives us access to advanced models, but the key is how we apply them. We build in guardrails, fine-tune the models to create our own SLMs, ensure deterministic behaviour where possible, and keep humans in the loop," added Schiappa.
Arctic Wolf's curated data set allows it to bring customer context into AI decision-making, improving accuracy compared with general frontier models. The approach uses AI to analyse large volumes of data, prioritise signals and accelerate investigations, while human analysts remain involved.
"In cybersecurity, you can't afford unpredictable outcomes. So rather than replacing people, we use AI to handle the scale and speed, analysing large volumes of data, prioritising signals, accelerating investigations, while experienced analysts remain in the loop to fine-tune the models at scale, and collectively drive the super intelligent platform for the most deterministic outcomes in the industry," added Schiappa.
At RSA, Arctic Wolf announced updates around the Aurora Platform and its Agentic SOC. The main theme was a shift in how security operations are run.
"With the Aurora platform and the Agentic SOC, we're not just adding automation to existing processes; we're redesigning the SOC with a new AI-led operating model. AI takes the lead role in driving investigations and workflows, rather than just supporting them," added Schiappa.
The Aurora Platform ingests and analyses trillions of telemetry events each week. Through its Security Operations Graph, that data is enriched with business context, historical case data and human-validated expertise to support faster decisions.
A key component is the Swarm of Experts. Oversight Agents guide and govern investigations. A Swarm Orchestrator breaks cases into workstreams and assigns agents. Authoritative Agents handle core functions such as triage, investigation and response. Process Agents manage enrichment, evidence gathering, indicator checks and SOAR actions.
"The key difference is that this work happens in parallel, rather than moving through a traditional Tier 1, Tier 2, or Tier 3 model. Instead of a human analyst working through each step sequentially, these agents can investigate, correlate and build a picture of what's happening in parallel," added Schiappa.
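The parallel workstream model described above can be sketched in outline. The following is a minimal, hypothetical illustration only; the agent names, confidence scores and threshold are invented for the example and do not reflect Arctic Wolf's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# All agent functions below are illustrative stand-ins, not a real SOC API.

def triage_agent(case):
    # Classify severity from the alert type (stand-in logic).
    severity = "high" if case["alert"] == "lateral-movement" else "low"
    return {"step": "triage", "confidence": 0.9, "severity": severity}

def enrichment_agent(case):
    # Attach business context for the affected asset (stand-in logic).
    return {"step": "enrichment", "confidence": 0.8,
            "context": f"asset {case['host']} tagged critical"}

def indicator_agent(case):
    # Check indicators of compromise against threat intel (stand-in logic).
    return {"step": "indicators", "confidence": 0.4, "matches": []}

# Evidence gate: workstreams below this confidence escalate to a human analyst.
CONFIDENCE_THRESHOLD = 0.7

def orchestrate(case):
    """Run all workstreams in parallel, then gate on validated evidence."""
    agents = [triage_agent, enrichment_agent, indicator_agent]
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(lambda agent: agent(case), agents))
    escalate = [f["step"] for f in findings
                if f["confidence"] < CONFIDENCE_THRESHOLD]
    return {"findings": findings, "escalate_to_human": escalate}

result = orchestrate({"alert": "lateral-movement", "host": "srv-042"})
```

The point of the sketch is the shape of the workflow: independent workstreams run concurrently rather than passing through sequential tiers, and any workstream whose evidence falls below the gate is routed to a human rather than acted on automatically.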
Human oversight
For security teams, the operational change is intended to reduce the time analysts spend triaging alerts and gathering context. Analysts instead work from incidents that have already been enriched, with agents collecting evidence in parallel.
"If an agent has enough validated evidence, it proceeds. If not, it stops and escalates to another agent or human. That shifts the role of the analyst from a tier operator to an expert-driven collaboration between AI and analyst," added Schiappa.
The model also aims to improve visibility into shadow AI. Because Arctic Wolf already ingests telemetry across customer environments, the platform can begin to identify unauthorised AI usage patterns and bring them into security investigations.
Trust in AI for security operations depends less on broad confidence in AI and more on how systems behave in practice. Organisations need predictable systems, explainable decisions and clear human accountability.
"Our view is that fully autonomous security isn't where most organisations should be heading. The risks around false positives, missed context or unintended actions are too high," added Schiappa.
"So the model we've taken is deliberately human-in-the-loop. AI does the heavy lifting, but humans remain accountable," added Schiappa.
AI should now be treated as a new attack surface, alongside cloud and endpoints. Risks include sensitive data being entered into models, output manipulation and the use of tools outside approved systems. For SMBs, the main challenge is often visibility, because these risks may not appear in traditional security tools.
"The priority isn't to slow down AI adoption because that's already happening all around us, but to make sure it's being introduced with the right foundations in place," added Schiappa.