
Exclusive: Shannon Murphy of Trend Micro on securing AI risks


Cybersecurity leaders must stop treating artificial intelligence (AI) as simply another threat vector and start engaging across departments to build shared understanding, according to Shannon Murphy, Senior Manager, Global Security and Risk Strategy at Trend Micro.

Sitting down with TechDay at the Trend Micro World Tour 2025 event in Sydney, Murphy, who has worked across consumer, financial and enterprise technologies over the past decade, highlighted the importance of visibility when managing the risks posed by generative AI in enterprise environments.

"Shadow IT is not new, and now we have shadow AI," she said. "Developer teams and business units are experimenting with AI tools like ChatGPT or Claude without oversight, introducing new attack surfaces."

Murphy explained that while much of the conversation around AI security revolves around technical solutions, success depends on communication between security, operations and development teams. "You can't technology your way out of every single problem," she said.

"You have to know what people are using, and then have the conversation."

The Trend Micro World Tour, which spans more than a dozen cities worldwide, gives Murphy "insights into different regional priorities."

"In Los Angeles, security teams are influenced by the media sector, while in Nashville it's healthcare," she said. "Each city brings its own flavour."

On the topic of vulnerabilities, Murphy highlighted how internal AI applications like Microsoft Copilot often lack granular access controls, leading to accidental or malicious data exposure. "People are prompting these tools and getting responses they shouldn't, simply because access permissions are too flat," she explained.
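
Murphy doesn't describe a specific fix, but the gap she points to can be illustrated with a short sketch: an entitlement check in front of an internal assistant, so documents a user isn't cleared to read never reach the model. The Document structure, group names and helper functions below are hypothetical, not Copilot's or Trend Micro's API:

    # Illustrative sketch only: enforce per-user entitlements before an
    # internal AI assistant can quote a document back to the requester.
    from dataclasses import dataclass

    @dataclass
    class Document:
        doc_id: str
        sensitivity: str          # e.g. "public", "internal", "restricted"
        allowed_groups: set[str]  # groups entitled to read this document

    def readable_by(user_groups: set[str], doc: Document) -> bool:
        """A document is in scope only if the user holds a matching group."""
        return bool(user_groups & doc.allowed_groups)

    def filter_context(user_groups: set[str], candidates: list[Document]) -> list[Document]:
        """Drop anything the requester is not entitled to see, so 'flat'
        permissions never leak restricted content through a prompt."""
        return [d for d in candidates if readable_by(user_groups, d)]

    # Example: a user in 'finance' cannot pull an HR-restricted file into a prompt.
    docs = [
        Document("q3-forecast", "internal", {"finance"}),
        Document("salary-review", "restricted", {"hr"}),
    ]
    print([d.doc_id for d in filter_context({"finance"}, docs)])  # ['q3-forecast']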

This lack of guardrails is also creating new avenues for attacks like prompt injection and model manipulation.

While such attacks remain relatively rare because of the sophistication they require, Murphy warned that organisations must prepare for them now.

"Prompt injection and model manipulation can be detected with the right tools, but only if you have visibility and identity-level detection," she said. "You need good access control and anomaly detection to pick up when something unusual is happening."

Murphy also stressed that security teams must adopt a "shift left" mindset - embedding security early in development workflows. "Whether you're using open-source code or developing your own application, security needs to be part of the design phase," she said.

A cornerstone of Murphy's approach is proactive security through AI.

She described Trend Micro's use of generative AI to map out likely attack paths before they occur, based on threat intelligence and misconfiguration data.

"It's almost like time travel," she said. "We can show you where an attacker would most likely strike and help you close those gaps before they're exploited."

The visualisation of attack paths enables better prioritisation, particularly when vulnerabilities occur in high-value areas. "A medium-severity vulnerability tied to your CFO is a bigger issue than a high-severity one on an irrelevant asset," she said.
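
One hedged way to make that prioritisation concrete is to weight raw severity by asset value, so a medium finding on a high-value asset outranks a critical one on a throwaway machine. The weights below are invented for illustration:

    # Illustrative risk scoring: severity alone doesn't rank findings;
    # multiply by how much the affected asset matters. Weights are made up.
    SEVERITY = {"low": 1, "medium": 4, "high": 7, "critical": 10}
    ASSET_VALUE = {"test-vm": 1, "staff-laptop": 5, "cfo-laptop": 10}

    def risk_score(severity: str, asset: str) -> int:
        return SEVERITY[severity] * ASSET_VALUE[asset]

    findings = [("medium", "cfo-laptop"), ("high", "test-vm")]
    ranked = sorted(findings, key=lambda f: risk_score(*f), reverse=True)
    print(ranked)  # [('medium', 'cfo-laptop'), ('high', 'test-vm')] - 40 beats 7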

To build an effective AI governance framework, Murphy recommends three steps: gain visibility, establish relationships with business units, and develop an AI use policy. She even created a "Mad Libs"-style template to help teams draft such policies.

"Talk to your teams. Ask what they're using and why. Match that with your data," she said. "Only then can you start defining what acceptable use looks like."

A major challenge, she noted, is that many businesses still lack visibility into how AI tools are being used internally. "It's a prioritisation issue," she said. "Security teams are still dealing with misconfigurations, patch delays and legacy systems. That firefighting leaves little time for forward planning."

Murphy called on security leaders to reframe how they speak to the business.

"We talk about shrinking the attack surface, but to the business, that's their value surface—the tools they use to drive revenue," she said.

Instead of creating roadblocks, security should act as an advisor. Murphy sees the best results when responsibility is shared: "Security teams should guide, but the business unit should own the risk. That's how you'll get buy-in."

Collaboration is also essential in the age of agentic AI - automated systems that take initiative and interact with other agents. Murphy stressed that AI models must be monitored continuously, just like any other system.

"Continuous monitoring is critical, not just for AI, but across your entire asset estate," she said. "You need to assess, prioritise and mitigate - on a loop."

This is especially important as AI moves from development into production: penetration testing, red teaming and strong application security must be part of the QA process.

Ethics also come into play. Murphy pointed to examples of AI systems inheriting bias from existing datasets, such as in hiring tools.

"Sometimes AI helps you find flaws in your own processes," she said.

For teams aiming to be innovative while maintaining strong security, she advised using techniques like double anonymisation. "That way, sensitive data never gets exposed, even when leveraging cloud services."
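
The interview doesn't define double anonymisation; one common reading, sketched below as an assumption, is two independent pseudonymisation passes with keys held by different custodians, so the cloud service never sees raw identifiers and no single key-holder can reverse them:

    # Assumed interpretation of 'double anonymisation': two independent keyed
    # hashing passes, held by different parties, before data leaves the estate.
    import hashlib
    import hmac

    def pseudonymise(value: str, key: bytes) -> str:
        """Keyed hash so the mapping can't be reversed without the key."""
        return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

    KEY_INTERNAL = b"held-by-security-team"     # illustrative keys, not real
    KEY_ESCROW = b"held-by-separate-custodian"

    def double_anonymise(identifier: str) -> str:
        first = pseudonymise(identifier, KEY_INTERNAL)
        return pseudonymise(first, KEY_ESCROW)

    # The cloud service only ever sees the twice-hashed token.
    print(double_anonymise("jane.doe@example.com"))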

Ultimately, Murphy said the organisations that succeed will be those that embrace collaboration across security, development, data science and business leadership. "There's no reason the CISO shouldn't be talking to the chief marketing officer," she said.

And the shift to proactive, secure-by-design thinking must start now. "The most productive applications are also the most well-protected," she said. "Because when you protect the application, you protect the experience."

And her advice for businesses still scrambling to catch up is simple: "You can't protect what you can't see."
