Nearly half exclude cybersecurity teams from AI projects, survey says
A recent ISACA study has found that nearly half of companies exclude their cybersecurity teams from the development and implementation of AI solutions.
The 2024 State of Cybersecurity survey report from ISACA, a professional association focused on advancing trust in technology, found that only 26 percent of cybersecurity professionals or teams in Oceania are involved in formulating policy governing the use of AI technology. Additionally, 45 percent of these professionals are not involved in the development, onboarding, or implementation of AI solutions.
The annual study, sponsored by Adobe, gathered responses from more than 1,800 cybersecurity professionals worldwide on workforce issues and the threat landscape. It found that security teams in Oceania primarily use AI to automate threat detection and response, with 36 percent reporting its use in this area against a global average of 28 percent. Endpoint security is another significant focus, with 33 percent of regional respondents applying AI there compared with 27 percent globally. Other uses include automating routine security tasks and fraud detection, though at slightly lower rates than the global averages.
Jamie Norton, an Australia-based cybersecurity expert and member of ISACA's Board of Directors, emphasised the importance of involving cybersecurity professionals in AI policy development. "ISACA's findings reveal a significant gap—only around a quarter of cybersecurity professionals in Oceania are involved in AI policy development, a concerning statistic given the increasing presence of AI technologies across industries," said Mr Norton. "The integration of AI into cybersecurity and broader enterprise solutions must be guided by responsible policies. Cyber professionals are essential in this process to ensure that AI is implemented securely, ethically and in compliance with regulatory standards. Without their expertise, organisations are exposed to unnecessary vulnerabilities."
In a bid to assist cybersecurity professionals in engaging more effectively with AI policy creation and integration, ISACA has published a paper titled "Considerations for Implementing a Generative Artificial Intelligence Policy." This resource, together with ISACA's certification programmes, aims to equip cybersecurity teams with the necessary tools and insights.
"Cybersecurity teams are uniquely positioned to develop and safeguard AI systems, but it's important that we equip them with the tools to navigate this transformative technology," Norton added. "ISACA's AI policy paper offers a valuable roadmap, addressing critical questions such as how to secure AI systems, adhere to ethical principles and set acceptable terms of use."
ISACA has also been developing additional AI resources, including a white paper on the EU AI Act, which provides guidance on the regulation's requirements ahead of their application from August 2026. The white paper, "Understanding the EU AI Act: Requirements and Next Steps," outlines steps organisations should take, such as instituting audits, tracing AI activities, and designating an AI lead to oversee AI implementations and strategy.
Another ISACA resource, "Examining Authentication in the Deepfake Era," focuses on authentication at a time when deepfakes are proliferating, weighing the advantages and challenges of AI-driven adaptive authentication. While AI can bolster security by tailoring authentication to individual behaviours, it also introduces risks of its own, including susceptibility to adversarial attacks and bias, underscoring the need for careful attention to ethical and privacy concerns.
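To make the idea of adaptive authentication concrete, the sketch below shows one simple way such a system might score behavioural signals and step up the login challenge as risk grows. This is a minimal, hypothetical illustration, not an implementation from the ISACA paper: the signal names, weights and thresholds are invented, and real deployments typically replace hand-tuned scoring with learned behavioural models, which is exactly where the adversarial and bias risks noted above come in.

```python
# Minimal sketch of risk-based (adaptive) authentication.
# All signals, weights and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool      # device previously seen for this account
    usual_location: bool    # geolocation matches the user's history
    typical_hours: bool     # login time fits the user's routine
    velocity_anomaly: bool  # e.g. "impossible travel" since last login

def risk_score(ctx: LoginContext) -> int:
    """Combine simple weighted signals into a 0-100 risk score."""
    score = 0
    if not ctx.known_device:
        score += 30
    if not ctx.usual_location:
        score += 25
    if not ctx.typical_hours:
        score += 15
    if ctx.velocity_anomaly:
        score += 30
    return score

def required_challenge(ctx: LoginContext) -> str:
    """Step up the authentication challenge as the risk score grows."""
    score = risk_score(ctx)
    if score < 20:
        return "password"              # low risk: single factor suffices
    if score < 60:
        return "password+otp"          # medium risk: require a second factor
    return "deny_and_verify_identity"  # high risk: block and re-verify

# Example: an unrecognised device logging in from an unfamiliar location.
ctx = LoginContext(known_device=False, usual_location=False,
                   typical_hours=True, velocity_anomaly=False)
print(required_challenge(ctx))  # -> "password+otp" (score 55)
```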