Gartner finds 34% of organisations are using AI security tools
Wed, 20th Sep 2023

According to a new survey from Gartner, 34% of organisations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the accompanying risks of generative AI (GenAI). Over half (56%) of respondents said they are also exploring such solutions.

The Gartner Peer Community survey was conducted from April 1 to April 7, 2023, among 150 IT and information security leaders at organisations where GenAI or foundation models are in use, planned for use, or being explored.

Survey respondents also said they are implementing or using privacy-enhancing technologies (PETs) (26%), ModelOps (25%) and model monitoring.

“IT and security and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI TRiSM (trust, risk and security management),” says Avivah Litan, distinguished vice president analyst at Gartner. “AI TRiSM manages data and process flows between users and companies who host generative AI foundation models, and must be a continuous effort, not a one-off exercise, to continuously protect an organisation.”

While 93% of IT and security leaders surveyed said they are at least somewhat involved in their organisation’s GenAI security and risk management efforts, only 24% said they own this responsibility.

Among respondents who do not own responsibility for GenAI security and/or risk management, 44% said ultimate responsibility for GenAI security rests with IT, while 20% said it sits with their organisation's governance, risk and compliance department.

The risks associated with GenAI are significant, continuous and constantly evolving. Survey respondents indicated that undesirable outputs and insecure code are among their top-of-mind risks: 58% are concerned about incorrect or biased outputs, and 57% about leaked secrets in AI-generated code.

“Organisations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, causing human or property damage,” says Litan. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organisations to make poor business decisions.”

Meanwhile, Gartner says its Gartner for Cybersecurity Leaders offering equips security leaders with tools to help reframe roles, align security strategy to business objectives, and build programs that balance protection with the organisation's needs.

According to Gartner, organisations must change how they drive secure employee behaviours: legacy curriculum-based, awareness-centric programs are no longer effective. Instead, CISOs must embrace a human-centric approach that encourages secure, risk-informed decision-making and acts to minimise risk exposure. They must also map the organisation's cybersecurity programs and take a systematic approach to information security, using Gartner IT Score for security and risk management to identify what to prioritise and where and how to improve, says the company.