How Generative Artificial Intelligence is reshaping cloud security

Fri, 1st Mar 2024

As the software development process has evolved in recent years, the approach taken to security has had to change. Now, the rapid rise of generative AI tools is leading to these changes becoming even more extensive.

Much of the change in security processes stems from the increasing shift of applications to cloud platforms. While many legacy applications have required alterations to operate in the cloud, the latest generation – known as 'cloud native' – was built from day one for a hosted environment.

In the past, software applications tended to be built using compact supply chains and based primarily on proprietary code. Newer applications, however, are usually written by larger numbers of people and make use of open-source code. They are also designed from the ground up to run on cloud platforms and use containers as the means of delivery.

For security teams, this trend has necessitated a shift from a focus on traditional cloud security to what is known as Application Security Posture Management (ASPM). ASPM is designed to achieve strong security for cloud-native applications regardless of the cloud platform on which they are running.

Gathering security data
For this reason, when security teams think about modern applications, the cloud becomes almost a subset of the overall mix. The vulnerabilities and risks associated with these applications therefore require context drawn from different parts of the overall application stack.

To fully understand potential vulnerabilities, details need to be gathered from a range of locations, including the cloud, and examined holistically. This examination should include techniques such as software composition analysis and static code analysis, which together can help build an overall risk profile.
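As a purely illustrative sketch, the Python snippet below shows one way findings from software composition analysis and static code analysis might be merged into a simple per-component risk profile. The finding fields, severity weights and scoring formula are assumptions made for the example rather than any standard methodology.

```python
# A minimal sketch: combining software composition analysis (SCA) and static
# code analysis (SAST) findings into one risk profile. Severity weights and
# the scoring approach are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str      # "sca" or "sast"
    component: str   # package or source file affected
    severity: str    # "low", "medium", "high" or "critical"

SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def risk_profile(findings: list[Finding]) -> dict[str, int]:
    """Aggregate a weighted risk score for each affected component."""
    profile: dict[str, int] = {}
    for f in findings:
        profile[f.component] = profile.get(f.component, 0) + SEVERITY_WEIGHTS[f.severity]
    # Highest-risk components first
    return dict(sorted(profile.items(), key=lambda item: item[1], reverse=True))

if __name__ == "__main__":
    findings = [
        Finding("sca", "requests==2.19.0", "high"),       # vulnerable open-source dependency
        Finding("sast", "payments/views.py", "critical"),  # injection flaw in proprietary code
        Finding("sast", "payments/views.py", "medium"),
    ]
    for component, score in risk_profile(findings).items():
        print(f"{component}: risk score {score}")
```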

Security teams need to collect telemetry and event logs from across the entirety of their cloud infrastructure and applications. This can be challenging when the mix of technologies in use is constantly changing and applications are evolving. The time taken to do this should also not come at the expense of higher-level tasks such as risk evaluation and vulnerability assessments.
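The sketch below illustrates the kind of normalisation step this involves: log entries arriving in different shapes from different cloud sources are mapped into one shared record format so they can be stored and queried together. The two source formats and the field names are illustrative assumptions, not any particular vendor's schema.

```python
# A minimal sketch of normalising event logs from different cloud sources
# into a common record format. Field names and source formats are invented
# for illustration.
def normalise(raw: dict, source: str) -> dict:
    """Map a raw log entry from a given source into a shared schema."""
    if source == "cloud_audit":
        return {
            "timestamp": raw["eventTime"],
            "actor": raw["userIdentity"],
            "action": raw["eventName"],
            "resource": raw["resourceId"],
            "source": source,
        }
    if source == "app_container":
        return {
            "timestamp": raw["ts"],
            "actor": raw.get("user", "unknown"),
            "action": raw["msg"],
            "resource": raw["container"],
            "source": source,
        }
    raise ValueError(f"unknown log source: {source}")

if __name__ == "__main__":
    events = [
        normalise({"eventTime": "2024-03-01T09:15:00Z", "userIdentity": "deploy-bot",
                   "eventName": "UpdateFunction", "resourceId": "billing-api"}, "cloud_audit"),
        normalise({"ts": "2024-03-01T09:16:02Z", "msg": "login_failed",
                   "container": "billing-api-7f9c"}, "app_container"),
    ]
    for event in events:
        print(event)
```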

The role of generative AI
As organisations increasingly succeed at gathering all this information and storing it in a common format in a single location, the opportunity arises to put a rapidly developing array of artificial intelligence (AI)-powered analysis tools to work.

These generative AI tools will allow security teams to ask questions of the data in plain English, without first needing a deep knowledge of traditional query languages or of the underlying data structures.
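As a rough illustration, the snippet below shows how a plain-English question might be answered over centralised log data. The generate_query() function is a stand-in for whatever generative AI service an organisation actually uses; it is hard-coded here so the example runs on its own, and the log schema is an assumption carried over from the earlier sketch.

```python
# A minimal sketch of plain-English querying over centralised security logs.
# generate_query() is a placeholder for a real generative AI call; the log
# records and schema are illustrative assumptions.
LOGS = [
    {"timestamp": "2024-03-01T09:16:02Z", "action": "login_failed", "actor": "unknown", "resource": "billing-api"},
    {"timestamp": "2024-03-01T09:17:40Z", "action": "login_failed", "actor": "unknown", "resource": "billing-api"},
    {"timestamp": "2024-03-01T10:02:11Z", "action": "UpdateFunction", "actor": "deploy-bot", "resource": "billing-api"},
]

def generate_query(question: str):
    """Placeholder for a generative AI call that turns a plain-English
    question into a filter over the shared log schema."""
    # In practice the model would interpret any question; this stub only
    # recognises the single example question used below.
    if "failed logins" in question.lower():
        return lambda event: event["action"] == "login_failed"
    return lambda event: True

if __name__ == "__main__":
    question = "Show me all failed logins against the billing API"
    query = generate_query(question)
    for event in (e for e in LOGS if query(e)):
        print(event["timestamp"], event["resource"], event["action"])
```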

As the tools continue to evolve, the value they will deliver to security teams will continue to increase. Their ability to spot threats and malicious activity within massive amounts of log and telemetry data will make them an invaluable support resource for teams tasked with keeping cloud-based application stacks operational and secure.

Some teams are also making use of generative AI tools for the creation of computer code. Rather than starting with a blank screen, developers can describe the code they need and have the tools write it.

While this ability will be very appealing, it does come with some constraints. The current generation of AI tools has been found to create inaccurate outputs or 'hallucinations'. If this occurs when code is being generated, the result could be serious security flaws.

To achieve the best results, software developers should use generative AI tools to assist with code generation but retain oversight and carefully evaluate all outputs. In this way, the velocity of code generation can be increased without compromising overall security.
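The short example below illustrates the kind of review this implies. The 'generated' function is an invented example of the insecure output a code assistant could plausibly produce (SQL built by string interpolation, which is open to injection); the reviewed version is the parameterised form a developer would accept instead.

```python
# A minimal sketch of reviewing AI-generated code before accepting it.
# Both functions are illustrative; the point is the review step, not the API.
import sqlite3

def generated_lookup(conn, email):
    # As generated: builds SQL by string interpolation, vulnerable to injection.
    return conn.execute(f"SELECT id FROM users WHERE email = '{email}'").fetchone()

def reviewed_lookup(conn, email):
    # After developer review: a parameterised query removes the injection risk.
    return conn.execute("SELECT id FROM users WHERE email = ?", (email,)).fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES ('cfo@example.com')")
    print(reviewed_lookup(conn, "cfo@example.com"))
```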

Security teams also need to remember that generative AI tools are being used by cybercriminals to support their attacks. From crafting convincing phishing emails to writing malicious code, these tools are allowing criminals to launch attacks faster and target victims more effectively.

Despite these trends, generative AI has much to offer both software developers and security teams. By staying abreast of developments and understanding how the tools can be best used, they will be well placed to add maximum value to their organisations.