How AI can help developers boost the security of new code

Mon, 24th Jun 2024

With competition growing and businesses focused on getting their offerings to market more quickly, pressure on software developers is rising. They are being pushed to generate code faster than ever while also achieving a polished end-user experience.

Unfortunately, this focus on speed is all too often coming at the expense of effective cybersecurity. Vulnerabilities are regularly making their way into software, including privilege escalations, back-door credentials, injection exposure and unencrypted data.
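
To make one of those categories concrete, the sketch below shows an injection-prone database query alongside the parameterised version a security-aware developer (or a well-vetted AI assistant) would produce. Python and SQLite are used purely for brevity; the article does not reference any particular language or codebase.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    cursor = conn.execute(
        f"SELECT id, email FROM users WHERE username = '{username}'"
    )
    return cursor.fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterised query keeps user input as data, never as SQL.
    cursor = conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    )
    return cursor.fetchone()
```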

Thankfully, there are areas where AI can clearly add value. An increasing number of developer teams are using AI remediation tools to make suggestions for quick vulnerability fixes throughout the software development lifecycle (SDLC).

Such tools can assist the defence capabilities of developers, enabling an easier pathway to a ‘security-first’ mindset. However, like any new and potentially impactful innovation, they also raise questions that teams and organisations need to explore. Three of the key queries are:

1. Will AI entirely remove security-related tasks from developers?
The short answer to this is ‘no’. If effectively deployed, AI tools will allow developers to gain a greater awareness of the presence of vulnerabilities in their products, and then create the opportunity to eliminate them.

However, while AI can detect some issues and inconsistencies, human insight and critical thinking are still required to understand how AI recommendations align with the larger context of a project as a whole. Elements such as design and business logic flaws, insight into compliance requirements for specific data and systems, and developer-led threat modelling practices are all areas in which AI tooling will struggle to provide value.

Also, developer teams cannot blindly trust the output of AI coding and remediation assistants. ‘Hallucinations’, or incorrect answers, are quite common, and typically delivered with a high degree of confidence. Humans must conduct a thorough vetting of all answers – especially those that are security-related – to ensure recommendations are valid, and to fine-tune code for safe integration.
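
To illustrate the kind of output that needs human vetting, consider the hypothetical AI-suggested ‘fix’ sketched below: it strips a few dangerous characters rather than parameterising the query. It looks plausible, but a reviewer grounded in secure-coding fundamentals should reject it in favour of the parameterised approach shown earlier. (The function and scenario are illustrative assumptions, not drawn from any specific tool.)

```python
import sqlite3

def find_user_ai_suggested(conn: sqlite3.Connection, username: str):
    # Hypothetical AI-suggested "remediation": blacklist a few characters.
    # This is brittle: it mangles legitimate input (e.g. O'Brien), can miss
    # driver- or encoding-specific bypasses, and leaves the underlying
    # string-interpolation flaw in place. The correct fix remains a
    # parameterised query.
    cleaned = username.replace("'", "").replace(";", "").replace("--", "")
    cursor = conn.execute(
        f"SELECT id, email FROM users WHERE username = '{cleaned}'"
    )
    return cursor.fetchone()
```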

Ultimately, there will always be a need for a ‘people perspective’ to anticipate and protect code from today’s sophisticated attack techniques. AI coding assistants can lend a helping hand on quick fixes and serve as formidable pair programming partners, but humans must take on the bigger-picture responsibilities of designating and enforcing security best practices, especially as they apply specifically to a company’s operations and risk profile.

2. How should training evolve to maximise the benefits of AI remediation?
Training needs to evolve to encourage developers to pursue multiple pathways for educating themselves on AI remediation and other security-enhancing AI tools, alongside comprehensive, hands-on lessons in secure coding best practices. It is certainly worth developers learning how to use tools that enhance efficiency and productivity; however, it is critical that they understand how to deploy them responsibly within their tech stack. The question is always how to ensure AI remediation tools are leveraged to help developers excel, rather than used to overcompensate for a lack of foundational security training.

Developer training should also evolve by implementing standard measurements for developer progress, with benchmarks to compare over time how well they’re identifying and removing vulnerabilities, catching misconfigurations and reducing code-level weaknesses. If used properly, AI remediation tools will help developers become increasingly security-aware while reducing overall risk across their organisation.

The software development landscape is changing all the time, but it is fair to say that the introduction of AI assistive tooling into the standard SDLC represents a rapid shift to essentially a new way of working for many software engineers. However, it also perpetuates a long-standing problem: poor coding patterns can now be introduced, and potentially exploited, more quickly and at greater volume than ever before.

3. How can DevSecOps providers add value to teams that use AI?
This question boils down to innovation. Teams will thrive with solutions that expand the visibility of issues and resolution capabilities during the SDLC, yet do not slow down the software development process.

AI cannot step in to do security for developers, just as it is not entirely replacing them in the coding process itself. No matter how many more AI advancements emerge, these tools will never deliver foolproof answers about vulnerabilities and fixes. They can, however, perform critical roles within the greater picture of a total ‘security-first’ culture – one that depends equally on technology and human perspectives. Once teams undergo the required training and on-the-job knowledge-building, they will indeed find themselves creating products swiftly, without compromising on security.

By keeping these questions in mind, development teams will be well prepared to harness the power of AI tools while at the same time ensuring effective security remains in place. AI will clearly take on a much more important role in software development, but will work with humans rather than replace them.
