Commonwealth Bank sets out responsible AI blueprint
Commonwealth Bank has published a report detailing how it designs, deploys, and governs artificial intelligence across the organisation, positioning itself as the first Australian bank to set out its approach in this level of detail.
The document reviews the bank's AI work over the past decade and provides examples of systems used across its operations. It emphasises responsible practices and risk management as financial services firms face growing scrutiny over automated decision-making, data use, and model security.
The bank describes AI as a tool for customer protection and operational controls. Fraud and scam prevention is a key application area, alongside cyber security and the detection of abusive language in transaction descriptions. It also uses AI to personalise customer interactions and make them more relevant.
The report arrives as Australian businesses and governments debate AI's economic impact and the safeguards needed for its adoption. Industry estimates cited in the report suggest AI could add $45 billion to $115 billion a year to the Australian economy by 2030, underlining both the investment at stake and the incentives for companies to move quickly.
Policymakers have also set expectations for responsible adoption. The Australian Government's National AI Plan highlights the role of business in adopting the technology responsibly, putting large institutions such as banks under particular scrutiny given their access to sensitive personal and financial data.
Chief executive Matt Comyn said the bank published the report in response to stakeholder interest in how it uses AI and manages associated risks.
"We've heard that stakeholders want to better understand how AI is being used across the Bank and our approach to managing the risks associated with its adoption. This report outlines our progress and the safeguards we have in place to support responsible use."
The bank presents the report as a contribution to broader industry discussions, as lenders face shared challenges around governance, model oversight, and accountability. Regulators globally are paying closer attention to technology risk management and to the risk that automation can amplify harm at scale if controls fail.
Comyn also linked AI adoption to workforce development, pointing to expanded training so staff can work with AI systems in day-to-day roles. "Our people are central to delivering for our customers, which is why we're expanding skilling programs to build confidence and capability," he said.
Risk Focus
Alex Matthews, executive general manager and lead on the report, said the bank sees opportunity in AI but remains cautious about the risks. He put trust at the centre of the bank's approach, reflecting the sector's reliance on customer confidence and strong controls against financial crime and cyber threats.
"As Australia's largest bank, trust is fundamental to how we use AI. Our approach is focused on our risk management foundations and guided by our AI principles."
Matthews also pointed to the pace of change in banking technology. "Banking technology has evolved significantly in the past 30 years, and we are using AI to help reduce scams and fraud, protect against phishing, and deliver more tailored and relevant experiences for our customers," he said.
The focus on phishing and scams reflects a growing problem for Australian consumers and financial institutions. Banks have increased investment in detection tools and customer warnings as criminals use social engineering and impersonation tactics across messaging and email. AI-based approaches often rely on pattern detection and real-time alerts, though firms must also manage false positives that can disrupt legitimate payments and customer activity.
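The report does not describe Commonwealth Bank's detection models. As a generic illustration only, the trade-off between catching scams and disrupting legitimate payments can be sketched with a toy threshold-based scorer, where every signal, weight, and threshold below is hypothetical:

```python
# Toy sketch of threshold-based transaction risk scoring.
# This is NOT Commonwealth Bank's system; it only illustrates the generic
# pattern-detection / false-positive trade-off described in the article.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float    # payment amount in dollars
    new_payee: bool  # first payment to this account?
    odd_hour: bool   # initiated outside the customer's usual hours?


def risk_score(tx: Transaction) -> float:
    """Combine simple signals into a 0-1 risk score (illustrative weights)."""
    score = 0.0
    if tx.amount > 5000:
        score += 0.5
    if tx.new_payee:
        score += 0.3
    if tx.odd_hour:
        score += 0.2
    return min(score, 1.0)


def should_alert(tx: Transaction, threshold: float = 0.6) -> bool:
    # Lowering the threshold catches more scams but raises false positives,
    # which can hold up legitimate customer payments.
    return risk_score(tx) >= threshold


suspicious = Transaction(amount=9000, new_payee=True, odd_hour=True)
routine = Transaction(amount=120, new_payee=False, odd_hour=False)
print(should_alert(suspicious), should_alert(routine))  # True False
```

In practice the threshold choice is the control point: tightening it reduces missed scams at the cost of more blocked legitimate payments, which is the balance the article says firms must manage.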
The bank is also looking beyond internal controls and technology build-out. Matthews cited collaboration with government, universities, and business as part of its approach, aligning with national policy discussions regarding the development of domestic AI skills and research capacity.
"We recognise the importance of collaboration with government, universities and business to foster innovation, build capability and support the responsible adoption of AI. We will continue to refine our approach as the technology evolves."
Longitudinal Study
Alongside the report, Commonwealth Bank has launched a longitudinal research initiative with Melbourne Business School to track how Australians perceive, use, and trust AI in banking over time. The study signals a focus on customer sentiment and confidence as banks introduce more automation in both customer-facing services and back-office controls.
For large banks, trust goes beyond product design. It includes transparency around automated processes, how customer data is handled, the security of systems against manipulation, and governance structures that clarify accountability when AI-driven decisions affect customer outcomes.
The report adds to a growing body of disclosure in financial services, where institutions increasingly describe governance arrangements, internal principles, and oversight models as they deploy AI at scale. Commonwealth Bank said it will continue refining its approach as the technology changes and expectations from customers, regulators, and other stakeholders evolve.