CFOtech Australia - Technology news for CFOs & financial decision-makers

Exclusive: Cloudera urges banks to rethink data for AI

Mon, 25th Aug 2025

Artificial intelligence is sweeping through financial services. But despite the headlines and hype, many projects are stalling before they deliver measurable results.

In a recent interview with TechDay, Adrien Chenailler, Global Director for AI Industry Solutions and Financial Services at Cloudera, warned that banks risk falling behind if they fail to combine technical governance with practical business focus.

"There were definitely some very interesting numbers in terms of security, in terms of adoption of AI in the Australian workforce," he said, referencing recent Ipsos findings. "Half of them are using AI, but more than three quarters still think there is a potential threat. We need to bring back some trust."

What are the three big challenges?

Chenailler outlined three key challenges facing banks and insurers: business return on investment, technical complexity and regulatory compliance.

"CIOs and CEOs are still wondering, what's my ROI on these initiatives? If the first use cases are not successful, you're not likely to invest in new ones," he explained.

From a technical standpoint, fragmented data environments remain a major hurdle. "AI is catching banks in the middle of their migration to the cloud. Their data landscape is scattered across on-premises and multiple cloud providers, and this makes generating value out of AI a lot more difficult."

The third challenge is regulation. "Every country, and particularly Australia, has requirements around data residency, data resilience and personal data protection. Multinational corporations have to comply with many different rules. It's not only about more regulation - it's also about doing the right thing with your data."

Governance as a differentiator

Australia's Privacy Act reforms will demand greater transparency, particularly around automated decision-making. Chenailler believes this is an opportunity rather than a burden.

"Going beyond regulation, it's about having the right level of observability on your data flows across the organisation," he said. "If you have good data lineage, if you have good access policies, and clarity on where workloads are running, then you will be compliant with most regulations."

He pointed to Europe's Digital Operational Resilience Act as a positive example. "Lineage is such a good practice. In tech companies, every dataset could be linked. In banks, because of legacy, sometimes we don't have this. That's really where it needs to converge."

The danger of overhyping AI agents

Analysts predict that over 40 per cent of agentic AI projects could be abandoned by 2027. Chenailler is unsurprised.

"Almost 10 years ago, people were already saying 80 per cent of AI projects were dropped before reaching production. It's not inevitable, it's just a working principle. If you look at the hype, you're not going to make any business sense of it."

Instead, he sees "boring" use cases as the real winners. "By boring, I mean not the shiny chatbot facing customers. I mean background operational processes, compliance checks, nitty-gritty details that now have a chance to become automated. That's really my key motto: if you want ROI, look at the processes themselves."

Real-time data is non-negotiable

The shift to instant payments has transformed risk management. "Forty years ago, a cheque would take days to clear. Now, money moves in 0.1 seconds. Fraud detection is no longer a process you can take time over," Chenailler said.

But the same applies to customer engagement. "Are you providing the right recommendation on their app? If the customer is at the airport, do they have travel insurance? Real-time data is needed not just for safety, but to enable the right experience."

He cited Singaporean banks as leaders in fraud prevention and personalised recommendations, with insurers in Australia exploring AI to speed up claim processing. "We are no longer waiting two months for a claim to be approved," he said.

Building trust and transparency

Yet while AI is now embedded in banking, public awareness lags behind. Only 38 per cent of Australians know which products or services are using AI.

"Every time you open your banking app, at least four AI systems kick in. AI is best when it's transparent to the user. But where there's a risk of confusion, we must be clear," Chenailler said. "We should disclose when someone is interacting with an AI, but for other machine learning systems, it's not always relevant for the user."

Upcoming privacy laws will require banks to explain how automated decisions are made. "Step one is good data lineage. Step two is explainable AI. And then you need to think, what kind of user experience do you want? A declined mortgage has a bigger impact than a marketing pop-up, so the level of explanation should match."

Striking the balance

As financial institutions expand automation in areas from fraud detection to claims, human oversight remains vital.

"Fraud patterns change incredibly fast. You retrain a model, and one month later a new pattern emerges. That's why there are always people looking at the system, making sure it's making the right decisions," Chenailler said.

For him, the future lies not in more regulation, nor in glamorous front-end chatbots, but in responsible data strategy and operational change.

"The way to enterprise AI is the same as the way to enterprise data management," he explained. "It's not just about AI, it's about rethinking your entire data strategy. That's the real challenge - and the real opportunity."