CFOtech Australia - Technology news for CFOs & financial decision-makers

The real risk isn't AI outperforming you; it's leaders thinking it can

Every second story I read about AI in the past few weeks focuses on how the technology is coming for our jobs. 

From Anthropic CEO Dario Amodei saying the technology could wipe out 50% of entry-level white-collar jobs in the next five years, to the founders of the startup Mechanize aiming to automate all human work, there are plenty of catastrophic perspectives floating around.

The worry is understandable, but I wonder if we're having the wrong conversation.

There's a growing belief — or narrative — that we're on the verge of Artificial General Intelligence, or AGI. That we're just a few breakthroughs away from creating systems that can reason, plan, create, and think like us. From this, the logic flows: if machines can think like us, they can replace most of us.

But despite the predictions, there's a significant possibility that true AGI won't arrive as soon as AI companies claim, if it arrives at all. We shouldn't forget that these businesses and leaders have both an incentive and a natural bias to promote this narrative, and that it's difficult to even define what AGI actually is.

Today, AI is not general, it's not conscious, and it's not truly reasoning. It's an amazing and highly advanced pattern-recognition system that excels at mimicking human output, but it doesn't understand what anything means.

This should help us frame some important points about this conversation on the end of human work:

  • AI systems excel at jobs or tasks involving "cognitive muscle." Just as machines replaced our physical muscles in manual labour, AI can clearly handle much of the heavy cognitive lifting we do in our jobs today. It excels at tasks that require cognition but don't involve, for example, emotional intelligence, values, intent or complex relationships, particularly with humans. This is the part of our jobs that's at risk.

  • The type of AI that created the current boom — generative AI based on massive-scale computing — is imprecise by design and challenging to apply in many professional settings. AI agents can help minimise these problems, but those based on generative AI can't completely eliminate the underlying flaws of the Large Language Models they use. They need humans to implement and supervise them.

  • Even if AI creates a productivity wave and decreases the hours needed for many current jobs and tasks, we're still uncertain what companies will do with this surplus time. While some will opt to cut employees, many will use AI to improve and expand what they do. As a species, we're very good at inventing new things to do.

The real risk we're facing is believing too much in the current narrative that AI systems are more intelligent and better than us — or will be very soon. If we accept this without real evidence and grant them the autonomy they shouldn't have, we could create a genuinely problematic situation: allowing machines that aren't truly intelligent to dictate our decisions.

If we allow AI agents based on massive LLMs to make autonomous decisions we don't understand and can't investigate (another flaw of current LLMs), we risk creating a dystopian future — not one where a superior intelligence eliminates us, but one where we're harmed by systems merely simulating it. That would be very stupid.
