Why AGI Fears Are Mismatched With Reality

This is The AI Advent Series, a five-week run of practical reflections on how AI is actually being used day to day. Each piece looks at one theme that keeps coming up in my work, in our bootcamps, and in real conversations with people trying to make sense of this technology.

Introduction

If there’s one topic that inflates the drama around AI more than any other, it’s AGI — artificial general intelligence. You only need to scroll social media for ten minutes before someone’s predicting robot uprisings, machines taking over, or the end of human decision-making altogether.

The trouble is, those fears are built on a very shaky understanding of how these systems actually work. Generative AI is impressive, though it’s still nowhere near human-level intelligence. And more importantly, the leap from today’s tools to true AGI isn’t a gentle evolutionary step. It’s a chasm.

During my interview this week, I was asked about my thoughts on AGI and “superintelligence”, and you could almost see the relief on the interviewer’s face when I explained why the fears simply don’t line up with the reality of what’s possible today. We need balanced conversations, not sci-fi fuel.

So let’s unpack why AGI fear has drifted so far from the real constraints.


Generative AI Isn’t Close To Human Intelligence

The most important point is the simplest: Generative AI doesn’t think. It doesn’t understand. It doesn’t choose. It predicts.

Every answer it gives is the result of matching patterns, not forming intentions.
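
To make that concrete, here’s a deliberately tiny sketch of what “prediction, not intention” looks like. The vocabulary and probabilities below are made up for illustration and bear no resemblance to a real model’s scale, though the spirit of the loop is the same: look up what tends to come next, sample it, repeat.

```python
import random

# A toy "language model": for each context word, a hand-written probability
# table over possible next words. Real models learn billions of statistics
# like these from training data; every number here is invented.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"quietly": 0.8, "down": 0.2},
    "ran": {"away": 0.9, "home": 0.1},
}

def generate(start: str, max_words: int = 5) -> str:
    """Sample one word at a time from the table. No plan, no goal, no intent."""
    words = [start]
    for _ in range(max_words):
        probs = next_word_probs.get(words[-1])
        if probs is None:          # no statistics for this context, so stop
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```

There is nothing in that loop that could want a different outcome or decide to do something else. Scale it up by billions of parameters and you get fluency, not intent.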

It can draft content, summarise, explain, and generate ideas, though all of that is happening within the boundaries of training data and user input. There is no internal world. No awareness. No self-direction.

People often forget that something feeling clever isn’t the same as it being clever. Much of what looks intelligent is simply the model stitching together concepts at high speed.

Human intelligence is messy, emotional, embodied, and inconsistent. AI is none of those things.


Compute Limits Are a Brick Wall, Not a Small Hurdle

Here’s the part hardly anyone talks about, though it’s absolutely crucial.

To simulate even a fraction of human cognition, you need mind-bending amounts of compute and electricity. And we’re nowhere near having that.

A great example is what’s happening with AI video models. A single 15-second Sora clip uses so much compute that OpenAI has had to throttle how many people can create them. Not because they don’t want to — because the infrastructure simply can’t support it.

If generating a short video strains the infrastructure of one of the best-resourced AI companies on the planet, the idea of simulating a full human mind borders on fantasy.

We’d need:

  • Far more powerful chips
  • Far more efficient training architectures
  • Power sources that don’t exist yet
  • Infrastructure that hasn’t been invented

This isn’t a matter of “another few years of progress”. It’s several scientific breakthroughs away.


Superintelligence Isn’t Just Unlikely, It’s Logistically Impossible (For Now)

Superintelligence, the idea of AI vastly surpassing human intellect, rests on the assumption that machine intelligence keeps scaling as you pour in more compute and data.

But it doesn’t scale that way. Biological brains are absurdly efficient. We run on around 20 watts of power. Current AI models need data centres. Even the best ones rely on:

  • Thousands of GPUs
  • Continuous fine-tuning
  • Manual human oversight
  • Enormous energy consumption

The maths simply doesn’t work.
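
A rough, back-of-the-envelope comparison shows why. The GPU wattage and cluster size below are illustrative assumptions rather than figures for any specific model, though the brain estimate of around 20 watts is widely cited.

```python
# Rough energy comparison. The ~20 W brain figure is a widely cited estimate;
# the GPU wattage and cluster size are illustrative assumptions, not
# measurements of any particular model or data centre.
BRAIN_WATTS = 20          # approximate power draw of a human brain
GPU_WATTS = 700           # assumed draw of one modern data-centre GPU
GPU_COUNT = 10_000        # assumed cluster size for a large training run

cluster_watts = GPU_WATTS * GPU_COUNT       # 7,000,000 W, i.e. 7 MW
brains_equivalent = cluster_watts / BRAIN_WATTS

print(f"Cluster draw: {cluster_watts / 1e6:.1f} MW")
print(f"Roughly {brains_equivalent:,.0f} times the power budget of one brain")
```

Even with assumptions that flatter the machines, you’re looking at a power budget hundreds of thousands of times larger than a single brain’s, and that’s before asking whether the result would think at all.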

Even if you ignore the philosophical challenges (like consciousness), the physical constraints alone make superintelligence a distant concept. Not decades away. Lifetimes.

Most fears are based on fictional leaps rather than scientific trajectories.


Free Will and Consciousness Aren’t Programmable

One concept I mentioned in the interview that resonated strongly was the difference between prediction and consciousness.

If I say, “Don’t think of an elephant”, you probably just thought of an elephant. You didn’t choose to do that in advance — your mind responded in a way machines cannot.

AI doesn’t have spontaneous thought. It doesn’t have imagination. It doesn’t have an internal sense of self. It can’t want. It can’t resist.

Machines only operate within constraints defined by humans. Even “emergent behaviour” is still the product of probabilistic patterns, not self-directed intention.

We don’t understand our own consciousness well enough to replicate it, and we certainly can’t engineer something we don’t understand.


What People Actually Fear Is Agency — And AI Doesn’t Have It

When people say they’re scared of AGI, what they really mean is that they’re scared of losing control. They imagine AI making decisions autonomously, disregarding human values, or acting with independent goals.

But current AI:

  • Has no goals
  • Has no preferences
  • Has no internal motivations
  • Can’t initiate action on its own
  • Can’t decide to operate outside its inputs

It’s software. Powerful software, yes, but still software.

The fear is understandable, though misplaced. We should be far more concerned about how humans choose to deploy AI, not whether AI will somehow gain agency.


Where the Real Risks Actually Are

There are real challenges with AI, but they look nothing like science fiction.

The genuine risks include:

  • Human error
  • Overreliance
  • Misuse
  • Hallucinations
  • Poor governance
  • Training models on biased or flawed data
  • Bad decisions passed off as “the AI’s fault”

All of these are human problems with human solutions.

The biggest risk isn’t AI becoming conscious. It’s people using it without understanding how it works.


So Why Does AGI Fear Spread So Easily?

A few reasons:

1. Science fiction is culturally powerful. Most people’s understanding of AI starts with Hollywood, not hardware.

2. News headlines reward drama. “AI will replace us all” spreads faster than “AI is computationally constrained”.

3. People lack visibility of the limitations. Most never see behind the curtain. They don’t know how fragile these systems actually are.

4. Humans project human traits onto non-human systems. If something speaks fluently, we assume it thinks fluently.

None of these are technical issues. They’re psychological and cultural.


Conclusion

AGI fear comes from a place of uncertainty, not evidence. The reality is far more grounded and far less dramatic.

We’re dealing with extraordinary tools that can enhance human capability, not independent minds that threaten it. Generative AI will reshape jobs, challenge norms, and change workflows, though it won’t suddenly develop consciousness or outgrow human oversight.

The real task is learning how to use these systems well. Because the danger isn’t rogue intelligence — it’s poorly used intelligence.

AGI may one day be possible, but it isn’t happening soon, and it certainly isn’t happening by accident.

For now, and for a long time to come, AI remains what it has always been: a tool that becomes powerful only in proportion to the person using it.
