We Tested the Government's AI Skills Hub: Five Critical Flaws You Need to Know

TL;DR

Scott Quilter and Erica Farmer logged into the government’s new AI Skills Hub and put it through real-world user testing. What we found: vendor courses that teach the wrong tools for your workplace, no space to learn by play, beginner courses that assume technical knowledge, and a complete disconnect from how 800 million people are successfully learning AI right now. The Hub trains individuals but ignores organisations, so enthusiastic learners return to workplaces that aren’t ready for them. Most concerning of all: no learning science behind the design. This isn’t about protecting our businesses; we want the Hub to succeed. But success requires honest feedback from practitioners who’ve actually tested it. Here are the five critical flaws that need fixing, and what the government should do instead.


A practitioner’s view from real-world user testing

By Scott Quilter (Techosaurus LTD) and Erica Farmer (Quantum Rise)

Introduction: Beyond the Headlines

When the UK Government announced its expanded AI Skills Hub on 28 January 2026, the headlines were impressive: 10 million workers to be upskilled by 2030, £27 million in funding, partnerships with tech giants like Microsoft, Google, and IBM. It sounded like exactly what Britain needs.

But as practitioners who deliver AI training day in and day out, Erica Farmer and I knew we needed to look beyond the press release. So we did what any responsible educators would do: we logged in, tested the platform, and put it through its paces exactly as real users would.

What we found concerns us deeply.

This isn’t about being defensive or protecting our businesses. This is about ensuring that an initiative with enormous potential doesn’t fail because fundamental design flaws went unchallenged. We both want the Hub to succeed. But success requires honest feedback from people in the trenches.

Here’s what we discovered.

Image: Scott and Erica during their online AI Skills Boost session.

Flaw #1: The Bottom-Up Collision Problem

AI is being adopted in the reverse order of every other technology wave in history, and the Hub completely misses this.

Traditionally, new technology follows a top-down path: military or enterprise adoption first, then it filters down to consumers. Think computers, the internet, GPS, even mobile phones. Big organisations had these technologies for years before they became accessible to everyday users.

AI has flipped this entirely.

Some 800 million people actively use ChatGPT. They’re asking it for recipe ideas, travel planning advice, help with homework, and creative writing support. They’re learning AI through play, in their personal lives, at their own pace. Then they’re bringing that knowledge into their workplaces.

This creates a unique challenge: employees are upskilling themselves, but most organisations aren’t ready for them.

Why the Hub Makes This Worse

The AI Skills Hub trains individuals. It gives them courses, knowledge, and a virtual badge. Then it sends them back to workplaces that:

  • Have no AI strategy
  • Have no clear policies on what tools employees can use
  • Have leaders who don’t understand what their staff just learned
  • Have no support systems for implementation

We heard this firsthand. One of our training associates completed his “personal pathway” on the Hub, found some courses he was interested in, and then watched them disappear, replaced by “random paid stuff” — Python courses, Azure certifications, technical content he had no interest in or need for.

His response? “I’m an everyday person. I want to know how to use AI. I don’t want to be a programmer.”

The Hub trains the individual. But it ignores the organisation. And without organisational readiness, individual training evaporates.

Imagine an employee completes a course, returns to work excited, tells their manager what they’ve learned, and the manager either:

  • Has no idea what they’re talking about
  • Dismisses it as unimportant
  • Says “That’s nice, but we don’t use that here”
  • Creates barriers because of security concerns no one’s addressed

The wind goes out of their sails. The learning becomes dormant. Within weeks, it’s forgotten.

This is the bottom-up collision: self-motivated individuals crashing into unprepared organisations.


Flaw #2: Platform Lock-In Creates Wasted Learning

Here’s a scenario that will play out thousands of times if the Hub continues as designed:

Sarah, a marketing manager, signs up for the Hub. She selects “beginner” and gets directed to Google’s “Introduction to Generative AI” course. She completes it, learns Google’s tools, becomes familiar with Gemini, understands how Google’s AI ecosystem works.

She returns to work on Monday, excited to apply what she’s learned.

Her workplace uses Microsoft Copilot.

Much of what she just learned is now irrelevant.

This isn’t about criticising Google’s course quality. It’s about recognising that when courses are provided by vendors, they naturally teach their own ecosystems. Microsoft courses teach Microsoft tools. Google courses teach Google tools. Salesforce teaches Einstein. IBM teaches Watson.

The Transferability Problem

Effective AI training should teach platform-agnostic principles:

  • How to write effective prompts regardless of which tool you use
  • How to evaluate AI outputs critically
  • How to identify appropriate use cases in your work
  • How to handle AI ethically and responsibly
  • How to integrate AI into existing workflows

Instead, the Hub’s courses teach vendor-specific implementations. This creates:

  • Confusion when workplace tools don’t match training tools
  • Frustration when learned skills don’t transfer
  • Wasted time requiring re-learning for different platforms
  • Lock-in to specific vendor ecosystems

An employee who learns deeply on one platform may resist switching to another, even if that alternative better suits their organisation’s needs.

The Quality Problem

During our testing, we also discovered concerning issues with content quality and consistency. Erica found posts from other practitioners who had identified actual inaccuracies and inconsistencies in course content. When you’re aggregating materials from multiple vendors, each with their own spin and terminology, coherence suffers.

The “pathway” concept implies a structured learning journey. What you actually get is a hodgepodge of disconnected courses from different providers, with no clear progression and conflicting approaches.


Flaw #3: Where’s the Learning by Play?

I use a simple analogy: If I handed you a chainsaw and a manual, would you be a tree surgeon?

Of course not. You’d need to practice, experiment, make mistakes in a safe environment, gradually build confidence and competence. Reading about chainsaws doesn’t make you qualified to operate one.

Yet this is exactly what the Hub does with AI.

How People Actually Learn AI Successfully

The 800 million ChatGPT users didn’t complete formal courses before they started. They:

  • Played with it
  • Asked silly questions
  • Experimented with different prompts
  • Saw what worked and what didn’t
  • Shared discoveries with friends
  • Built confidence gradually
  • Developed intuition through trial and error

This is learning by play, and it’s how humans naturally acquire new skills.

Our training programmes embrace this. We start people playing with AI for personal tasks — planning a trip, getting recipe ideas, writing a letter — before we ever introduce workplace applications. This builds familiarity and confidence in a low-stakes environment.

What the Hub Offers Instead

Formal courses. Structured modules. Technical terminology. “Computational thinking.” “Critical thinking.” Theoretical frameworks.

For a beginner terrified that AI might take their job, landing on a platform that immediately throws “large language models,” “transformers,” and “neural networks” at them is not encouraging. It’s intimidating.

Erica’s experience was telling: she selected the “beginner” pathway, expecting basics like “What is generative AI?” and “How might this help you in everyday life?” Instead, she got computational thinking and technical frameworks that even she, an AI education expert, found inappropriately advanced for true beginners.

Where is the sandbox? Where’s the space to play, experiment, and discover? Where’s the encouragement to just try things and see what happens?

It’s not there. And without it, completion rates will be dismal and genuine skill development rarer still.


Flaw #4: User Experience Reveals Absence of Learning Science

When we talk about “learning science,” we mean understanding how human brains actually acquire, retain, and apply new knowledge. It’s a discipline backed by decades of research into cognitive psychology, educational theory, and neuroscience.

There appears to be no learning science whatsoever behind the Hub’s design.

Real User Testing: What Actually Happens

Our associate — a non-technical professional, exactly the target audience — logged into the platform. Here’s his journey:

  1. Registration: Straightforward enough
  2. Personal Pathway Questions: Five high-level questions that didn’t really capture his needs or preferences
  3. Course Selection: Presented with courses that seemed promising
  4. The Disappearing Act: Once he completed his pathway, the free courses he’d seen disappeared
  5. The Substitute: Replaced with paid courses for Python programming, Azure certifications, and technical content he had no interest in
  6. The Exit: He left frustrated and confused

His feedback: “The website feels messy. I’m an everyday person. I don’t want to be a programmer. I just want to understand how to use AI.”

Multiple Barriers to Entry

Erica identified what she calls “subliminal hurdles” — barriers that users don’t even realise exist:

Assumption of E-Learning Preference: The platform assumes everyone learns best through self-paced online courses. Research consistently shows that completion rates for online courses are 5-15%, compared to 70-90% for instructor-led training. Yet the Hub offers only the least effective delivery method.

Poor Pathway Logic: The “personal pathway” concept suggests tailored recommendations. In practice, it’s a few generic questions that don’t meaningfully personalise the experience.

Content Mismatch: “Beginner” content includes advanced concepts. “Free” content isn’t consistently free. “AI Foundations” courses dive into technical architecture rather than practical application.

No Community: Learning happens in community. People need peer support, shared discovery, and social accountability. The Hub provides none of this.

No Ongoing Support: Courses are one-off experiences. No follow-up, no coaching, no troubleshooting when users try to apply learning.

What Learning Science Would Look Like

If educational designers had been involved, we’d see:

  • Diagnostic assessment that actually identifies knowledge level and learning style
  • Scaffolded progression from concrete examples to abstract concepts
  • Immediate application opportunities within the training
  • Spaced repetition and reinforcement of key concepts
  • Social learning elements and community building
  • Formative assessment to check understanding before progression
  • Multiple modalities (visual, auditory, kinaesthetic) to accommodate different learners

Instead, we have a content aggregator with a thin veneer of personalisation.


Flaw #5: Missing How People Actually Use AI

Here’s a statistic that should fundamentally reshape the Hub’s approach:

The number one use case for Microsoft Copilot is mental health and wellbeing queries.

Not productivity. Not business efficiency. Not automation. Mental health.

According to BBC reporting, there’s been a 75% increase in “trauma dumping” to AI tools. People are using AI to manage anxiety, process emotions, and seek support for mental wellbeing.

Why? Because AI tools provide:

  • Non-judgmental listening
  • 24/7 availability
  • Privacy and confidentiality
  • Natural language interaction
  • Immediate responses

Where Is This in the Training?

Nowhere.

The Hub’s courses focus on:

  • Business transformation
  • Productivity gains
  • Automation opportunities
  • Technical capabilities
  • Organisational readiness

They completely ignore how people are actually using AI in their daily lives.

This reveals a fundamental disconnect between the programme designers and the lived reality of AI users. The courses teach what vendors want to sell and what policymakers want to hear about (economic growth, productivity gains). They don’t teach what people actually need.

The Real Use Cases Being Ignored

Beyond mental health support, people are using AI for:

  • Personal development: Language learning, skill building, creative exploration
  • Everyday problem-solving: Recipe planning, travel itineraries, gift ideas
  • Emotional support: Processing difficult conversations, preparing for challenges
  • Creative expression: Writing, art generation, music creation
  • Learning support: Understanding complex topics, homework help, research assistance

These are the use cases driving the 800 million ChatGPT users, and this is how people build familiarity, comfort, and competence with AI.

Yet the Hub starts with business use cases and technical frameworks.

It’s backwards. If you want people to successfully integrate AI into their professional lives, you first need them comfortable using it in their personal lives. That’s where the confidence gets built. That’s where the intuition develops.


What Actually Works: The Alternative Model

We’re not just critics. We have skin in the game. Between us, we’ve trained thousands of people in AI adoption, and we’ve learned what actually works.

The Associates Model

One of our key innovations at Techosaurus is what we call the “Associates Model.” Here’s how it works:

We train non-technical professionals to become AI trainers in their own domains. An HR professional learns to use AI in HR contexts, then teaches other HR professionals. A sales manager learns AI for sales, then coaches their sales team.

Why this matters: Domain expertise is non-negotiable. A software engineer from Microsoft cannot effectively teach a geography teacher how to use AI for lesson planning. They don’t understand the pedagogy, the curriculum constraints, the classroom realities.

The Hub has zero domain experts. Only tech companies.

Learning by Play in Practice

Our bootcamps start with personal use cases:

  • Plan your next holiday using AI
  • Get recipe ideas for what’s in your fridge
  • Write a thoughtful message to a friend
  • Learn about a hobby you’re interested in

Only after people are comfortable and confident with these low-stakes applications do we introduce workplace use cases. By then, they’ve built intuition. They understand what AI can and can’t do. They’re ready to experiment.

The Hub throws people into technical courses and business transformation frameworks with no warm-up, no play, no confidence building.

Organisational Readiness Alongside Individual Learning

Our programmes engage leadership teams. We help organisations develop:

  • Clear AI usage policies
  • Permission structures for experimentation
  • Guidelines for appropriate use
  • Data privacy and security protocols
  • Support systems for ongoing learning

We train individuals and prepare organisations. The Hub tackles only half the equation, then wonders why adoption doesn’t follow.

Ongoing Support and Community

Our learners join communities of practice. They have access to troubleshooting support. They attend follow-up sessions. They share discoveries and challenges with peers.

Learning isn’t an event; it’s a process. The Hub treats it as an event.


The Fundamental Category Error

The AI Skills Hub has made a category error: it has confused information provision with capability building.

Giving people access to courses is information provision. That’s easy. You aggregate content, build a website, and launch.

Capability building is something else entirely. It requires:

  • Understanding how people learn
  • Removing barriers to application
  • Providing ongoing support
  • Building community
  • Addressing organisational context
  • Measuring behaviour change, not course completion

The Hub has optimised for information provision. It will generate impressive statistics: enrolments, course completions, badges issued.

It will not generate genuine business transformation.

And that’s what Britain actually needs.


A Call for Honest Iteration

We’re not calling for the Hub to be scrapped. We’re calling for honesty about its current limitations and willingness to iterate based on real user feedback.

What the Government Should Do

1. Conduct Proper User Testing
Not with policymakers or tech company representatives. With actual target users: SME employees, non-technical workers, people anxious about AI. Watch them use the platform. See where they get stuck. Hear their frustrations.

2. Engage Learning Designers
Bring in educational psychologists, instructional designers, adult learning specialists. Apply actual learning science to the platform design.

3. Include Domain Experts
Partner with sector-specific professional bodies and training providers. Get HR experts designing HR AI training. Get educators designing education AI training.

4. Test Alternative Delivery Models
Fund pilot programmes for face-to-face bootcamps, blended learning, cohort-based training. Measure outcomes. See what actually works.

5. Address Organisational Adoption
Create resources for employers, not just employees. Help organisations become ready for AI-enabled workers.

6. Measure What Matters
Track workplace implementation rates, productivity improvements, sustained behaviour change six months post-training. Not just course completions.

What Businesses Should Do

If you’re considering directing your staff to the AI Skills Hub:

Do use it as a supplement to a broader AI adoption strategy, not as the strategy itself.

Don’t assume that course completion equals workplace capability.

Do invest in organisational readiness alongside individual learning.

Don’t skip the crucial work of developing clear policies and permission structures.

Do seek out providers who offer hands-on, domain-specific, face-to-face training.

Don’t rely solely on vendor courses that may not align with your chosen tools.


Conclusion: Time for Honest Conversation

We’re putting this critique out there knowing some will accuse us of being defensive or competitive. That’s fine. We’d rather be accused of that than stay silent whilst an initiative with genuine potential fails due to avoidable design flaws.

The government’s ambition is admirable. The goal of upskilling 10 million workers is achievable and necessary. But ambition without effective execution is just expensive theatre.

We’ve tested the platform. We’ve seen the problems. We’re raising our hands and saying: this needs significant improvement before it will achieve its stated goals.

We want the Hub to succeed. That’s why we’re being honest about what’s not working.

The question is: will policymakers listen to practitioners, or will they stick with the original design until poor outcomes inevitably force a rethink?

We hope it’s the former. Britain’s AI competitiveness depends on it.


Video

This article is based on a video conversation recorded on 29 January 2026.

About the Authors

Scott Quilter is founder of Techosaurus LTD, an award-winning EdTech company named EdTech Provider of the Year (Southwest) and Business of the Year for Best Use of Technology (Somerset) in 2025. Techosaurus has trained over 150 businesses through government-funded Generative AI Skills Bootcamps and Automation Skills Bootcamps across the Southwest UK.

Erica Farmer is a learning and development expert, keynote speaker, and consultant specialising in AI, future skills, and digital learning. She works with organisations across the UK to develop human-centred AI adoption strategies.


Want to discuss your organisation’s AI adoption strategy? Connect with Scott at Techosaurus or Erica at Quantum Rise and EricaFarmer.AI
