Unleashing AI in Your Business: Everything You Missed (or Want to Relive!) from my Tech Update at Digital Hub Yeovil, May 2025
Saturday, May 24, 2025
TL;DR
Microsoft’s latest Work Trend Index paints a tough picture: 80% of us say we don’t have the time or energy to do our jobs well, we’re hit by 275 interruptions a day, yet 53% of leaders still expect higher productivity. Google responds with a bold claim that 98% of companies have embraced AI—but also admits 70% are tangled in messy data. Meanwhile, ChatGPT’s weekly active users have soared past 800 million, GPT-4o is redefining multimodal rules, and tools like Copilot, Gemini, Claude, and Grok keep leapfrogging each other almost weekly. Perplexity is steadily chipping away at Google’s search dominance, which recently dipped below 90%. AI agents are no longer just assistants—they’re writing code, generating websites and applications, and managing projects. And Google I/O 2025 unleashed a firehose of updates: AI Mode Search, Veo 3 video, and Project Astra’s real-time vision. The takeaway? Digital debt is ballooning, AI capabilities are soaring, and the only smart move is rapid, deliberate upskilling—ideally before the next model drop shakes things up again.
Let’s get stuck in
If you missed my session at Digital Hub Yeovil on Thursday night (22nd May 2025), or if you were there and want to relive the full experience (because, honestly, we packed in quite a bit), this post is for you.
I’m Scott Quilter, Co-Founder and CAIO of Techosaurus, and I had an absolute blast diving headfirst into the wild world of AI and tech with everyone there. Techosaurus—my wife Ellie and I run it together—is all about ed-tech and consultancy. We help people unlock the power of technology in their businesses and everyday lives, whether that’s through automation, AI, or anything in between.
While I kicked things off and nudged Ellie to give a wave from the back of the room, I had a video rolling in the background from our recent trip to San Francisco, featuring their driverless cars. I had an hour’s worth of content squeezed into 45 minutes, so yeah—I talked fast and with plenty of passion. I told the crowd to wave if they wanted me to slow down (spoiler: they mostly didn’t).
All the slides from the session are linked here, so don’t worry about frantically scribbling notes as you read.
The Capacity Gap: Why We’re All Overwhelmed
I kicked things off by sharing some eye-opening stats. Richard from Digital Hub mentioned a few too, which was a nice touch. These insights come from all over, but one of the key sources we always watch is Microsoft’s Work Trend Index Annual Report 2025. If you’ve been on our training before, you might remember last year’s numbers—well, this year’s are fresh out of the oven, based on surveys from 31,000 people across 31 countries. Microsoft owns LinkedIn and 365, so their data stack is pretty solid.
Interruptions & Meetings Overload
Here’s the headline that hit hard: 80% of employees say they don’t have enough time or energy to do their jobs properly. That’s massive. With AI kicking off everywhere, I expected this number to start dropping. Nope. It’s actually up 12% from last year. Ouch. So what’s going on?

Well, workers are facing a staggering 275 interruptions a day—that’s one every two minutes. Meetings, emails, unexpected requests, all that chaotic noise. Digital debt is a pain in the ass, and it’s only getting worse.
And here’s a wild one: PowerPoint edits spike by 122% in the 10 minutes before a meeting. We’ve all been there—someone calls, “Hey, sorry it’s last minute, but can you totally redo the slides right now?” And sure, you drop everything to make it happen, because what choice do you have?
On top of that, 60% of meetings are ad hoc. That bugs me a lot. You pick up the phone, and bam—six people are already on the call, and you’re trapped in a meeting you never planned for. Should that even be allowed?
Then there’s the dreaded after-hours barrage: 58 messages are sent outside of working hours—every single day. Not 58 chats, but 58 individual messages. It’s relentless.
Leadership Expectations vs Reality
But here’s the kicker: 53% of leaders still expect productivity to rise, even though they know—and admit—that people don’t have the time. So that huge “capacity gap” we’ve talked about before? Microsoft’s shining a spotlight on it, and it’s getting worse.
To recap:
- "80% of employees and leaders say they lack the time or energy to do their jobs" — up 12% since 2024
- "Staff face 275 interruptions a day" — one every two minutes
- "PowerPoint edits jump 122% in the last 10 minutes before meetings"
- "60% of meetings are ad hoc, and 58 messages get sent outside 9-to-5 daily"
- "53% of leaders insist productivity must rise, despite the capacity gap"
Digital debt is biting hard—and it’s clear we’re all feeling the squeeze.
AI Is Everyone’s Business — Not Just the Young
Now, here’s where AI really comes into play. According to that same Microsoft report, 78% of leaders are actively thinking about creating new AI-focused roles. That’s exactly what Kate was saying earlier—this isn’t a gentle suggestion anymore. We need to change, pivot, and most importantly, train our people. That number is only going up.
But here’s the worrying bit: a lot of C-level and execs still assume you can just hire young folks to handle all the AI stuff for you. That’s flat-out wrong. The leadership team needs to learn AI themselves—you can’t just pass the buck to the “kids” and expect it to magically work. You need to upskill internally, because if you don’t, businesses that do will steam ahead, leaving you in the dust. I’m not here to sugarcoat it.
Microsoft’s stats back this up: 24% of organisations say they’ve applied AI company-wide, while only 12% haven’t rolled out AI yet or are still just piloting it. That leaves a big chunk—about 64%—somewhere in the middle. Add that 64% to the 24% and you get 88% using AI in some form, which Microsoft calls the “majority phase.” They reckon most companies are now embracing AI in some way or another.
But here’s the reality check. What does “applied AI company-wide” actually mean? Have they really? Or have they just handed everyone a Copilot licence and ticked the box? If no one’s been trained, if no one knows how to use AI effectively day to day, or if there’s no plan to keep up with emerging features, then sorry—that’s not adoption. That’s just a purchase order.
I say this with passion because I’ve seen too many businesses fall into that trap.
Oh, and let’s not forget, this is Microsoft talking—they want to sell you Copilot. As someone in the audience rightly pointed out, the survey’s 31,000 respondents might skew towards tech companies, so the stats could be a bit distorted. Stats can paint whatever picture you want.
To sum up:
- "24% of organisations have AI deployed company-wide"
- "Only 12% have yet to roll out AI, or are just piloting it"
- "The early-majority phase has begun—those who don’t follow risk falling behind"
The message? AI isn’t a side project or something for interns to handle. It’s a strategic shift, and everyone—especially leadership—needs to be involved.
Frontier Firms: Early AI Adopters Leading the Pack
Now, here’s a fresh term we’re using: “frontier firms”—the new, shinier label for what used to be called early adopters. And the numbers they’re sharing are pretty telling. 71% of people at these frontier firms say their company is thriving, while 55% say they can take on more work. Compare that to the global average—just 37% and 20% respectively.
What this really means is the early adopters got out of the gate fast, snagged a huge return on investment with AI, and capitalised on an empty landscape. They jumped in before the field got crowded, so they had first dibs on all the low-hanging fruit. Now, the space is packed, competition is fierce, and the pie is smaller for everyone else trying to catch up.
So, when people say, “Don’t miss the AI bus,” well, that bus already left the station. You still need to jump on, for sure, but the biggest slices of opportunity have been taken. The early birds have already gotten the worm—and the lion’s share.
In short:
- "In frontier firms, 71% of staff say their company is thriving"
- "55% can take on more work"
- "Versus 37% and 20% globally—the early adopters are already reaping the gains"
Google’s AI Reality Check: Data Quality Matters
I like to compare stats, so here’s Google’s take. Microsoft’s pushing Copilot hard, right? Well, Google dropped their own report recently—“The State of AI”—surveying 500 businesses across eight countries. Smaller sample, but still worth a look.
They say 98% of organisations have already embraced generative AI. Really? That’s ten points higher than Microsoft’s figure, which feels huge. But what does “embraced” actually mean here? Can they generate a picture of a cat and call that adoption? What’s the real story? So, take that stat with a grain of salt—Google wants you to buy Gemini, after all.

What feels more solid is that 79% of organisations rate generative AI as critical. This one I buy—tech leaders are talking AI at board meetings, and it’s definitely top of mind.
Now, here’s the big kicker: 70% of organisations struggle with data quality. This really echoes what Kate said earlier, and I love confirming that. AI can only do what your data allows it to. Garbage in, garbage out—that’s the unbreakable rule. If your files are messy, unlabeled, or disorganized, AI won’t work magic for you. It can’t fill in the gaps or make leaps of faith. This data bottleneck is the real hurdle—not the cost.
Then there’s security: 62% of organisations cite security and privacy concerns. With GDPR and EU laws, this isn’t surprising. Keeping data safe is the top challenge in AI adoption. We’re seeing companies pushing back, worried about employees using consumer AI tools on sensitive data. The solution? You need a clear strategy—decide what AI tools are allowed, and enforce that like a dress code. “Yes, you can use AI, but only this one, exactly as agreed.”
Finally, a huge stat: 98% of organisations want support or managed AI services. That’s massive. Lots of businesses want AI but don’t want to handle it themselves. Here’s the gap: Managed Service Providers (MSPs) can sell tools like Copilot, but few teach clients how to actually use AI effectively. Many software houses can write code but won’t train your team on AI. This leaves a big hole for consultancies and trainers to fill—and the training gap is growing fast.
To sum it up:
- "98% of organisations have already embraced generative AI"
- "79% rate generative AI as critical"
- "70% struggle with data quality"
- "62% cite security and privacy concerns"
- "98% want support or managed AI services"
The takeaway? AI adoption isn’t just about buying tools. It’s about clean data, smart governance, and serious training.
What’s New in the World of Generative AI
So, those are the big-picture stats. Now, let me take you through what’s actually happening in the AI world right now.
ChatGPT: Naming Chaos and New Features
Let’s start with ChatGPT. Things are moving fast. Back in February, they celebrated hitting 100 million new users since December. Then, from February to April, 400 million new active users joined—that’s more than the entire population of the US. Huge. And it’s only going to keep growing—now that the cat’s out of the bag, everyone wants in. This stat is from DemandSage, a fantastic site for spot-on data.

What’s new with ChatGPT? Well, the model names are confusing as hell. Forget “GPT-4,” because GPT-4o is now the default. There are newer models like GPT-4.5 and GPT-4.1 too. Then there’s o3, now out of beta as a reasoning model, along with o4-mini, o4-mini-high, GPT-4.1 mini and more. Honestly, who can keep track? It’s just a jumble of numbers and an “o” in different places.
Here’s the key: models with an “o” at the start are reasoning models—meaning they pause and think a bit more before answering, asking themselves common-sense questions. Models with an “o” at the end are omnimodal—they can do more than just text, they can see, hear, talk, and even generate images. GPT-4o is one of those cool multitaskers.
Sam Altman, OpenAI’s CEO, admits it’s confusing and promises that with GPT-5, all this messy naming will go away. Also, if you’re on the free version, expect it to be less capable than the paid tiers—right up to the £200/month plan. That’s just how it is.
Other AI platforms do this naming chaos too—Claude and Gemini, for example, have models with both names and weird numbers.
One really cool thing: scheduled tasks in ChatGPT. You can set it to do stuff like, “Every morning, find the latest news and email it to me.” This used to live in a dedicated model called ChatGPT-4-with-tasks, but that’s gone. Don’t worry though: if you use the o4-mini model, you can ask it to schedule tasks—the feature has simply moved there without any warning.
On the image side, ChatGPT’s new image generation tool has seriously stepped up. Previously called DALL·E, it used to be pretty bland, but now it’s on par or even better than big players like Midjourney. How? They quietly trained it with real artists, asking them to create art based on weird prompts people had submitted. This gave the AI a much more creative “brain” rather than just stitching existing images together.

This means you can do things like consistent characters—upload an image of someone and then ask for new scenes with that same character. No more random variations. You can even upload a photo and ask for it to be “put in a blister pack,” like an action figure toy. Wild.
It also nails accurate text rendering—so if you want a poster with readable text, it delivers without the usual AI hieroglyphics. It takes a little longer to generate, but it’s like watching a Polaroid develop—and that’s kind of charming.
You no longer need seed codes or complex tricks. Just upload your image and ask ChatGPT to place it in a new scene. Simple.
Memory-wise, ChatGPT has made some big updates. You can still manually inject memory, but now the context window is so large that everything you’ve ever chatted about (while the chat exists) is fed into the model every time you prompt it. That means your requests are influenced by your entire chat history, your memories, and your custom instructions. That’s why the results I get are different from yours—because the AI is considering everything it knows about me as it generates the answer in a language and pattern that I want.
This makes interacting feel much more personal, like talking to a real person who remembers you. It picks up your style, your voice, and can chat with you on your terms.
If you use projects and folders, don’t worry—ChatGPT still keeps things locked down. One project can’t see another, so there’s no accidental cross-talk between different clients or topics. That’s a smart privacy feature.
They’ve also added source integrations that are a game changer for deep research. If you want a thorough, hours-long report, ChatGPT can now pull info directly from places like GitHub, SharePoint, Dropbox, and Google Drive. SharePoint was a surprise, since it’s Microsoft’s own tool and tightly linked to Copilot. What’s next? Who knows, but it’s turning into one true research tool.
(Turns out that night, after the presentation, ChatGPT announced integration with “Box” as a new partner—keeping up with this is a full-time job!)
Then there’s the “Deep Research” feature. The button looks like a telescope icon—just click it and let it do the heavy lifting. It’ll go away in its own time to search and find what you need, then provide a full article. This could take 3 minutes, or it could take 30. It’ll let you know when it’s done and include full citations.
Gemini, Claude & Grok: The Leapfrog Race
Like I said, these models keep leapfrogging each other. Every time a new feature drops somewhere, everyone else scrambles to catch up and match it.
So, Gemini. I’ll dive deeper into Gemini in a minute, because every time I finish a presentation, they announce something new! The latest is the Gemini 2.5 model, which includes a thinking model—that’s reasoning—and multimodal capabilities, aka omnimodal. One Gemini can do lots of different things now. You don’t have to load up a separate model for images—you just use the same Gemini and ask for what you want.
Then there’s Claude. They’ve added integrations, meaning it can link to tons of different places. It already hooks into Google Workspace, reading your emails, calendars, and files, giving you a full Copilot-like experience but from a third-party tool inside Google’s ecosystem. They’ve also rolled out a deep research tool—basically the same concept as ChatGPT’s deep research.
(As with all live events, Claude announced their 4.0 model later that evening, so we’ll save that for another day.)
And finally, Grok—the AI from Twitter, now X. It’s got vision so it can see images, a feature ChatGPT had first. It also supports memory, and a canvas feature letting you work side-by-side with it like a notepad. (Claude calls theirs Artifacts.) Plus, Grok offers multilingual audio. You see the pattern, right? It’s all just a constant leapfrog game.

Microsoft Copilot: Vision, Prompts & Academy
Microsoft — let’s talk Copilot. They’ve rolled out loads of new features recently. One big one is visual intelligence—basically, vision. It can see and read pictures, and it works really well. Say you’re in a meeting (like we were here), and someone takes a picture of what’s on the whiteboard. You can drop that image into Copilot and say, “Turn this into a Word document for me.” Boom—a fully formatted document with headers and everything. Pretty neat. You can even upload a full transcription and ask it to turn that into a Word doc or PowerPoint presentation.
They’ve also upped the upload limit (50 MB, up from 5 MB), so you can really crunch data properly. But my favourite new feature? Saved prompts. If you write a prompt that nails it, just right-click and bookmark it. When you start a new chat, a button brings up all your saved prompts, plus a library of community-voted prompts from other users. Microsoft’s the first to do this, and honestly, it’s brilliant—everyone else should copy it.

Then there’s the Copilot Academy. Alongside the slides we’ll share, there’s a wealth of free online learning resources that guide you through what Copilot can do. It’s really user-friendly.
Enter the Copilot Academy here
All the features I mentioned so far are free with Microsoft Copilot—you don’t need the paid tier for those. The paid version kicks in when Copilot shows up inside Word, PowerPoint, and Excel with deeper integrations.
The paid 365 Copilot now adds audio overviews and podcasts of your files. That’s pretty cool—they’re clearly taking a leaf out of NotebookLM’s book (which we covered before). But NotebookLM still has the edge—you can upload 50 files and websites and get a podcast covering everything. Microsoft’s version is a bit lonelier—you can only right-click one file at a time and ask for a podcast. So, it’s neat but limited unless you’ve got all your content bundled in one place.
More on 365 Copilot later.
NotebookLM: Research & Podcasts Made Easy
So that brings me to NotebookLM—a tool from Google we’ve demoed before. I love it. I use it all the time for client research. Stu, you use it quite a bit too, right? (I asked Stu, “Can you share an example?”) He kindly said, “I’ve used it with a couple of clients. I do sales coaching and business development. I looked at their website and got NotebookLM to generate a podcast so they could hear their business from a fresh perspective—how someone on the outside might see it.” (I thanked him for that, “Sorry to put you on the spot!”)
Seriously though, we’ve been showing learners this exact use case this week. You load in your website, LinkedIn, everything, and ask NotebookLM to make a podcast about your brand. When you listen back, that’s the outside world’s take on your digital presence. If you think, “That’s not what I meant,” you know exactly what to fix to make your messaging clearer.
Imagine going into a client meeting and saying, “Everyone, listen to this podcast.” Sometimes it’s a wake-up call: “Hey, that’s not how we want to come across.” Then your job is to help them sound the way they want. It’s a seriously powerful tool.
NotebookLM also has a new feature called Mind Map, which lets you browse your data visually. When you load your websites and content, it creates a neat mind map so you can explore easily. (I showed this live, saying, “So, in true Blue Peter style, here’s one I made earlier.” My NotebookLM has sources like my personal website, Techosaurus site, info on our AI Skills Bootcamp, and a big text file scraping everything I could find about me and Techosaurus from LinkedIn. Then I hit the Mind Map button, and it popped up a visual map.)
Clicking into Techosaurus shows what we do—our services, talks, training, and more. It breaks down everything without me having to sift through multiple sites. It’s become my go-to research tool.

I even played the AI-generated podcast from my NotebookLM: “Welcome to the Deep Dive. Today, we’re cutting through the AI hype and focusing on something practical—the Generative AI Skills Bootcamp, designed and run by Scott and Ellie Quilter of Techosaurus, in partnership with Yeovil College…” (I stopped it there and said, “You get the idea. It sounds pretty real.”)
The best bit? You can use this on the mobile app now. I listen in the car, and with interactive mode, I can interrupt and ask questions—becoming the third host of the podcast. Absolutely killer.
You can also create podcasts in other languages. The voices are still mostly American English—no British option quite yet. We had a Polish course participant who’s been making podcasts and sharing them with friends, who can’t tell they’re AI-generated. Pretty impressive.
The mobile app launched just a few days ago, which is fantastic. Like Stu, I use NotebookLM to research clients. I gather info from Perplexity, dump it all into NotebookLM, and generate a podcast so I can learn efficiently. The app means I don’t have to download and shuffle files around—I can just play the podcast straight from my phone. Makes life way easier.
Perplexity: The Search Disruptor
Next up is my favourite tool: Perplexity. Who here hasn’t heard of it? (I quickly scanned the room.) Perplexity is being called the future of the internet. Forget Google—you just ask Perplexity anything, and it does the Googling for you. It finds all the sources, compiles everything into one place, and you can have a conversation with your results. Best of all—it’s free and doesn’t even need you to log in. It saves so much time and is a brilliant research tool. And yes, they’re rolling out tons of new stuff right now.
Here’s a fun example of how easy it is to access info nowadays. I showed an image of a woodlouse on the screen and asked the audience to scan a QR code and quietly write down what they call this creature. (Someone grinned, “That’s an out-of-towner!”) It’s a woodlouse—also known as a Charlie pig, Gramphy Jig, or Billy Baker depending on where you’re from. (Someone shouted, “Cheese log!”)

We had a real mix. Some said Billy Baker, some Charlie pig—it’s a fascinating example of regional dialects. I plugged “What is a Billy Baker?” into Perplexity and got this: “Local Somerset dialect for a woodlouse, particularly around Yeovil. The small, grey, segmented bug.” Perfect! When I first moved to Yeovil, someone called it that and I thought they were joking. The room buzzed—people didn’t know it had so many names, and those who knew it as Billy Baker had no idea the name was unique to Yeovil!
Then we dug deeper—“Why is it called Billy Baker?” The answer? It’s a local linguistic mystery. Apparently, it’s unique to Yeovil. (A few locals were surprised.) The tool showed sources and explored these local names.
We even found out that children worldwide gave woodlice many nicknames because they’re fun to play with—they roll into little balls and don’t bite. So you get names like “Chicky Pig” (Midlands), “Cheesy Bob,” “Monkey Pea,” and many more. (One audience member from the Midlands joked about “Cheese Log.”)

The real magic here is that Perplexity can sift through all this quickly and conversationally. Try doing that on Google—it’d take ages wading through pages and ads.
This year, Google’s search market share dropped below 90% for the first time in over a decade. And that yellow box you see? That’s Perplexity making a splash. It’s disrupting the market and forcing Google to up its game.

Perplexity has partnered with Shopify and PayPal. Soon, in the UK, you’ll be able to search products directly through Perplexity and buy them—no need to visit multiple e-commerce sites. This links to the earlier question Kate raised about whether people will still visit your website if AI is doing it for them. If AI visits your site, how do you make it appeal to AI? We talked about that last time with LEO—now renamed AEO—so it’s still fresh.
They also offer free deep research—one a day with a free account. Upload images for vision searches, use the mobile app with a voice assistant that beats Siri on iPhones, and control your phone with commands like “send an email” or “what’s my calendar?” It’s still early days for phone calls but they’re improving fast.
There’s talk of Perplexity creating their own Android flavour with an always-on AI assistant. In the US, there’s chatter about a 50-50 joint venture with TikTok’s American arm to keep TikTok alive stateside. Apple is also in talks to add Perplexity as a search option in Safari—right alongside Google. That’s how far it’s come.
Here’s a fun trick: add this number, +1 (833) 436-3285, as a contact named “Perplexity” on your phone and message it on WhatsApp. No login, no app needed. You just ask questions, and it replies. They’re even planning to let it join group chats—so if Dave’s talking nonsense, you can invite Perplexity to fact-check him. Dave might not stick around!

ChatGPT has a similar phone setup +1 (800) 242-8478. I put its number on my grandad’s phone—he’s nearly 90 and uses it to complain about my gran. It’s his little chat buddy. He can’t use ChatGPT online, but he knows WhatsApp, so it’s a really low barrier to entry.
The Agentic World: Machines Managing People (and Writing Code Better Than Us!)
What Are AI Agents?
Let’s keep rolling with the idea of agents and pick up where Kate left off earlier. Right now, with ChatGPT, Gemini, Grok, and all the other AIs out there, we have to talk to them to get stuff done. We ask, “Can you make a picture for me? Can you create a video? Can you write some code?” Then they check back in, “Have I done it right?” and we have a conversation.
That’s generative AI.
But agents take it further. You tell an agent, “Can you do this for me?” and it just gets on with it until it thinks the job’s done. It only checks in if it really needs to. It’s a task completer that works without needing constant instructions or approval steps. It does its best, independently.
There’s a lot of buzz about this—what people call the agentic world. Microsoft talks about it all the time. It’s a real game-changer, and it puts us at a genuine inflection point right now.
Microsoft’s Agent Initiatives
Let’s kick off with Copilot. If you’ve got a paid 365 Copilot license, you get access to SharePoint agents. That means you can go to a SharePoint site, hit a chat button, and ask it questions like, “What’s our maternity policy?” It won’t just find the document; it’ll pull out the key parts you need.
Microsoft is also rolling out a project manager agent that works with MS Planner to automate status reporting and keep everyone on task—basically machines managing people. There’s also a research and analyst agent that handles deep research and data analysis exhaustively until it’s done, then delivers full outputs.
Then there’s Copilot Studio. If you’re familiar with Power Automate, this will feel similar. Beyond simple automation like “when I put a file in this folder, send an email to Bob,” Copilot Studio lets you say, “When I put a file here, apply my theme, fact-check it, and send me any improvement suggestions.” AI starts doing those higher-level tasks for us. That’s Microsoft’s vision.
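Copilot Studio itself is low-code, but the underlying pattern it builds on is a plain trigger-then-action loop. Here’s a minimal Python sketch of that pattern, purely for illustration—the folder-polling functions and the stand-in “actions” are my own invention, not Copilot Studio’s API:

```python
import os

def detect_new_files(folder, seen):
    """One polling step of a trigger->action automation: return the files
    that have appeared in `folder` since the last snapshot, plus the new snapshot."""
    current = set(os.listdir(folder))
    return sorted(current - seen), current

def run_actions(path, actions):
    """Fire each 'action' step for a file that triggered the automation."""
    return [action(path) for action in actions]

# Stand-ins for automation steps like "email Bob" or "send to an AI step":
email_bob = lambda p: f"email Bob about {p}"
fact_check = lambda p: f"ask the AI to fact-check {p}"
```

A real automation would run `detect_new_files` on a timer or react to events; Copilot Studio then layers the AI steps—apply your theme, fact-check, suggest improvements—on top of exactly this kind of trigger.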

AI’s Impact on Programming & Coders
On the coding front, some big names are weighing in. Jensen Huang, NVIDIA’s CEO, has said AI will soon handle most software development tasks. Dario Amodei of Anthropic (makers of Claude) predicted AI would write 90% of all code within three to six months—maybe optimistic, but certainly forward-thinking. Satya Nadella recently shared that 20–30% of the code in Microsoft’s repos was written by AI just last month. AI is already writing code for the heavy hitters.

It’s reminiscent of the ’90s, when chess champion Garry Kasparov lost to Deep Blue and humanity had to accept that computers were better at chess. Chess is still played everywhere, even on TV, but the truth is clear. Last month, OpenAI’s o4-mini smashed human records for writing code—code is either right or wrong, and AI now understands it well.
Agents can write code, manage repositories, push updates live, and pull changes back. This isn’t future talk—it’s happening now. Google now offers a direct link from their tools into your GitHub repository. You can generate, modify, debug, and analyse entire codebases, not just snippets.

For those unfamiliar, GitHub is like OneDrive or Dropbox for code—it stores all the raw files that make apps or websites work. Developers collaborate there, manage versions, and share whole repositories: the behind-the-scenes code you never see, which gets compiled into the user-friendly interface you do. There are coders and IT pros here who know this well.
ChatGPT recently (on 16th May, less than a week before this presentation) launched Codex, connecting directly to your GitHub repos. I tried it myself—pointed it at our website and asked it to optimize our code. It scanned everything, returned recommendations, asked if I wanted it to apply them, and then pushed the changes live. I approved merges and deployment—all done automatically.

Codex can automate coding tasks, bug fixes, text generation, and execute parallel tasks. It fully integrates with GitHub and provides Q&A on your entire codebase. It can even write documentation—something coders notoriously skip—making it easier for others to understand and maintain code.
Smart debugging, transparent actions, custom workflows, and committable results—this stuff is here today. In under 30 minutes, Codex can take a sandbox branch through review and deploy it live.
Meet Credzilla: AI in Action
Now I want to show you something real-world. What does this mean if you’re not a techie? So, Manus—I introduced you to Manus a couple of months ago. I mentioned they were handing out beta trial invites, and people were selling them online for thousands of pounds. I thought, “Well, if I get an invite, I don’t really know what I’ll do with it…” Well, I got an invite, and I didn’t sell it—I used it. And honestly, I hated it at first! I couldn’t get it to do what I wanted. But I stuck with it, and now it’s made something for me that’s actually live in the real world.
Here was my prompt: “I want you to write a PHP page that pulls details from a JSON file to present user credentials someone has earned. I want you to replicate the Credly website.” (For those who don’t know, Credly is where you get digital badges for certificates and awards.) I spoke to Credly about using it for our Skills Bootcamp. They quoted me nearly £2K for the first year for 100 badges. I thought, “I could write my own—it might take a few days.” Then I wondered, “Could Manus do it?” So I asked, and it did—in an hour! The JSON would hold the person’s name and their credential ID, and I asked for PHP and JSON files.

(I then showed the process.) I entered the prompt—politely, with a “Good morning,” because I want AI to remember that I was polite and courteous when it takes over! Manus went to work. It created a to-do file like a real person would, ticking off tasks as it executed commands. It spun up a cloud server, launched a browser window, tested the website to see if it worked, and then said, “I’m done—here are your files.” I reviewed the files because I know code, put them on my website, and tested—“Yep, that works!” Change the ID from 1 to 2, and it updates instantly. Proof of concept: nailed.
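To make the non-techie description concrete: strip away the agent theatre and the page Manus built boils down to a small lookup-and-render job. Here's a minimal Python sketch of the same idea (my own illustration, not Manus's actual PHP output; the field names and sample data are invented for the example):

```python
import json

# The kind of JSON structure described above: a list of earned
# credentials, each keyed by a numeric ID.
SAMPLE = """
[
  {"id": 1, "name": "Alice Example", "course": "AI Skills Bootcamp", "issued": "2025-04"},
  {"id": 2, "name": "Bob Example", "course": "AI Skills Bootcamp", "issued": "2025-05"}
]
"""

def render_credential(data: str, cred_id: int) -> str:
    """Look up a credential by ID and render a simple HTML snippet."""
    records = json.loads(data)
    match = next((r for r in records if r["id"] == cred_id), None)
    if match is None:
        return "<p>Credential not found</p>"
    return (f"<h1>{match['name']}</h1>"
            f"<p>{match['course']}, issued {match['issued']}</p>")

print(render_credential(SAMPLE, 1))
```

Swap the ID from 1 to 2 and the output changes instantly, which is exactly the proof-of-concept behaviour I tested.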
Next, I told Manus, “Can you turn this into a website for me?” and gave it more instructions: “Make it look professional, add CSS theming, placeholder images, list courses, etc.” I gave it a link to Word docs about our courses and our website link, and Manus got cracking. It browsed my website like a human—no reading code, just clicking through. It spun up Python scripts, pulled down files, updated its to-do list as it found more, generated placeholder images (I didn’t make those), updated credentials, wrote styles, and even error-checked itself. When finished, it handed over everything with instructions on how to run it. I hosted it and tested—perfect.

Then I styled it myself: added yellow highlights, polished badges with graphic design, and now everyone who’s been on our AI Skills Bootcamp has their badge on LinkedIn with a neat “Add to profile” button that automates everything—sharing on social media included. I did about 10% of that; the agent did the rest! (Someone from the audience asked, “Did you come up with the name Credzilla?” I proudly said, “Yes, yes!” and got a laugh.)
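As an aside, the “Add to profile” button itself is just a pre-filled link. Here's a hedged Python sketch of how such a link is commonly assembled (the parameter names follow the pattern badge platforms typically use for LinkedIn's certification form; treat them as an assumption, and every value below is a placeholder rather than our real badge data):

```python
from urllib.parse import urlencode

def linkedin_add_to_profile_url(cert_name: str, cert_url: str, cert_id: str,
                                issue_year: int, issue_month: int) -> str:
    """Build a pre-filled LinkedIn 'Add to profile' certification link."""
    params = {
        "startTask": "CERTIFICATION_NAME",  # opens the certification form
        "name": cert_name,
        "certUrl": cert_url,   # public page showing the badge
        "certId": cert_id,
        "issueYear": issue_year,
        "issueMonth": issue_month,
    }
    return "https://www.linkedin.com/profile/add?" + urlencode(params)

url = linkedin_add_to_profile_url(
    "AI Skills Bootcamp", "https://example.com/badge?id=1", "1", 2025, 5)
print(url)
```

Clicking a link like that drops the learner straight into LinkedIn's certification form with the fields already filled in, which is all the automation the button needs.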
So, agents are here to stay—and what you’ve seen is probably the worst version you’ll ever see. They’re only going to get better. Developers now have to accept that agents will write code better than us. The question is, how do we manage and use this? How many agents can one person manage? If someone manages ten agents, their output multiplies massively. Really makes you wonder: with code cracked, where do we go next?
Google I/O 2025: What You Need to Know
I wrapped up with a rapid-fire rundown because, annoyingly, Google dropped a ton of announcements after I had already made my presentation, so I had to add them right at the last minute. Here I cover them all, so I’ll be quick.
Perplexity is coming for Google, and two months ago Google asked, “How do we fix this?” Their answer: AI Mode for Search. They tested it on a small scale in the US, and at Google I/O on Tuesday, they announced it’s live—still only in America for now. Soon, when you Google something in the US (and we’ll get it soon here), you’ll get a second tab to flick between the usual search results and Google’s version of Perplexity. You can chat back and forth with Google in that tab.
Gemini and Gemini Live—Google Assistant as we know it is gone. It’s being replaced by Gemini.
Project Astra is Gemini’s visual agent. Imagine walking around your room with the camera on, and Gemini is already Googling things for you in real time. You pass a fridge and ask, “Where did that come from?”—it already knows. Think Robocop mode, scanning the world.
Imagen 4 is their image generator.
Veo 3 is killer! So, AI video creation has come a long way. We used to get some janky, scary videos, but now 1 in 10 or 1 in 20 are genuinely impressive. At a recent Digital Somerset event, I saw a digital avatar appear in person. Veo 3 is the first AI video renderer to produce photorealistic video with synchronized audio—people talking, conversing, everything in sync. It looks as good as it sounds. I’ll share videos on social soon.

Flow is an AI video editor that does all the heavy lifting for you. Current tools make you clip and stitch manually; Flow automates that.
Android XR is like Android but for VR. It lets anyone build VR systems easily with a universal OS—no fuss.
Google Beam is fascinating. HP is the first to implement it: a hologram of the person you’re meeting with appears virtually, live, and you can look around it. Real tech, real now.
Optiplexit is Google’s AI shopping tool—they’re integrating AI deeper into shopping.
AI Subscriptions—meet Ultra at $250/month. Want Veo 3? You pay for it, but only in America for now. AI will do your job, but it’s premium. You won’t get this level of AI for $20/month. The question is, “If AI can replace 10 people, how much is that worth to you?” Do you go digital or keep real people?
Google Meet AI Translate just went live. You talk English in a meeting, and the other person hears you speaking their own language—live—with your mouth movements synced. Mind-blowing.
And finally, Project Mariner brings autonomous web agents to Chrome. You’ll literally tell Chrome to browse the internet, book your tickets, finish tasks—all hands-free. Just say the word and walk away.
My Current 3 x Favourite Tools
I’ll wrap up there! But before I go, I like to share the three apps I’m currently loving—the ones that are hot in my world right now.
- Dinopass: I’d actually forgotten this one existed! We had someone on our cohort show it, and Ellie said, “Hey, I saw this really cool, friendly password generator.” Now, most folks use LastPass or Bitwarden, but if you want a quick way to create strong passwords without those, check out dinopass.com. It’s super flexible—you can generate passwords with a set number of words, dashes, numbers, whatever you want. It’s a neat little tool.
- Perplexity Mobile App: I’ve talked a lot about Perplexity already, but their mobile app is absolutely killer. I use it all the time, just chatting with my phone and getting things done. I leaned on it heavily when I was in the States—just ask and it handles it for you.
- Obsidian: I use this all the time for note-taking and linking ideas together. It even has AI plugins now, which makes it next level.

Those are the three I’m rocking right now, and I’ll share more next time.
Ready to Level Up Your AI Skills?
Our Generative AI for Business Skills Bootcamp is now open for registration! We’re running it through Yeovil College, and it’s free if you’re self-employed. For small businesses (under 250 people), it’s government-funded at just £399, and for larger companies, it’s just over a grand. This is a 10-week course, with new cohorts starting as soon as July!
We’ve also expanded into Dorset and Wiltshire, and are in talks with other areas too. Interest is strong already, so get your name down early. Upcoming start dates include July, September, November, and January.

Find out more and register your interest here
Phew! That was a whirlwind, but I hope this recap gives you a solid sense of everything we covered. AI is moving fast, and staying informed is crucial.
You can find all the slides from the session linked here with this article for a deeper visual dive!
The information presented in this blog post is based on insights from the Microsoft and LinkedIn 2025 Work Trend Index Annual Report, the 2025 State of AI Annual Report, and various other industry updates as of May 2025. Please note that AI developments are rapid, and information may evolve quickly.