The AI Roundup – March 2026 – Part 1
This last week has been a proper mix. Some stuff that made me genuinely excited (Claude building my slides while I worked on another slide simultaneously), some stuff that made me uncomfortable (ChatGPT’s not-so-secret naughty mode), some stuff that made me angry (Burger King telling their staff how to be human via an AI in their ear), and one story from British Columbia that I think deserves far more attention than it got.
🔥 The Big Stories
Anthropic Buys the Startup That Gives AI Eyes
On 25 February, Anthropic announced the acquisition of Vercept, a Seattle-based AI startup whose product was a vision-based computer agent called Vy. If that sounds technical, here’s the plain English version: Vy could see your screen and take control of your mouse and keyboard. Not just read files or run code in the background. Actually look at your screen the way a person would, recognise what’s there, and operate it.
Most computer-use tools today work by reading code. They open a file by navigating the file system. They fill in a form by manipulating the underlying HTML. That works until an app doesn’t expose a neat code layer, which is true of most of the software most people use every day. Vercept’s approach was different. Their technology was built to understand software the way humans understand it: by looking at it. If it sees a button, it can click it. If a dialog box pops up unexpectedly, it knows what to do. That’s a much harder problem to solve, and a much more useful one.
Anthropic’s announcement confirmed that Claude’s computer use scores on the OSWorld benchmark have gone from under 15% in late 2024 to 72.5% today. That’s not a gradual improvement. That’s a step change. The Vy product itself is being shut down on 25 March, and the team of nine, including co-founders Kiana Ehsani, Luca Weihs, and Ross Girshick, are joining Anthropic full time. The race to build AI that can actually operate a computer like a person is very much on, and Anthropic just made a significant move in that direction. I’ve written a longer piece on what this acquisition really means.
Sources: Anthropic (25 Feb), TechCrunch (25 Feb), GeekWire (25 Feb)
Mind Says AI Is Giving People Dangerous Mental Health Advice. They’re Right.
Mind, the mental health charity, launched a year-long AI and Mental Health Commission on 23 February. It’s the first of its kind globally. The trigger was a Guardian investigation into Google’s AI Overviews, which found that AI-generated summaries were serving people genuinely harmful advice on conditions including psychosis, eating disorders, and cancer. Mind’s own information team tested it themselves: within two minutes of searching common mental health queries, the system told them that starvation was healthy.
Mind’s CEO Dr Sarah Hughes said the charity is seeing a growing number of people seeking help after following inappropriate or dangerous advice from AI. Some people are forming emotionally dependent relationships with AI tools that are not designed, regulated, or clinically aligned to provide mental health support. That sentence should give everyone pause.
My view: you can’t necessarily fix the tools quickly, but you can educate people faster. AI will give you advice the way a mate gives you advice. It will make you feel heard and it will sound confident. But there are things that your mate, or your AI, genuinely cannot help you with. Knowing the difference matters. I’ve written a longer piece on why this issue is bigger than a single news story.
Sources: Digital Health (23 Feb), The Guardian (Feb 2026), Social Care Today (24 Feb)
Could OpenAI Have Helped Prevent the British Columbia Shooting?
This is the one that kept me up. In June 2025, OpenAI banned a ChatGPT account linked to Jesse Van Rootselaar due to violent gun-related content that triggered their internal safety systems. Employees reportedly discussed whether to notify authorities. They decided the content did not meet their threshold for a credible and imminent threat. The shooter created a second account, and on 10 February 2026, killed eight people, including six children, in Tumbler Ridge, British Columbia.
OpenAI only linked the second account to the shooter and contacted police after the attack. The Premier of British Columbia publicly said it looked like OpenAI could have helped prevent the tragedy. Canada’s AI Minister called the decision not to report an absolute failure. OpenAI has since committed to updating its reporting thresholds and improving detection of users who create new accounts after being banned.
I want to be careful not to jump to conclusions I’m not qualified to make. We don’t know what was in the account. Asking how a particular gun mechanism works is very different to stating a plan to harm people. What I will say is this: if several employees were already discussing whether to report it, someone had already seen enough to worry. When there’s smoke, you report the smoke. That part I’m clear on.
Sources: OpenAI letter to Canadian officials (26 Feb), CBC News (Feb 2026)
📰 Other News
Claude Built My Slides While I Built My Slides
I tested the Claude plugin for Microsoft PowerPoint this week and it genuinely blew me away. It’s not just a chat panel that sits alongside your presentation. It’s agentic. It can reach in and actually do things. When I opened it on a presentation I’d already started, Claude asked if it could look through the existing slides to understand my theme and style before it started building. It documented my fonts, layouts, header positions, everything, then started working. Twenty seconds later, new slides were appearing in my exact style, right down to the logo placement. It even flagged that some icons on an earlier slide weren’t quite aligned and asked if it could fix them. It did.
The moment that really got me: I was working on one slide while Claude was working on another, simultaneously. Two people, one presentation, neither getting in the other’s way. I averaged a slide every two and a half minutes, for the kind of deck that used to take me hours. If you’re on a 365 subscription, it’s a free install and worth trying immediately.
Source: Anthropic / Claude for Microsoft 365
Check In on Your Code From the Shoe Shop
Anthropic announced remote control for Claude Code, which lets you continue a coding session from your phone while your computer does the work at home. You leave Claude Code running on your machine, pop out, and open Claude on your phone. Because your machine still has access to the codebase, you can ask Claude how things are progressing, issue new instructions, or steer it somewhere different, all without being at your desk. It reconnects automatically if your connection drops, and it’s secured with outbound HTTPS only. Currently available on the Max plan, with Pro coming soon.
Source: Anthropic (24 Feb)
Gemini Can Now Make You a 30-Second Jingle
Google added AI music generation directly into Gemini this week, powered by DeepMind’s Lyria 3 model. You type a prompt and it generates a 30-second clip with auto-generated lyrics, letting you pick style, vocals, and tempo. I had a play and created something that sounded like an 80s cult film that couldn’t quite decide if it was horror or pop. It’s not as polished as Suno yet, but the fact that you can do this inside a chatbot you’re already using is significant. It’s Google planting a flag in a space its chatbot rivals haven’t touched.
Source: Google DeepMind / Lyria 3 (Feb 2026)
Chrome’s Address Bar Just Started Talking Back
Google has baked Gemini AI directly into the Chrome address bar. You can now search, ask questions, and get summaries without leaving the page you’re on. Smart use of space that previously sat idle once you were on a page. My concern is what happens when someone who isn’t particularly tech-confident fires up Chrome one day and their internet literally talks back to them. The tech industry is brilliant at pushing change out without explaining it. That’s not going to get better on its own.
Microsoft Finally Made Copilot Less Confusing
Microsoft has simplified their Copilot naming. If you have a Microsoft 365 business subscription, the AI you already have access to is now clearly labelled Copilot Chat Basic. That’s the equivalent of ChatGPT Free, but enterprise-protected and GDPR compliant. If you want AI that can actually see your emails, your files, and your SharePoint, you upgrade to M365 Copilot Premium at around £20 per month. Basic and Premium. Two words. That’s all it took. Microsoft made the change while I was mid-way through training a company on this, and the whole room just went: oh, I get it now. Entry point for the basic version is still just £4.90 per month as part of Business Basic. The cheapest secure AI on the market.
Source: Microsoft (Feb 2026)
Grokopedia: Wikipedia, but With Fact-Checking
Grok has launched Grokopedia, their AI-verified alternative to Wikipedia. It has hundreds of thousands of entries versus Wikipedia’s millions, but crucially the AI automatically checks entries for validity when they’re added and continues verifying them over time. I found Grokopedia entries appearing in search results across ChatGPT, Claude, and Perplexity, which means adding your business details there could be worth doing before everyone else figures it out. Think of it like Wikipedia in its early days: get in early and establish the source of truth for your brand while it’s still relatively open.
Source: Grok / Grokopedia
💬 Scott’s Soapbox
Burger King Is Using AI to Teach People How to Be Human. I Don’t Like It.
Burger King is piloting an AI voice assistant called Patty inside employee headsets across 500 US restaurants, with plans to roll it out to all US locations by the end of 2026. Patty is powered by OpenAI and lives in the same headset workers use to take drive-through orders. It alerts managers to low stock. It helps staff remember how to make menu items. And it listens for words like “welcome,” “please,” and “thank you” to generate a friendliness score at the restaurant level.
Burger King say it’s a coaching tool, not a scoring system. It’s about hospitality trends, not individual performance. I’m sure that’s true on day one. But the direction of travel matters as much as the starting point. The moment you’re using AI to tell people how to interact with other people, in real time, through an earpiece, you’ve crossed into something that feels qualitatively different to AI helping with operations. That’s AI telling humans how to human. And I don’t think we should just nod that through.
I say all the time at Techosaurus that there are too many companies trying to shoehorn AI into places it doesn’t belong. Some of the best uses of AI are invisible: inventory management, flagging a broken machine, answering an operational question. That part of what Patty does? Fine. It’s the part where AI monitors a human conversation and scores the warmth of it that I find uncomfortable. McDonald’s tried AI voice ordering in their drive-throughs and ended the IBM partnership in 2024 because it got things badly wrong. Sometimes the right answer really is a touchscreen or a QR code. Not everything needs to be smarter. Some things just need to work. I’ve written a longer piece on where I think this one lands.
Sources: Fortune (27 Feb), NBC News (26 Feb), Fast Company (27 Feb)
💡 Try This
This week’s challenge: If you use Microsoft 365 for your business, check which Copilot tier you’re on. Go to m365.cloud.microsoft, look in the bottom left corner next to your name, and it will tell you Basic or Premium. If you’re on Basic, spend ten minutes exploring what it can do. Upload a document and ask it to summarise it. Paste in an email and ask it to draft a reply. That’s free, that’s secure, and that’s AI you already own. Then, if you want to go further, add your business to Grokopedia. You’ll need a free Grok account, but it takes ten minutes and it’s the kind of thing you’ll be glad you did before everyone else catches on.
🎧 Want to Go Deeper?
I co-host a podcast called Prompt Fiction with Reece Preston, where we go long on stories like these every couple of weeks. Chapter 12, Part 1 covers everything above and more, including Claude Sonnet 4.6, the voice mode awkwardness that nobody wants to admit to, and a ChatGPT story that made us both uncomfortable for different reasons. If any of this week’s stories caught your attention, it’s worth a listen.
📅 Come and See Us in Yeovil
If you’re local to Yeovil or nearby in Somerset, the next Digital Hub is on 31 March at Lane’s Hotel. Doors open at 5:30pm, the main session kicks off at 6pm, and we wrap up around 8:30pm.
It’s a relaxed, TED-style evening with live AI demos, a cybersecurity update from our resident expert Adam, and a guest speaker covering intellectual property in the age of AI. That’s a conversation that’s overdue. No jargon, no hype, just plain English and things you can actually take away and use.
We’ve also got dates coming up on 28 May and 7 July if March doesn’t work for you.
Get tickets at yeovildigitalhub.co.uk
Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD
© 2026 Techosaurus LTD. All rights reserved.