Adopting AI Without the Drama: Straight-Talk Lessons from the Exeter Panel

Friday morning at Exeter Science Park had the right kind of buzz. Coffee, name badges, and a room full of people who wanted straight answers. Barclays Eagle Labs hosted a practical, no-nonsense panel on “How can AI revolutionise your business?” and the brief landed well: share what actually works, what to avoid, and how to sequence adoption so SMEs get real value without creating tomorrow’s tech debt.

Event: How can AI revolutionise your business?
When: Friday 19 September, 10:00–12:30 (GMT+1)
Where: Exeter Science Park

In the room:

  • Host: Lucy Bennett, Ecosystem Manager, Barclays Eagle Labs

  • Panel: Colin Dart, Exeter University Innovation Business Acceleration Programme Manager, and TechExeter co-lead; Luke Williams, Group Head of AI, Intergage Group; Andrew Olowude, Co-founder, Xenet; Thomas Gamblin, Founder and AI strategist, The Scalable Creative; and me, Scott Quilter, Co-founder, Techosaurus.

  • Audience mix: SME owners and operators, developers and tech leads, marketers and comms, and a healthy number of advisors who get pulled in when things break.

Below is a full write-up, built around the six big topics the audience raised, followed by an operating playbook you can lift straight into your business, and the five panel-backed tips we closed with.


1) Hallucinations, checks, and specialist agents

A real scenario from the room: the model wrote a clean client email with a Google Maps link and a What3Words link. It sounded confident. The What3Words link, however, pointed somewhere completely different.

What happened, in plain English
Large language models generate fluent text. They are excellent at “what looks right,” not guaranteed facts. When a request mixes several steps, the model can paper over uncertainty with confident wording. That is a hallucination.

The fix is process, not hope
Stop asking the model to do everything in one go. Break it into steps and check at each hop.

  1. Ask it to extract the address.

  2. Ask it to verify the coordinates or postcode.

  3. Ask it to generate the links.

  4. Assemble the email last.

If it is a repeated job, consider a small specialist agent rather than one do-everything chat. A tiny address or What3Words validator that only does that one task will beat a general model trying to juggle five things at once. Five small agents beat one “everything” bot more often than not.
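
To make the step-by-step idea concrete, here is a minimal sketch in Python. Everything in it is a stand-in rather than a prescription: ask_model is whichever approved chat endpoint you already use, and validate_w3w is a hypothetical placeholder for a check you trust, whether that is the official What3Words API, a lookup table, or a human eyeball. The point is the shape: each hop is small enough to verify before the next one runs.

```python
# A minimal sketch of the "extract, verify, then assemble" pattern. ask_model and
# validate_w3w are passed in because both are stand-ins: ask_model is whichever
# approved chat endpoint you use, and validate_w3w is whatever check you already
# trust (the official API, a lookup table, or a human eyeball).
from typing import Callable

def draft_client_email(raw_notes: str,
                       ask_model: Callable[[str, str], str],
                       validate_w3w: Callable[[str, str], bool]) -> str:
    # 1. Extract the address only. One job, easy to eyeball.
    address = ask_model(
        "Extract the postal address. Do not invent facts. "
        "If uncertain, say 'uncertain' and stop.",
        raw_notes,
    )

    # 2. Verify before generating anything that depends on it.
    postcode = " ".join(address.split()[-2:])  # naive UK postcode grab; swap in a real parser
    words = ask_model("Return only the What3Words address for this location.", address)
    if not validate_w3w(words, postcode):
        raise ValueError("What3Words does not match the postcode; stop and check by hand")

    # 3. Generate the link, and 4. assemble the email last.
    link = f"https://what3words.com/{words.lstrip('/')}"
    return ask_model(
        "Write a short client email. Use only the address and link provided; "
        "do not add any other facts.",
        f"Address: {address}\nLink: {link}",
    )
```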

Keep HI + AI
Treat the model like a keen junior: give clear instructions, ask for evidence, and review the output. A simple line like “Do not invent facts. If uncertain, say ‘uncertain’ and ask me to confirm” is surprisingly effective. Add “Validate What3Words against the postcode before inserting the link” and you will catch the classic misfires.

Context kills guesswork
Upload your service areas, product lists, policies, and two or three good examples. Tell the model to stay inside that context. You will get far fewer “Alaska moments” when it knows you do not operate in Alaska.

Jargon buster

  • Hallucination — a confident-sounding but incorrect answer.

  • Agent — a small, task-specific assistant with instructions, tools, and guardrails baked in.


2) “Where do we actually start, and in what order?”

Topic: Crawl → Walk → Run; use-case first; consistency

Choice overload is real. The answer we kept circling back to was simple.

Crawl
Pick one annoying, repeatable task you know well, like meeting notes with action lists, invoicing chasers, or extracting line items from PDFs. Pilot one step. Write a tiny checklist. Share what worked.

Walk
Chain adjacent steps once the checklist is stable. Move the checklist into a Copilot agent or a lightweight custom GPT so the rules travel with the task. Add context packs, like your strategy doc, brand voice guidance, and two strong examples.

Run
Only when the chained steps are consistent should you redesign the whole process around the new capability. Keep a human at the decision points where judgement, compliance, or brand risk bites.

Make consistency boring, then powerful

  • Publish a one-page AI Statement. Say where AI is allowed, where humans must sign off, and what checks apply. This calms nerves, helps hiring and procurement, and removes ambiguity.

  • Run a 15-step content check before anything leaves the building: accuracy, sources, brand tone, UK English, risk flags, and sign-off.

  • Provide context by default. Attach strategy, policy, examples, and “things we never say.”

  • Use agents for repeatability. Copilot or custom agents can carry your context, security notes, and style guide across the team on a pay-as-you-go basis.

A helpful gut-check
If you need the same output every time, either standardise prompts and context ruthlessly, or buy a productised service with service levels. The real question is not just “can we,” but “should we.”

Jargon buster

  • Copilot agent — a configurable assistant in Microsoft 365 that can use your SharePoint and Teams content, respect permissions, and execute scoped workflows.

3) “Looking at models, wrappers, and apps, will the big model companies eat everyone’s lunch?”

Topic: Where value accrues; pricing; open-source pressure

Short answer: the big vendors mostly sell tokens and infrastructure. Their business works best when thousands of us build on top, so they usually prefer being the platform rather than owning every end-user tool.

What that means for SMEs and tool-makers

  • Expect pricing tiers and enterprise deals at the top of the market.

  • Specialists will keep winning on workflow, UX, compliance, and domain data.

  • Your durable edge is know-how, relationships, and the data that makes your outcomes better, not a thin UI over someone else’s API.

Open-source keeps things honest
Good local models exist and will keep getting better. That competitive pressure helps on price and gives you private deployment options when privacy or latency matters.

Jargon buster

  • Token — the metered unit models charge for when processing text. Think electricity, not a flat fee.

4) How to choose tools without chasing hype

Task-fit first, resist churn, then standardise

Models are in constant flux. One week your feed says “X is unbeatable,” the next week “Y just crushed the benchmarks.” That’s background noise. If the tool you use already fits the job, feels natural, and gives you reliable output, stick with it. Do not jump ship for a shiny feature you might use twice. That’s how subscription creep starts, and you end up paying for overlap you do not need.

Think like a builder choosing tools. Two builders open their vans: one has DeWalt, the other Makita. Both can build the same extension. Preference, feel, and habit decide what they reach for. Your AI stack is similar. Pick what you are comfortable using, what your team enjoys, and what delivers the right output for your work. Then standardise.

A simple policy that keeps you honest

  • Pick the best tool for the task you actually do, not the one with the noisiest launch.

  • Standardise on a default model and front end for each job type, write it down, and make it easy to follow.

  • Review quarterly, not weekly. Change only when a new option gives a clear, measured win on your tasks.

How to avoid subscription creep

  • Map your jobs to tools, not the other way around. If two products serve the same job, keep one.

  • Run a one-week bake-off before buying anything new. Same prompts, same data, side by side; a minimal harness sketch follows this list. If the outputs are equivalent, do nothing.

  • Track total cost of change. Switching costs include retraining, prompt refits, agent rebuilds, and security reviews. If those costs outweigh the gains, park the switch.
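
To make the one-week bake-off above concrete, here is a minimal sketch. The two call_model functions are hypothetical stand-ins for however you reach each candidate, whether an SDK, an internal gateway, or a local install; the harness just guarantees identical inputs and gives reviewers a side-by-side sheet to score against your usual checklist.

```python
# A minimal bake-off harness: same prompts, same data, outputs side by side.
# call_model_a and call_model_b are hypothetical stand-ins for however you
# reach each candidate model (an SDK, an internal gateway, or a local install).
import csv
from typing import Callable

def run_bake_off(prompts: list[str],
                 call_model_a: Callable[[str], str],
                 call_model_b: Callable[[str], str],
                 out_path: str = "bake_off.csv") -> None:
    """Run identical prompts through two models and write a side-by-side CSV
    for the team to score against the usual review checklist."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt", "model_a", "model_b", "winner (fill in)"])
        for prompt in prompts:
            writer.writerow([prompt, call_model_a(prompt), call_model_b(prompt), ""])
```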

When switching is actually worth it

  • Consistent accuracy lift on your real workloads, not on internet leaderboards.

  • Material speed gains that change your cycle time, not a few seconds here or there.

  • Compliance or privacy benefits you can point to in policy and audits.

  • A feature that unlocks a whole use case you could not do before, with a plan to decommission the overlap.

Keep a steady front end for research

  • Use a search-first tool like Perplexity when work starts with sources. It lets you try different models in one place, then you can still standardise on the one your team prefers.

  • If you do adopt a Pro plan, set a calendar reminder to reassess before renewal so you do not stack forgotten subscriptions.

Checklist to keep you out of the hype slipstream

  1. Write the task you care about in one sentence.

  2. Test it across 2 or 3 models, same inputs, same review checklist.

  3. Choose the steady performer your team likes using.

  4. Lock it as the default, document prompts and context, and move on.

  5. Revisit in 90 days, not tomorrow, unless a clear breakage or requirement forces it.

Bottom line: standardise on what works, resist novelty for novelty’s sake, and measure changes on your work, not on someone else’s leaderboard. Your goal is dependable outcomes and a clean stack, not a museum of half-used subscriptions.


5) What about security and data policy?

Use business-grade tools for business tasks, read the terms, add light guardrails

If the work is sensitive, keep it on business-grade tools. On the panel we called out Microsoft Copilot “agents” for exactly this reason: they let you preload context and risk information and, crucially, they require a higher security level to build and can be deployed pay-as-you-go across the org, so you are not paying when people are not using them.

We also flagged a clear caution around DeepSeek: running it locally on your own machine is fine; using the website or app is not a good idea for confidential work because of data-retention terms. Separately, Perplexity runs its own DeepSeek variant in the US, which is another reason to be explicit about where your data goes.

As a simple rule, treat “free” web apps as public by default. If you would not put it on a postcard, do not paste it into a consumer endpoint. Use local installs or enterprise options for anything sensitive, and keep your eyes on the small print before you pilot new tools.

A little friction helps, too. Bake guardrails into your prompts and workflows: tell the model not to hallucinate, keep creativity low for compliance-type tasks, and require it to flag uncertainty. Teams on the panel also run a short content check before anything leaves the building, which catches accuracy, tone, and risk issues early.
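
As an illustration only, here is a minimal sketch of that pattern. It assumes the OpenAI Python SDK purely as an example; substitute whichever business-grade endpoint your data policy actually allows, and adjust the guardrail wording and model name to your own AI Statement.

```python
# A minimal sketch of "guardrails plus low creativity", assuming the OpenAI
# Python SDK purely as an example; swap in whichever business-grade endpoint
# your data policy allows, and adjust the guardrail wording to your own
# AI Statement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAILS = (
    "Do not invent facts. If uncertain, say 'uncertain' and ask me to confirm. "
    "Flag anything that needs human sign-off. Use UK English."
)

def compliance_draft(task: str, context: str) -> str:
    """Low-creativity draft for compliance-type tasks: guardrails in the system
    prompt, your own context attached, temperature kept at zero."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # example model name; use whatever your policy approves
        temperature=0,         # keep creativity low for compliance-type work
        messages=[
            {"role": "system", "content": GUARDRAILS},
            {"role": "user", "content": f"Context:\n{context}\n\nTask:\n{task}"},
        ],
    )
    return response.choices[0].message.content
```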

Bottom line: business tools for business tasks; prefer local or enterprise-grade for sensitive work; avoid consumer “free” endpoints for anything confidential; and add light guardrails so outputs stay inside your lines.


6) Skills and culture: build capability from the inside

Jobs are shifting, and the smart move is to grow capability where you already have trust and context. The panel kept coming back to the same pattern: find the people who are already hands-on, make them visible, and give the rest of the team a simple, safe way to learn in public.

Start with the champions you already have.
Plenty of staff use AI at home, even if work has not caught up yet. Look inward, ask who is using what in their day-to-day life, and nurture those habits into team practice. Consultants will look for these advocates when they come in, so identify them early and give them a mandate to share what works. 

Use reverse mentoring to spread fluency.
A practical example from the room: graduates trained senior staff on a new platform, with light steering from managers. It worked because the juniors had native comfort with the tools, and seniors brought the business judgement. That pairing accelerates learning without risking quality. 

Make learning safe, simple, and a bit playful.
When teaching groups of SMEs, we start with simple day-to-day tasks. These are low risk, they show what the model can and cannot do, and they get people used to delegating in clear steps. Managers often take to this faster, because they already give structured instructions to people. The lesson is the same for everyone: treat the model like a worker, be explicit about what to do and what not to do, and break work into small pieces you can check.

Explain that people are ahead of businesses right now.
Another theme: adoption is uneven. Many individuals are comfortable with AI at home, while their employers are still figuring out policy, budget, and priorities. That gap is an opportunity. If you do not harness those existing skills, they will arrive anyway, just in an unmanaged way. 

Publish a one-page AI Statement to stabilise culture.
A simple statement, co-written by a small cross-level group, clarifies where AI is encouraged, where humans must sign off, and what is out of bounds. It also helps with retention and hiring. One panellist said a candidate applied because the company had an AI statement on its website, and procurement teams now ask for it during due diligence. Ring-fence the human work you value, and people will see a future for their role. 

Be honest about usage patterns.
Stats discussed on the day suggested business use is still a minority compared with personal use. That is a signal to formalise what already works for people, then pull it into team workflows with light guardrails. 

What to do next, in plain steps

  • Ask your team, privately and then in the open, where they already use AI at home. Invite three quick demos at the next all-hands. 

  • Pair a junior “power user” with a senior owner of a real process, and run a two-week reverse-mentoring sprint. Capture what transfers. 

  • Run the day-to-day tasks as a warm-up, then switch to one real task, chunked into checks. Keep the tone practical, not performative.

  • Draft your one-page AI Statement with a cross-level group, and publish it internally first. It calms nerves, helps hiring, and answers procurement early. 

Bottom line: the capability you need is already walking around your building. Put it to work with reverse mentoring, small safe exercises, and a one-page statement that says what good looks like. Then turn those habits into standard practice, with human judgement kept where it matters.


Your operating playbook, ready to lift

This week — Crawl

  1. Pick one workflow you own and pilot a single step.

  2. Draft a one-page AI Statement and share it internally.

  3. Write a checklist for that task: accuracy, sources, tone, privacy, and sign-off.

  4. Trial two tools on the same job, compare, and standardise on the winner.

  5. Share one win at the next team meeting, and add one curiosity question to the agenda.

Next month — Walk

  • Chain two steps end-to-end.

  • Move your checklist into a Copilot or custom agent so the rules travel with the task.

  • Add context packs: strategy, brand voice, templates, and two good examples.

Quarter — Run

  • Redesign the process around what now works.

  • Add specialist agents for validation, formatting, and compliance checks.

  • Formalise reverse mentoring, and track cycle-time and error-rate improvements.


Reference: a 15-step content check you can adapt

  1. Purpose and audience clear

  2. Facts verified, names and dates correct

  3. Sources cited or linked

  4. No speculation presented as fact

  5. Sensitive data removed or masked

  6. Jargon explained or avoided

  7. Tone on brand; plain English

  8. UK spelling and usage

  9. Claims match capability, no over-promising

  10. Legal, compliance, and sector rules respected

  11. Accessibility basics, clear headings and alt text if needed

  12. Style consistency, units and capitalisation

  13. Prompt or agent version recorded

  14. Human reviewer sign-off recorded

  15. Final read-aloud pass to catch clunky sentences
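
If you want the check to be hard to skip, here is a minimal sketch of it as a pre-send gate. The wording mirrors the list above; whether each item is answered by a human reviewer, a script, or both is up to you.

```python
# A minimal sketch of the 15-step check as a pre-send gate. The wording mirrors
# the list above; whether each item is answered by a human reviewer, a script,
# or both is up to you.
CONTENT_CHECK = [
    "Purpose and audience clear",
    "Facts verified, names and dates correct",
    "Sources cited or linked",
    "No speculation presented as fact",
    "Sensitive data removed or masked",
    "Jargon explained or avoided",
    "Tone on brand; plain English",
    "UK spelling and usage",
    "Claims match capability, no over-promising",
    "Legal, compliance, and sector rules respected",
    "Accessibility basics, clear headings and alt text if needed",
    "Style consistency, units and capitalisation",
    "Prompt or agent version recorded",
    "Human reviewer sign-off recorded",
    "Final read-aloud pass to catch clunky sentences",
]

def outstanding_checks(answers: dict[str, bool]) -> list[str]:
    """Return every check not yet passed; an empty list means clear to send."""
    return [item for item in CONTENT_CHECK if not answers.get(item, False)]
```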


Quick jargon guide

  • LLM — large language model, the language engine behind tools like ChatGPT.

  • Token — the metered billing unit models charge for when processing text.

  • Agent — a small, task-specific assistant with instructions and tools.

  • Guardrails — constraints you set in prompts or tooling to limit behaviour.

  • Context pack — the docs and examples you attach so answers live in your world.

  • Tech debt — future pain from quick fixes today. Avoid by piloting, standardising, and hardening.


Five panel-backed tips to get moving, thoughtfully

  1. Start small, then scale, and always keep a human in the loop
    Pick one repeatable, low-risk task, learn how AI helps, then chain wins together. Treat the model like a junior helper, not an autopilot.

  2. Publish a one-page AI Statement
    Set out how you will and will not use AI, where humans must stay involved, and what checks you expect. It helps with culture, hiring, and procurement, and it removes ambiguity across teams.

  3. Build guardrails, add context, and check outputs
    Break big requests into steps, give the model your documents for context, and run a consistent review checklist before anything goes out the door. Custom agents or Copilot agents can embed that context and policy for consistency.

  4. Choose tools by task, not hype, and mind data policy
    Different models shine at different jobs, so test against your real tasks. Explore search-first tools like Perplexity, try local or open-source where appropriate, and be cautious with hosted apps that retain data. If you run DeepSeek locally, fine; avoid the web or app versions for sensitive work.

  5. Grow capability from the inside, then bring in help where it counts
    Spot the people already using AI in their personal lives, make them champions, and try reverse mentoring. If you need acceleration, lean on specialist consultants or sector programmes to map use cases and de-risk adoption.


Final thought
Do not rush, and do not freeze. Start small, then scale, and keep a human at the decision points. Publish your AI Statement, use a checklist, and choose tools by fit rather than fashion. That is how you get real value quickly without creating tomorrow’s tech debt.