Human in the Loop, and the Ethics of Decision-Making
This is The AI Advent Series, a five-week run of practical reflections on how AI is actually being used day to day. Each piece looks at one theme that keeps coming up in my work, in our bootcamps, and in real conversations with people trying to make sense of this technology.
Introduction
One pattern that keeps resurfacing — whether in training sessions, consultancy work, or interviews like the one I gave this week — is the assumption that AI can, or should, make decisions for us. People seem relieved at the idea of handing decisions over to a machine, as if removing the human element will somehow make everything cleaner, faster, or more objective.
It’s tempting, especially when you see how quickly AI can process information. But that temptation is exactly where things start to go wrong.
Because here’s the part most often overlooked: AI cannot carry responsibility. Only humans can. And responsibility is the anchor of ethical decision-making.
If that anchor disappears, the consequences can be enormous.
AI Output Is Never the Final Answer
Whenever we teach AI literacy, we make this point clear from day one: AI’s output is never the finished product. At best, it’s 80 percent.
That last 20 percent — the part that requires judgement, checking, interpretation, and context — is where all the human value lives.
Anyone can copy and paste an AI response and move on, but that isn’t intelligent use of the tool. It’s laziness dressed up as efficiency. And when the stakes are high, that laziness becomes dangerous.
A machine cannot:
- Understand the nuance of a situation
- Weigh human impact
- Consider ethics
- Detect hidden risk
- Challenge itself
- Defend its decision
Only a person can do that.
Which is why the “human in the loop” isn’t optional. It’s essential.
The Deloitte Example: What Happens When You Remove the Human
One of the clearest examples of this went viral earlier this year.
Deloitte produced a detailed report for the Australian government about its welfare compliance system. On the surface, it looked legitimate: professionally formatted, well structured, and delivered with confidence.
Except parts of it were pure fabrication.
It included:
- Fake judges’ names
- Fake cases
- Fake statistics
- Fake legal references
Someone had pushed the entire brief through an AI tool, assumed the output was correct, and sent it on for publication. No fact-checking. No verification. No ownership.
The consequences were real: Deloitte agreed to repay part of its fee, the report had to be reissued with corrections, and the episode became a public scandal.
This wasn’t an AI failure. It was a human failure — fuelled by misplaced trust in a tool that was never meant to replace decision-makers.
When the human step is removed, integrity goes with it.
Ethical Decisions Must Stay With Humans
There are certain areas where human involvement isn’t just recommended; it’s non-negotiable. These are the spaces where decisions affect people directly:
- Hiring
- Firing
- Healthcare
- Finance
- Legal rulings
- Welfare decisions
- Safety-critical environments
- Anything involving personal data
AI can surface patterns, summarise information, or highlight anomalies, but it cannot decide. It has no concept of fairness, harm, or proportionality. It cannot feel the weight of a choice.
And when money, wellbeing, or safety is on the line, that weight matters.
The moment a machine takes that responsibility is the moment accountability disappears.
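In practice, this boundary works best when it is built into the workflow rather than left to habit. Below is a minimal sketch in Python of a review gate that refuses to finalise decisions in high-stakes categories without a named human reviewer; the category names, the `Decision` structure, and the `finalise` function are all illustrative assumptions, not any real system’s API.

```python
# Minimal sketch of a human-review gate for high-stakes decision categories.
# Category names and structure are illustrative assumptions, not a real API.

from dataclasses import dataclass

# Categories where a human must sign off before anything is finalised.
HIGH_STAKES = {
    "hiring", "firing", "healthcare", "finance",
    "legal", "welfare", "safety", "personal_data",
}

@dataclass
class Decision:
    category: str
    ai_recommendation: str      # what the model suggests
    human_reviewer: str | None  # who signed off, if anyone
    approved: bool = False

def finalise(decision: Decision) -> Decision:
    """Finalise a decision, refusing to proceed without human sign-off
    in any high-stakes category."""
    if decision.category in HIGH_STAKES and decision.human_reviewer is None:
        raise PermissionError(
            f"'{decision.category}' is high-stakes: a named human "
            "reviewer is required before this decision is finalised."
        )
    decision.approved = True
    return decision
```

The design choice matters more than the code: the system simply cannot emit a final decision in those categories without a person attached to it.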
The Illusion of Objectivity
A lot of leaders assume that involving AI makes decisions more “objective”. They imagine removing human bias by swapping it for machine logic.
But the truth is simpler and more uncomfortable:
AI is only ever as unbiased as the data it was trained on. Which means bias doesn’t disappear — it becomes harder to spot.
Keeping a human in the loop isn’t just about ethics. It’s about visibility. If something looks questionable, a person can challenge it. AI cannot challenge itself.
Ethical decision-making relies on that loop. Without it, bad decisions look clean, even when they’re harmful.
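A toy example makes the mechanism concrete. Everything below is invented for illustration; the sketch just shows that a “model” fitted to skewed historical decisions reproduces the skew, now dressed up as a neutral score.

```python
# Toy illustration: a "model" trained on biased historical decisions
# simply reproduces the bias. All data here is invented for the example.

# Historical approval decisions, skewed against group B.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    """'Train' by learning each group's historical approval rate."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [ok for g, ok in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)
print(model)  # approval rates: A 0.8, B 0.4 -- the historical skew, unchanged

# The model's "objective score" for group B is just yesterday's bias,
# returned with a decimal point and an air of authority.
```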
Why Transparency Matters More Than Speed
We live in a culture obsessed with speed. Faster decisions, faster responses, faster output. AI fits perfectly into that mindset. But faster isn’t always better.
Ethical decisions need:
- Time
- Context
- Nuance
- Consideration
- Responsibility
- Transparency
AI can help with the first: gathering and processing information quickly buys you time. Only humans can deliver the other five.
A transparent process shows:
- Who made the decision
- Why they made it
- What informed it
- What alternatives were considered
- Who holds accountability
AI cannot provide any of that clarity. It can only provide content.
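One way to hold on to that clarity is to make the record a first-class object in the workflow. Here is a minimal sketch capturing the five points above; the field names are my own invention, not a standard schema.

```python
# Minimal sketch of a decision audit record capturing the five points above.
# Field names are illustrative, not a standard schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decided_by: str          # who made the decision
    rationale: str           # why they made it
    inputs: list[str]        # what informed it (sources, AI output)
    alternatives: list[str]  # what alternatives were considered
    accountable_owner: str   # who holds accountability
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = DecisionRecord(
    decided_by="J. Smith, Claims Lead",
    rationale="Appeal upheld: original assessment missed updated income data.",
    inputs=["AI summary of case file", "verified payroll records"],
    alternatives=["Reject appeal", "Request further evidence"],
    accountable_owner="Claims Department Head",
)
```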
Designing Systems That Keep Humans at the Centre
What we teach in our bootcamps is that you should never blindly hand a decision to a machine. Instead, you design the workflow intentionally:
AI handles:
- Drafts
- Data
- Summaries
- Analysis
- Pattern detection
- Recommendations
The human handles:
- Verification
- Judgement
- Ethics
- Interpretation
- Responsibility
- Final decision
That structure keeps the benefits of AI — speed, scale, consistency — while retaining the human layers that protect fairness and quality.
It’s not about replacing judgement. It’s about augmenting it.
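Put together, the loop can be expressed as a small pipeline. A minimal sketch, where `generate_draft` and `human_review` are hypothetical stand-ins for your actual AI call and review process:

```python
# Minimal sketch of a human-in-the-loop workflow: the AI produces,
# the human verifies and decides. `generate_draft` and `human_review`
# are hypothetical stand-ins, not real library calls.

def generate_draft(brief: str) -> str:
    """Stand-in for an AI call: drafts, summaries, analysis."""
    return f"[AI draft based on: {brief}]"

def human_review(draft: str) -> tuple[bool, str]:
    """Stand-in for the human step: verification, judgement, ethics.
    Returns (approved, final_text). Here it just prompts on the console."""
    print(draft)
    verdict = input("Approve as-is, edit, or reject? [a/e/r] ").strip().lower()
    if verdict == "a":
        return True, draft
    if verdict == "e":
        return True, input("Enter the corrected version: ")
    return False, ""

def run_workflow(brief: str) -> str | None:
    draft = generate_draft(brief)          # AI: speed, scale, consistency
    approved, final = human_review(draft)  # Human: the last 20 percent
    if not approved:
        return None                        # nothing ships without sign-off
    return final                           # a person owns the final output
```

The shape is the point: the AI step can be as sophisticated as you like, but nothing ships without passing through `human_review`.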
Where People Get This Wrong
Most of the problems I see come from three behaviours:
1. Blind trust. Treating AI output as fact.
2. Overconfidence. Assuming AI will “do the thinking”.
3. Abdicating responsibility. Blaming the system when things go wrong.
None of these are technical issues. They’re cultural issues. They show why ethical literacy — not technical skill — is going to matter most over the next few years.
Why Human in the Loop Isn’t Just Ethical, It’s Necessary
After teaching more than a hundred learners across every sector you can imagine, one thing has become abundantly clear:
AI extends human capability, but it cannot replace human responsibility.
People often underestimate the value of judgement until they see what happens when it’s removed. And once you understand that, the ethical boundaries become obvious.
Humans should make decisions because only humans:
- Understand consequence
- Carry accountability
- Recognise nuance
- Interpret emotion
- Weigh values
- Communicate reasoning
AI supports all of that, but it cannot embody any of it.
Conclusion
As AI becomes more woven into everyday work, the temptation to outsource decisions will grow. But the line is clear. Machines can assist. They can augment. They can accelerate. But they cannot take responsibility.
Keeping a human in the loop isn’t a limitation — it’s the safeguard that keeps the technology useful, fair, and trustworthy.
The future isn’t machine-led decision-making. The future is human judgement, enhanced by intelligent tools.
AI will not replace ethical decision-making. It will simply make the human role in that process more important than ever.