Two AI Companies, Different Approaches to Advertising at the Super Bowl
The Super Bowl has long been a stage for brands to show off their creativity and deep-pocketed ambitions. This year, two major players in the AI world—OpenAI and Google—took centre stage with very different approaches. Below, I break down their strategies, share my thoughts in dedicated sections, and provide direct YouTube embeds so you can watch the adverts.
OpenAI: Flexing Cash and Celebrating Human Creativity
OpenAI made its TV debut with a 60‑second spot that positions its technology alongside humanity’s greatest innovations. Although the company initially used its text‑to‑video tool, Sora, for early concept work, the final advert was crafted entirely by human artists. This deliberate choice reinforces an important point: AI is here to help us unlock creative potential, not to replace the human touch.
OpenAI’s impressive investment is on full display here. With a hefty price tag of around $14 million for the 60‑second first‑half placement, and with SoftBank’s funding round topping $40 billion (a figure that dwarfs even Microsoft’s investment), OpenAI is clearly flexing its newly earned cash in a very public way. It’s a bold move in a crowded market, signalling that if you’re going to invest in your brand, do it with authenticity and craftsmanship.
Watch OpenAI’s Super Bowl Ad:
My Thoughts:
I love that OpenAI isn’t content to rely solely on AI to generate its final product—even though Sora helped spark the creative process. This choice perfectly echoes what we’ve been teaching over the last year: AI should be a tool that accelerates our ideas, not a crutch that replaces human ingenuity. OpenAI’s advert shows that while you can use AI to prototype, augment, and support your skills, the final message must carry that unmistakable human nuance and final polish. It’s a brilliant flex of both financial muscle and creative sensibility.
Google: A Cautionary Tale of AI Hallucinations and Data Regurgitation
Google’s campaign for its Gemini AI tool tells a very different story. In one of its regional ads—for Wisconsin Cheese Mart—the company showcased Gemini on a Pixel 9, using the AI to generate a product description. The ad initially featured a wildly inaccurate statistic, claiming that Gouda cheese accounts for 50 to 60 percent of the world’s cheese consumption. Naturally, this claim raised eyebrows (and rightly so), leading to swift public criticism and a subsequent edit by Google.
But here’s the kicker: further scrutiny revealed that the product description wasn’t freshly generated at all; it was regurgitated from text that had been on the cheese mart’s website for years. Before anyone tries to point out the flaws in the technology, this is exactly how generative AI works: it operates on probabilities drawn from its training data, and where that data has gaps it makes probable leaps to fill them. And let’s be fair: your data and mine on Gouda from Wisconsin is pretty limited too, right?
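To make that “probable leaps” point concrete, here is a deliberately tiny, hypothetical sketch in Python. The phrases and probabilities are invented purely for illustration (this is not how Gemini or any real model is implemented), but the principle holds: a language model emits the statistically most likely continuation of a phrase, whether or not that continuation is true.

```python
# Toy next-word model. The probabilities below are invented for illustration;
# a real model learns billions of parameters from its training data, but the
# core behaviour is the same: rank possible continuations by probability and
# emit the most likely one, with no built-in notion of truth.
next_word_probs = {
    ("gouda", "accounts", "for"): {
        "50": 0.6,       # most probable continuation in our (flawed) toy data
        "a": 0.3,
        "roughly": 0.1,
    },
}

def most_probable_continuation(context):
    """Return the highest-probability next word for a given context."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

# The model "confidently" picks its best statistical guess...
print(most_probable_continuation(("gouda", "accounts", "for")))
```

The point of the sketch is that nothing in the selection step checks facts; if the training data (or a website scraped years ago) over-represents a claim, the model will repeat it with full confidence.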
To clarify, I’m not taking sides here with Google; I’m using this as a prime example to drive home the lessons we’ve been teaching about the limitations of AI, its training sets, and its biases.
Watch Google’s Wisconsin Cheese Mart Ad:
My Thoughts:
Google’s mishap with the Gouda stat is a textbook case of AI hallucination. The system confidently spits out information that’s not only unverified but outright wrong, which is exactly why we always stress to our clients and students that you must fact‑check AI outputs. When AI gets something wrong, it does so with all the confidence of being 100% right, and that’s both fascinating and frustrating.
This incident perfectly drives home the lesson: while AI is an incredibly powerful tool, the human element is essential to ensure authenticity and accuracy.
Final Reflections
These two Super Bowl campaigns offer a fascinating snapshot of how AI is being integrated into advertising. OpenAI’s approach flexes financial muscle and celebrates the irreplaceable human touch, while Google’s experience serves as a cautionary tale about the pitfalls of relying too heavily on AI without proper oversight.
Both cases are invaluable learning points. They remind us that while AI can be a powerful assistant, it is not infallible, and it’s not here to replace the work of humans. Whether you’re using it to generate ideas or content, the human element remains crucial to ensure authenticity and accuracy.
Feel free to share your thoughts in the comments below or ask any questions. Let’s keep the conversation going about how best to harness AI without losing our human touch!