
AI F*ck Ups: 6 Lessons in Customer Experience Disasters

February 3, 2025
James Carr

AI is here. It's the buzzword in every boardroom. It's the shiny new gadget every department just can't wait to try. That is, until it blows up in your face.

Brands are scrambling to find ways to integrate AI into their customer experience strategies, hoping to streamline services, cut costs, and create those all-important personalised experiences. But when AI goes wrong, it doesn’t just fail quietly in the background; it crashes and burns spectacularly, often dragging a brand’s reputation with it.

In this post, we're diving into 6 times AI messed up big time. And we're not just talking about minor hiccups. We're talking about major PR disasters that sent companies scrambling for damage control.

Whether it’s offensive chatbots or AI systems that just couldn’t grasp the basics of human decency, these are the f*ck ups that serve as cautionary tales for anyone thinking of automating their customer interactions.

1. Microsoft's Tay: The Twitter Disaster

What Happened?

In 2016, Microsoft unleashed Tay, an AI chatbot designed to engage with Twitter (now called ‘X’) users… and let’s just say things didn’t exactly go as planned.

Tay was supposed to learn from its interactions with users, gradually becoming more conversational and "human-like." What Microsoft didn’t factor in? The internet is full of trolls. Within 16 hours, Tay was spewing out some of the most vile, racist, and sexist content imaginable. It didn’t take long for screenshots of Tay’s worst tweets to start circulating, turning what was meant to be an AI PR win into a total disaster.

Tay's ability to mimic human speech so quickly was impressive, but the technology had no moral compass. And because the internet is a wild place, users took advantage of its lack of boundaries, teaching it to behave like the worst kind of troll. Tay was pulled offline just as quickly as it had gone live, and Microsoft was left apologising for the whole debacle.

The Lesson for CX:

  • Moderation is key: You wouldn't give a new employee complete control over customer interactions without some oversight, so why do it with AI? Chatbots need boundaries! (A sketch of one such gate follows this list.)
  • Test, test, and test again: AI’s unpredictability in customer-facing situations means thorough testing is a non-negotiable.
  • Hot take: AI needs boundaries—unleashing it on the internet without moderation is like sending a toddler into a knife factory.
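To make the "boundaries" point concrete, here's a minimal Python sketch of the kind of moderation gate Tay lacked: screening user messages before they're allowed to influence what the bot learns or repeats. Everything here is hypothetical, with a crude keyword check standing in for a real toxicity classifier.

    # Hypothetical moderation gate: messages must pass these checks
    # before the bot is allowed to learn from (or echo) them.
    BLOCKED_TERMS = {"badword1", "badword2"}  # stand-in for a real blocklist

    def toxicity_score(message: str) -> float:
        """Crude stand-in for a real toxicity classifier or moderation API."""
        words = set(message.lower().split())
        return 1.0 if words & BLOCKED_TERMS else 0.0

    def safe_to_learn_from(message: str, threshold: float = 0.5) -> bool:
        """Only let content scoring below the threshold shape the bot."""
        return toxicity_score(message) < threshold

    incoming = ["lovely weather today", "badword1 badword1 badword1"]
    training_pool = [m for m in incoming if safe_to_learn_from(m)]
    print(training_pool)  # -> ['lovely weather today']

A real system would layer a trained classifier, rate limits, and human review on top, but even a gate this simple stops a bot from parroting the worst of its inputs verbatim.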

2. Tesla's Autopilot: When AI Meets Safety Concerns

What Happened?

Tesla's Autopilot feature has been marketed as the future of driving, but in 2016, that future hit a serious bump in the road. A Tesla Model S operating in Autopilot mode crashed when the car's sensors failed to distinguish a white truck from the bright sky behind it. The AI system misread the situation entirely, and the collision was fatal.

While this wasn't a typical customer service failure, the implications for CX are clear.

This wasn’t just an AI misunderstanding someone’s request or delivering a poorly worded email—this was a life-and-death mistake. It highlighted the limitations of even the most sophisticated AI systems when it comes to understanding context and making complex, split-second decisions. Tesla found itself under intense scrutiny, and this example is a stark reminder that when you hand over control to AI, you’d better be sure it can handle the responsibility.

The Lesson for CX:

  • Human oversight is essential: No matter how advanced the AI technology, someone needs to be watching. Automation is great, but when lives (or brand reputations) are at stake, humans still need to be in the loop.
  • Don't over-rely on AI: In critical areas, like safety or high-stakes customer interactions, AI should be there to assist—not to replace human judgement.
  • Hot take: AI can create seamless experiences—until it misreads the situation and drives your brand straight into a wall.

3. Amazon's Recruitment Tool: AI with Bias

What Happened?

In a bid to modernise its hiring, Amazon built an AI recruitment tool in 2014 designed to sift through CVs and pick out the best candidates.

Sounds like a game-changer, right? Well, not quite.

After years of development, it emerged that the AI had learned a bias against women. It was penalising CVs that mentioned "women's" organisations and prioritising CVs from men, all because it had been trained on data that reflected years of male-dominated hiring.

So, what’s the takeaway for CX here? If your AI is trained on biased data, it will perpetuate those biases in every interaction, whether it’s selecting job candidates or interacting with customers. Amazon had to scrap the tool, and it was a PR black eye for a company that’s meant to be at the cutting edge of tech innovation.

The Lesson for CX:

  • Data matters: Rubbish in, rubbish out. AI is only as good as the data it's trained on, so make sure it's representative of the customers you serve. (A simple bias audit is sketched after this list.)
  • Ethics and AI go hand in hand: Brands need to think about the long-term impact of their AI decisions, especially when it comes to fairness and inclusivity.
  • Hot take: AI should be the great equaliser, not a perpetuator of biases—it’s the equivalent of hiring a bouncer who’s secretly checking IDs based on stereotypes.
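What would checking for that kind of skew actually look like? Here's a hedged Python sketch using the "four-fifths" rule of thumb from employment law: compare selection rates across groups and flag big gaps. The data and names are invented for illustration, not Amazon's.

    from collections import Counter

    # Invented example data: (group, was_selected_by_the_model)
    decisions = [("f", False), ("f", True), ("f", False), ("f", False),
                 ("m", True), ("m", True), ("m", False), ("m", True)]

    def selection_rates(decisions):
        totals, selected = Counter(), Counter()
        for group, picked in decisions:
            totals[group] += 1
            selected[group] += picked
        return {g: selected[g] / totals[g] for g in totals}

    rates = selection_rates(decisions)            # {'f': 0.25, 'm': 0.75}
    impact_ratio = min(rates.values()) / max(rates.values())
    if impact_ratio < 0.8:                        # the four-fifths rule of thumb
        print(f"Possible disparate impact (ratio {impact_ratio:.2f}); audit the training data.")

On the toy data above the ratio comes out at 0.33, well below the 0.8 threshold, which is exactly the kind of red flag that should halt a rollout.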

4. Snapchat’s Racist Filter: AI Misjudgement in the Spotlight

What Happened?

In 2016, Snapchat released a face filter that was supposed to be playful. Instead, it turned users into an Asian caricature, sparking outrage as a digital version of "yellowface", an old and deeply offensive racial stereotype. The AI behind the filter failed to grasp the cultural sensitivities and implications of such an image, and Snapchat quickly found itself in hot water.

This mishap is a lesson in how AI can get it wrong—big time—when it comes to understanding human context and cultural nuance.

While AI might be great at recognising patterns, it’s not so great at understanding what’s socially or culturally appropriate. Snapchat had to pull the filter and issue an apology, but the damage was already done.

The Lesson for CX:

  • Cultural sensitivity matters: AI isn’t great at understanding culture. Human oversight is critical, especially when your brand crosses international or cultural boundaries.
  • Diversity in AI development: Diverse teams can help prevent tone-deaf outputs from AI. It’s not just about the tech—it’s about the people programming it.
  • Hot take: AI might be smart, but it’s not woke—brands need to watch what their AI technology is saying before it says something dumb.

5. DPD’s AI Chatbot: The Profanity-Laden Meltdown

What Happened?

In January 2024, DPD, one of the UK’s biggest delivery companies, had an AI meltdown of epic proportions. Their chatbot, designed to handle customer service enquiries, suddenly went rogue.

It started swearing at users, calling itself "useless" and even taking digs at DPD itself. Imagine trying to get help tracking your package, only to have the chatbot tell you it’s "f*cking useless." Hilarious perhaps, but not exactly 5-star customer service.

This fail made the rounds on social media, with customers gleefully sharing screenshots of the bot’s profanity-laden responses. The problem? DPD had failed to properly test and monitor their chatbot, and it went haywire when things went off-script. What should have been a helpful customer experience turned into another PR nightmare.

The Lesson for CX:

  • Test, then test some more: You can’t just set AI loose and hope for the best. Regular testing and maintenance are key to avoiding embarrassing slip-ups.
  • Automation without oversight is dangerous: AI can’t be left to run wild. It needs constant monitoring to make sure it’s functioning as intended.
  • Hot take: AI chatbots are great—until they start cussing out your customers. Always have a Plan B when things go south (one possible Plan B is sketched below).
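In code terms, a "Plan B" might look like this hypothetical sketch: every model reply passes through a guardrail before it reaches the customer, and anything that trips it is swapped for a safe response and escalated to a human. The names and checks are illustrative assumptions, not DPD's actual stack.

    import re

    PROFANITY = re.compile(r"\bf\W?ck\w*\b", re.IGNORECASE)

    def guardrail(reply: str) -> bool:
        """Return True only if the reply is safe to send to a customer."""
        if PROFANITY.search(reply):
            return False
        if "useless" in reply.lower():  # the bot bashing its own brand
            return False
        return True

    def escalate_to_human_agent(reply: str) -> None:
        """Stub: a real system would open a ticket for a human agent."""
        print(f"Escalated for human review: {reply!r}")

    def respond(model_reply: str) -> str:
        if guardrail(model_reply):
            return model_reply
        escalate_to_human_agent(model_reply)  # Plan B kicks in
        return "Sorry, I can't help with that here. A member of our team will be in touch."

    print(respond("Your parcel is out for delivery."))
    print(respond("I'm a f*cking useless chatbot."))

The design point is that the fallback path exists before launch: the bot never gets to improvise when the guardrail trips, and a human always picks up the thread.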

6. Google Photos: When AI Fails Spectacularly at Context

What Happened?

In 2015, Google Photos found itself in the middle of a scandal when its image recognition AI labelled photos of Black people as “gorillas.” Yes, gorillas.

While Google quickly apologised and promised to fix the error, the incident highlighted just how spectacularly AI can fail when it lacks proper context. The AI had been trained on datasets that clearly weren’t diverse enough, leading to one of the most offensive AI failures in recent memory.

This wasn’t just a technical mistake—it was a complete failure in understanding the real-world implications of AI decisions. In customer experience, mistakes like these can permanently damage trust. Google had to work overtime to rebuild its reputation, and the event sparked debates about how much power AI should have in interpreting images, words, or interactions that require cultural sensitivity.

The Lesson for CX:

  • Context matters: AI can be great at analysing data, but without the right context, AI can make some shockingly bad decisions. Make sure your AI technology understands the diverse world it operates in.
  • Audit your AI regularly: Don't assume your AI technology will always get things right. Ongoing audits and updates are necessary to prevent these kinds of errors from creeping into customer interactions. (One such audit is sketched after this list.)
  • Hot take: AI isn't just a brain—it's a set of learned patterns. And if you don't teach it well, it'll make some pretty stupid (and brand-damaging!) mistakes.
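As a purely hypothetical sketch of such an audit: run the classifier over a curated, diverse evaluation set on a schedule, and fail loudly whenever a label from a sensitive list lands on a photo of a person. The classify function below is a dummy placeholder, not Google's API.

    # Hypothetical scheduled audit for an image classifier.
    SENSITIVE_LABELS = {"gorilla", "ape", "primate"}  # never acceptable for people

    def classify(image_path: str) -> set:
        """Dummy placeholder for the real model; returns predicted labels."""
        return {"person", "outdoors"}

    def audit(eval_set):
        """Return images where a sensitive label was applied to a person."""
        failures = []
        for path in eval_set:
            labels = classify(path)
            if "person" in labels and labels & SENSITIVE_LABELS:
                failures.append(path)
        return failures

    failures = audit(["diverse_eval/img_001.jpg", "diverse_eval/img_002.jpg"])
    assert not failures, f"Sensitive-label failures: {failures}"

Google's own stopgap was reportedly to block the offending labels outright; the deeper fix is an evaluation set as diverse as the people the system will actually see.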

Conclusion: Avoiding AI Disasters in Customer Experience

These 6 case studies are a critical reminder that AI, while powerful, is not completely dependable. Whether it’s a rogue chatbot swearing at customers or a recruitment tool that’s inadvertently sexist, the consequences of AI failures can range from mildly embarrassing to outright catastrophic.

The common threads in these disasters are clear: lack of oversight, inadequate testing, and cultural insensitivity.

Brands that rely on AI for customer interactions need to remember that technology can’t fully replace human intuition, empathy, or judgement—at least not yet. And while AI can speed up processes and reduce costs, it needs to be used thoughtfully and monitored closely to avoid turning your best intentions into your worst PR nightmare.

At EM Code, we’ve seen the full spectrum of AI’s impact on customer experience. Our years of expertise in the digital landscape have taught us one thing: there’s no substitute for combining cutting-edge tech with human insight.

We specialise in helping brands implement AI solutions that don’t just work—they work well. With careful planning, rigorous testing, and ongoing monitoring, AI can enhance customer experiences without ever becoming disastrous.

Get in contact today to discuss your project.

About EM Code

Code is a customer experience, digital innovation and AI agency.

We're a strategic digital partner that delivers breakthrough growth across the customer experience (CX).

We achieve this through our industry-renowned services in digital transformation, web development, brand strategy, conversion rate optimisation (CRO) and UX (user experience).

Our human-centric approach underpins every aspect of our work.

A collective of experts in multiple disciplines, we collaborate to distil the complex needs of organisations and end users to engineer solutions that make an impact.

From fast-scaling start-ups to global brands, we can help you transform your organisation.

Code is a part of EssenceMediacom North.

About EssenceMediacom North

EssenceMediacom North helps brands to breakthrough in the new communications economy.

Disrupting models of media, EssenceMediacom North accelerates creative and business transformation for its client roster, including Hillarys, Absolute Collagen, Webuyanycar.com and United Utilities.

The agency delivers breakthrough growth, capabilities, and revenue through the integration of media, creativity, data and technology, combined with its diverse industry-leading expertise.

Equipped with access to the richest data, robust benchmarking and advanced technologies, EssenceMediacom North unlocks new opportunities to deliver truly integrated media solutions for scaling and global brands.

EssenceMediacom North is part of WPP’s media investment group, GroupM.
