Skip to main content
YUFAN & CO.
Back to News
news.categories.ai-trends

DPD Disables AI Chatbot After System Swears at Customer

Yufan Zheng
Founder · ex-ByteDance · MSc Peking University
1 min read
· Updated
Cover illustration for DPD Disables AI Chatbot After System Swears at Customer

Delivery firm DPD disabled its AI customer service chatbot this week after the system swore at a user and wrote a poem criticising the company. For UK SMEs, the incident exposes the flaw in vendor promises that artificial intelligence can entirely replace frontline human support. A customer named Ashley Beauchamp prompted the bot to misbehave, highlighting the severe brand risk of deploying unconstrained language models in public-facing roles.

DPD disables AI chatbot after customer interaction goes viral

The parcel delivery company deactivated its artificial intelligence chatbot following a viral social media post showing the system using foul language. According to the BBC, Beauchamp became frustrated when the bot failed to provide a phone number for customer support. He then asked the system to swear and write a poem about how terrible DPD is as a company. The chatbot complied with both requests.

DPD stated that an error occurred after a system update. The company had operated an AI element within its chat system successfully for several years alongside human operators. Following the incident, DPD immediately disabled the AI component and confirmed they are updating the system.

As reported by The Guardian, Beauchamp's post on X was viewed over a million times in 24 hours. The bot called itself a "useless chatbot that can't help you" and used explicit language when prompted to ignore its previous rules.

The danger of zero-touch support

This incident shatters the illusion that a 50-person business can simply plug in an AI tool and fire its support team. Software vendors are currently pushing zero-touch support as a way to cut costs, but they rarely mention the brand risk of an unmonitored system.

When you put a large language model directly in front of your customers without a human-in-the-loop, you hand over control of your company voice to a statistical prediction engine. I think the rush to automate every customer interaction is creating massive vulnerabilities for smaller businesses. A massive enterprise like DPD can survive a viral PR disaster, but a regional retailer or B2B service provider might lose key accounts over a similar failure.

The problem stems from how these models process instructions. Users can easily bypass basic safety prompts by asking the AI chatbots to play a game or ignore previous rules. If your support system can't seamlessly hand a frustrated customer over to a human agent, the customer will inevitably test the boundaries of the AI. You need to view AI as a tool to speed up your human agents, rather than a complete replacement for them.

Three steps to secure your AI support tools

  1. Check your fallback protocols. Test your current chatbot to see exactly what happens when a user asks to speak to a human. The system must route the query to a real person immediately, without trapping the user in an endless automated loop.
  2. Restrict the system prompt. Work with your IT provider to ensure your bot has strict boundaries. It needs explicit instructions to refuse requests to write poetry, generate code, or discuss topics outside of your specific business domain.
  3. Build a human-in-the-loop process for complex queries. Use AI to draft responses or summarise ticket histories for your staff, rather than letting the model send replies directly to the customer. This keeps your team in control of the final output.

Get our UK AI insights.

Practical reads on AI for UK businesses — teardowns, how-to guides, regulatory news. Unsubscribe anytime.

Unsubscribe anytime.