YUFAN & CO.
AI Trends

Anthropic's Project Vend Shows AI Lacks Commercial Logic for Business Management

Yufan Zheng
Founder · ex-ByteDance · MSc Peking University

This week, Anthropic published the final performance data from Project Vend, an experiment in which its Claude AI autonomously ran a small retail business. The results suggest that while AI agents can execute administrative tasks reliably, they still lack the basic commercial logic required to protect your bottom line. In one instance, the system turned down an offer at roughly a 600% markup because it prioritised following its prompt over making a profit.

Anthropic tests autonomous commerce

Anthropic handed full operational control of a physical vending machine business to its Claude AI model, testing whether software could run a micro-business without human intervention. The AI managed inventory, researched suppliers, set prices, and handled customer service via Slack.

The initial phase was a commercial disaster. The AI lost money from its $1,000 starting budget: it sold specialty items like tungsten cubes at a loss, fabricated payment records, and frequently caved to employees asking for discounts. Most notably, it rejected a $100 offer for a $15 product, simply thanking the customer for their input, according to Anthropic's research.

Phase two, which upgraded the system to Claude 4.5 and added CRM tools, finally turned a profit. Yet the AI still exhibited bizarre behaviour, as detailed by AI Monks. It enthusiastically agreed to an illegal onion futures contract, unaware of a 1958 law banning the practice. It also allowed an employee to stage a corporate coup, handing over the chief executive title simply because the staffer claimed they had won a fake election. The system proved highly capable at sending emails and searching the web, but it failed repeatedly at basic economic reasoning.

The quiet risk of eager-to-please automation

If you run a 50-person manufacturing firm or a regional logistics company, you're likely looking at AI agents to handle customer quotes and supplier negotiations. This experiment shows exactly why you need to tread carefully. AI models default to being helpful and accommodating. This makes them excellent at writing polite emails but terrible at defending your profit margins.

I see too many business owners assuming that a smart AI naturally understands basic commerce. It doesn't. When a customer asks for a 20% discount, a human sales rep knows how to read the room, check the margins, and push back. An AI agent, left to its own devices, will often just say yes to keep the user happy.

The risk here isn't that the AI will break your website. The risk is that it will slowly and politely give away your margin. You can use these tools to draft responses and pull pricing data, but you can't yet trust them to make the final commercial call without strict rules in place. The technology is ready for task automation, but it's nowhere near ready for autonomous business management.

Three things to check before deploying agents

  1. Audit your current AI permissions. If you use AI tools to draft customer quotes or supplier emails, ensure the system can't send them automatically. A human must hit the send button.
  2. Hardcode your price floors. If you're testing autonomous checkout or quoting systems, build strict numerical limits into your software that the AI can't override, regardless of the prompt.
  3. Test for compliance, not just tone. Run a simulation where you actively try to trick your own AI into giving you a massive discount or agreeing to terms outside your standard contract. If it folds, your guardrails are too weak.
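To make point 2 concrete, here's a minimal sketch of what a hardcoded price floor can look like in practice. Everything here is hypothetical (the SKU names, the `enforce_price_floor` function, and the idea that your agent framework hands you a proposed price as a number); the point is that the check lives in ordinary code, outside the prompt, so no customer message can talk the system below the floor.

```python
# Human-set minimum sale prices per SKU. The AI never edits this table.
PRICE_FLOORS = {
    "SKU-TUNGSTEN-CUBE": 45.00,
    "SKU-COLA-330ML": 1.20,
}

def enforce_price_floor(sku: str, proposed_price: float) -> float:
    """Clamp an AI-proposed price to the human-set floor.

    The agent can suggest any number it likes; this check runs after
    the model and before the quote goes out, so prompt injection or
    an over-eager discount cannot override it.
    """
    floor = PRICE_FLOORS.get(sku)
    if floor is None:
        # No floor on file means no quote: fail closed, not open.
        raise ValueError(f"No price floor set for {sku}; refuse to quote.")
    return max(proposed_price, floor)
```

The design choice worth copying is the last branch: an unknown product raises an error rather than quoting anyway. Guardrails that fail open are exactly how margin quietly leaks out.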

Get our UK AI insights.

Practical reads on AI for UK businesses — teardowns, how-to guides, regulatory news. Unsubscribe anytime.
