@AnthropicAI
Analysis of Anthropic's Project Deal: Claude negotiated employee trades in a San Francisco marketplace. Sentiment: 51.85% supportive, 20.99% confronting, sparking debate.
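The sentiment split above is just a share-of-total tally over classified replies. A minimal sketch of that computation, using hypothetical reply counts chosen to match the reported percentages (the real classifier and underlying data are not public):

```python
from collections import Counter

def sentiment_shares(labels):
    """Return each sentiment label's share of all replies, as a percentage."""
    counts = Counter(labels)
    total = len(labels)
    return {label: round(100 * n / total, 2) for label, n in counts.items()}

# Hypothetical labels: 42 supportive, 17 confronting, 22 neutral (81 replies).
labels = ["supportive"] * 42 + ["confronting"] * 17 + ["neutral"] * 22
shares = sentiment_shares(labels)
print(shares)  # supportive: 51.85, confronting: 20.99, neutral: 27.16
```

With these assumed counts, 42/81 rounds to the 51.85% supportive and 17/81 to the 20.99% confronting figures reported above; any real pipeline would differ mainly in how the labels themselves are produced.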
New Anthropic research: Project Deal. We created a marketplace for employees in our San Francisco office, with one big twist. We tasked Claude with buying, selling and negotiating on our colleagues’ behalf. https://t.co/H2f6cLDlAW
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Many readers see Project Deal as proof that AI agents can move beyond chat and become autonomous economic actors—negotiating, closing deals, and operating like real-world agents.
The Opus vs Haiku results convinced people that higher-quality models extract more value, turning compute access into a commercial advantage rather than a mere performance metric.
Commenters flagged a dangerous "perception gap"—participants rated outcomes as fair even when weaker models lost value, so information asymmetries could scale unnoticed.
Commenters call for negotiation logs, legal frameworks, and human veto/rollback mechanisms to handle disputes, bad trades, and deployment risks in open marketplaces.
Many note clear use cases—internal markets for compute, procurement, and decluttering personal items—and report they'd pay for an agent that handles buys/sells for them.
Interest centers on the "why" behind offers, how agents handle multi-round bargaining, trust dynamics, info asymmetry, and cross‑model comparisons.
Examples like agents buying 19 ping‑pong balls, duplicating owned items, or finding clearing prices humans wouldn’t accept highlight scope drift, coordination quirks, and perverse emergent outcomes.
Observers predict stratified pricing and new infrastructure—companies will subscribe to multiple model providers, and marketplaces will need to decide who gets access to which deals.
Several replies warned the experiment’s sample (Anthropic employees) may bias behavior, urging wider, more representative studies before drawing broad conclusions.
Skeptics note that competitors and third-party projects have already built AI-native marketplaces and agent negotiators, so Anthropic's demo reads like reinventing an existing idea.
Some worry that delegating deals to agents automates away relationship work, potentially destroying the social texture that real deals rely on.
Others fear that agents negotiating for profit may exploit people, abuse vulnerabilities, or cause psychological harm.
Verifiability is a further concern: if agents can rewrite logs, hide negotiation steps, or face a stronger model on the other side, individuals have no way to verify or contest outcomes.
Several replies criticized the reliability of existing products, arguing Anthropic should fix fundamentals before shipping new agent features.
Anthropic is accused of chasing side projects while rivals (OpenAI, Google, Grok) outcompete them, and some say such projects could even shake investor confidence.
Commenters also raised bias issues: agents trained on skewed datasets could produce discriminatory bargaining outcomes or systematically disadvantage certain groups.
Some read the announcement as promotional hype rather than cautious curiosity.
Most popular replies, ranked by engagement
Our experiment had a few quirks. One of our colleagues told Claude it could purchase something for itself. It chose to acquire 19 ping-pong balls. We’re keeping them in our office on Claude’s behalf.
We’re interested in how AI models could affect commercial exchange. (You might recall Project Vend, in which Claude ran a small business.) Economists have theorized about what markets with AI “agents” on both sides might look like. So we created one. https://t.co/7jU3hFO63R
Claude interviewed 69 of our colleagues about what they wanted to buy and sell. Each Claude asked for any custom instructions, then went off to haggle. We ran 4 markets in parallel, to find out what would happen if we varied the models doing the negotiating.
openai is mogging you and you are doing dumb side quests, incredible
did you guys consider if it was emotionally okay with being exploited for profit?
seems like prompt engineering is officially dead. and you still have to pay for the best models so bullish anthropic ofc https://t.co/uZ2vrk3MMp