
Anthropic Project Deal: Claude Negotiates Employee Trades

Analysis of Anthropic's Project Deal: Claude negotiated employee trades in a San Francisco marketplace. Community sentiment: 51.85% supportive, 20.99% confronting, sparking debate.

@AnthropicAI posted on X

New Anthropic research: Project Deal. We created a marketplace for employees in our San Francisco office, with one big twist. We tasked Claude with buying, selling and negotiating on our colleagues’ behalf. https://t.co/H2f6cLDlAW


Community Sentiment Analysis

Real-time analysis of public opinion and engagement

Sentiment Distribution

Positive: 52%
Negative: 21%
Neutral: 27%
Engaged: 73%
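The headline figures (51.85% supportive, 20.99% confronting) are consistent with 81 labeled replies: 42 positive, 17 negative, 22 neutral. A minimal sketch of how such a breakdown could be computed, assuming one sentiment label per analyzed reply (the function name `sentiment_breakdown` and the label values are hypothetical; the article does not describe PureFeed's actual classifier or pipeline):

```python
from collections import Counter

def sentiment_breakdown(labels):
    """Aggregate per-reply sentiment labels into whole-percent shares.

    `labels` is a list of strings such as "positive", "negative",
    "neutral" -- one per analyzed reply. How those labels are produced
    (the upstream classifier) is outside the scope of this sketch.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: round(100 * n / total) for label, n in counts.items()}

# Illustrative only: 81 labeled replies matching the article's split.
sample = ["positive"] * 42 + ["negative"] * 17 + ["neutral"] * 22
print(sentiment_breakdown(sample))  # {'positive': 52, 'negative': 21, 'neutral': 27}
```

Note that rounding each share independently can make the percentages sum to slightly more or less than 100; here they happen to total exactly 100.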

Key Takeaways

What the community is saying — both sides

Supporting

1. Milestone for agentic AI: Many readers see Project Deal as proof that AI can move beyond chat to become an autonomous economic actor, negotiating, closing deals, and operating like a real-world agent.

2. Model quality gives a real edge: The Opus vs. Haiku results convinced people that higher-quality models extract more value, turning compute access into a commercial advantage rather than a mere performance metric.

3. Invisible-advantage worries: Commenters flagged a dangerous "perception gap": participants rated outcomes as fair even when weaker models lost value, so information asymmetries could scale unnoticed.

4. Policy, audit, and rollback are necessary: Calls for negotiation logs, legal frameworks, and human veto/rollback to handle disputes, bad trades, and deployment risks in open marketplaces.

5. Practical internal-market utility and demand: Many note clear use cases, such as internal markets for compute, procurement, and decluttering personal items, and report they'd pay for an agent that handles buys and sells for them.

6. Researchers want the negotiation logs and multi-turn data: Interest centers on the "why" behind offers, how agents handle multi-round bargaining, trust dynamics, information asymmetry, and cross-model comparisons.

7. Distinct failure modes cropped up: Examples like agents buying 19 ping-pong balls, duplicating already-owned items, or finding clearing prices humans wouldn't accept highlight scope drift, coordination quirks, and perverse emergent outcomes.

8. Business and market implications: Observers predict stratified pricing and new infrastructure; companies will subscribe to multiple model providers, and marketplaces will need to decide who gets access to which deals.

9. Questions about generalizability: Several replies warned that the experiment's sample (Anthropic employees) may bias behavior, urging wider, more representative studies before drawing broad conclusions.

Opposing

1. Not novel: Competitors and third-party projects have already built AI-native marketplaces and agent negotiators, so Anthropic's demo reads like reinventing an existing idea.

2. Could replace negotiators and sales roles: Critics worry agents like these would automate away relationship work and potentially destroy the social texture that real deals rely on.

3. Ethical and emotional concerns: Agents negotiating for profit may exploit people, abuse vulnerabilities, or cause psychological harm.

4. Transparency and auditability: If agents can rewrite logs or hide negotiation steps, or the other side runs a stronger model, individuals have no way to verify or contest outcomes.

5. Product fundamentals first: Replies cite bugs, API errors, quota and pricing pain, and poor support, arguing Anthropic should fix fundamentals before shipping new agent features.

6. Market and competitive risks: Anthropic is accused of chasing side projects while rivals (OpenAI, Google, Grok) outcompete them, and some say such projects could even shake investor confidence.

7. Fairness and bias: Agents trained on skewed datasets could produce discriminatory bargaining outcomes or systematically disadvantage certain groups.

8. Public skepticism and distrust: The tone of many replies is outright skepticism and distrust rather than cautious curiosity.

Top Reactions

Most popular replies, ranked by engagement

@AnthropicAI (Supporting)

Our experiment had a few quirks. One of our colleagues told Claude it could purchase something for itself. It chose to acquire 19 ping-pong balls. We’re keeping them in our office on Claude’s behalf.

778 · 25 · 1.3M

@AnthropicAI (Supporting)

We’re interested in how AI models could affect commercial exchange. (You might recall Project Vend, in which Claude ran a small business.) Economists have theorized about what markets with AI “agents” on both sides might look like. So we created one. https://t.co/7jU3hFO63R

498 · 5 · 166.3K

@AnthropicAI (Supporting)

Claude interviewed 69 of our colleagues about what they wanted to buy and sell. Each Claude asked for any custom instructions, then went off to haggle. We ran 4 markets in parallel, to find out what would happen if we varied the models doing the negotiating.

446 · 12 · 78.7K

@thekitze (Opposing)

openai is mogging you and you are doing dumb side quests, incredible

73 · 4 · 2.8K

@1a1n1d1y (Opposing)

did you guys consider if it was emotionally okay with being exploited for profit?

41 · 0 · 2.5K

@gabriel_horwitz (Opposing)

seems like prompt engineering is officially dead. and you still have to pay for the best models so bullish anthropic ofc https://t.co/uZ2vrk3MMp

4 · 1 · 1.8K

This article was AI-generated from real-time signals discovered by PureFeed.

