Analysis: The OpenAI–DoW AI deployment deal promises safety safeguards, but public sentiment is largely confrontational (58.69%), with limited support (12.68%).
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Replies applaud Sam’s quick deal‑making as a decisive win, with many calling it a strategic “checkmate” and praising the Department of War partnership as smart, timely leadership.
A large contingent ridicules Anthropic’s public stance and PR strategy, arguing their blog‑post approach turned a procurement into a crisis and cost them access to government contracts.
Many note both companies defend the same red lines (no mass surveillance, human accountability for force) yet diverged on contract language and negotiation posture — OpenAI found wording the DoW would accept while Anthropic pushed for broader, enforceable terms.
A vocal group worries the Pentagon’s pressure could erode safeguards over time, warning that making restrictions negotiable risks a slippery slope toward surveillance or lethal autonomy and urging strict, non‑negotiable limits.
Numerous replies ask how promised technical safeguards will be audited inside classified networks, who enforces compliance, and what happens if systems “drift” from agreed behavior.
Many frame the deal as necessary to keep technological advantage over adversaries, arguing partnership with the military is pragmatic and that safety can coexist with defense needs.
The thread mixes gloating, memes, and boastful predictions (including some saying consumers will switch to Claude), while others point out consumer subscriptions don’t replace the scale of government contracts.
Accusations of betrayal and “selling out” — a large portion of replies call Sam Altman and OpenAI traitors, accusing them of abandoning prior principles and “slithering in” to take Anthropic’s place; language is angry, personal, and often vitriolic.
Deep ethical fears about weapons and surveillance — many warn that the deal opens the door to autonomous weapons and mass domestic surveillance, framing the announcement as the moment AI is militarized.
Rapid consumer backlash and migration talk — numerous users promise to cancel subscriptions, boycott OpenAI, and switch to Anthropic/Claude, with several reporting they already have.
Questions about timing, fairness, and possible quid pro quo — replies repeatedly point to the timing of Anthropic’s blacklisting, political donations, and the DoW’s sudden reversal, suggesting something suspicious or inconsistent.
Alarmist imagery and doomsday metaphors — references to “Skynet,” war crimes, and existential risk flood the replies, amplifying fear even when not all arguments are technical.
Conspiracy, insults, and political framing — alongside reasoned criticism there’s a surge of conspiratorial claims, personal attacks, and partisan accusations tying the deal to donors and power networks.
A few approving takes on negotiation skill — a small minority acknowledge the deal as a strategic win or a successful negotiation, but they’re heavily outnumbered and met with skepticism by many.
Most popular replies, ranked by engagement
you are a stain to humanity, the world will noticeably improve the instant you leave it
Maybe learn something from Anthropic?
Sam Altman? The man who sold out humanity?
No reason to doubt you, big dog. Screenshot unrelated.
Good work Sam. But can you share what was different between your agreement and Anthropic’s?
bro really did this