@ImjusMK
Grok 4.20 basically grok with Elon opinions
Tweet analysis: Grok 4.20 is praised for a candid stance on America's past. Sentiment is split (45% supportive, 34% critical), fueling controversy online.
Grok 4.20 is BASED. The only AI that doesn’t equivocate when asked if America is on stolen land. The others are weak sauce. https://t.co/KEz4MPy2YB
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Many replies celebrate the model as “based,” direct, and refreshingly unapologetic—users call it the best model yet and praise its willingness to give a straight answer rather than hedging.
A large strand of replies frames ChatGPT, Claude and Gemini as “woke,” evasive, or overly cautious; respondents share screenshots and side‑by‑side tests to claim Grok outperforms them in clarity and conviction.
Numerous comments echo Grok’s framing that the U.S. was “conquered” rather than uniquely “stolen,” and this line of reasoning strongly appeals to patriotic and conservative audiences who see it as vindication.
People report switching to Grok, ask for wider rollouts (regions, APIs, multilingual capabilities), and urge more features like Grokpedia and agent tools—there’s clear excitement about using it as a default assistant.
A significant minority warns that the model’s tone can normalize harmful narratives—some replies call out dangerous rhetoric, colonialist justification, or potential for misinformation and urge caution and nuance.
Several replies connect short, meme‑friendly reframes to low historical literacy and susceptibility to oversimplified narratives, arguing that concise answers can both clarify and mislead depending on context.
The thread is full of memes, jubilant chants (“Grok for Emperor,” “GROK420 IS BASED”), and partisan celebration—many users treat the release as a cultural win as much as a technical one.
Amid the praise and criticism, multiple users ask for systematic comparisons (e.g., on vaccines, Israel) and for transparency about training and safeguards so the model can be trusted without amplifying bias.
Critics counter that the framing is sanitized or whitewashed, arguing the model echoes its creator's worldview rather than neutrally reporting historical facts. Many use terms like "propaganda" and "white supremacist," pointing to selective excerpts and shortened screenshots as evidence.
Others want answers that separate legal, historical, and moral frameworks, with several commenters saying they'd prefer models that present both perspectives and caveats rather than a blunt yes/no.
Some replies voice broader worries about AI as a tool for mass persuasion.
Critics point to concrete historical episodes (the Trail of Tears is frequently cited) and reject evasive reframings that downplay dispossession.
Skeptics call for reproducibility: showing full sessions, timestamps, and prompts, and arguing that trust depends on verifiable testing rather than curated screenshots or one-off demos.
While a minority cheer "based" answers and mock opponents, a larger set of replies responds with scorn, insults, and memes accusing the system and its owner of ethical and historical dishonesty. The tenor swings between humor and sharp denunciation.
Several replies call for measured responses, admission of uncertainty when appropriate, and an architecture that separates factual reporting from opinion or ideological framing.
Users ask how to prevent ownership-driven bias, who audits these models, and whether current safeguards are adequate to stop AI from becoming a tool for centralized narrative control.
Most popular replies, ranked by engagement
Grok 4.20 basically grok with Elon opinions
You mean biased 😭
Grok for Emperor
Sorry, but Claude Opus 4.6 has the best, most factual and balanced response... Grok's response sounds like a US Redditor who wants to pick a fight with me...
That’s my grok 4.20 😂
@elonmusk https://t.co/GhjmUmbEck