@ThePrimeagen
This is also why people have bad opinions: LLMs just reinforce terrible ideas with lengthy, PhD-sounding arguments that are about as useful as a child with a recorder
Analysis: 84.5% of replies are supportive vs. 3.06% confronting. LLMs can craft persuasive arguments in either direction, which makes them useful for stress-testing and forming opinions, but guard against sycophancy when you are looking for balance.
- Drafted a blog post
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea: let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol

The LLMs may offer an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions, just make sure to ask in different directions and be careful with the sycophancy.
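The workflow described in the post is easy to reproduce. Below is a minimal sketch of the argue-both-directions loop, assuming only a pluggable `complete(prompt) -> str` wrapper around whatever chat model you use; the function name, the prompts, and the final evidence-vs-rhetoric pass are illustrative assumptions, not anything specified in the post.

```python
# Sketch of the "argue both directions" workflow from the post.
# `complete` is a stand-in for your LLM client; prompts are illustrative.
from typing import Callable

def steelman_both_ways(thesis: str, complete: Callable[[str], str]) -> dict[str, str]:
    """Ask the model to argue for, then against, the same thesis."""
    pro = complete(
        f"Argue as persuasively as you can FOR this thesis:\n{thesis}\n"
        "Do not hedge; make the strongest honest case."
    )
    con = complete(
        f"Argue as persuasively as you can AGAINST this thesis:\n{thesis}\n"
        "Attack its weakest assumptions and surface edge cases."
    )
    # Final pass: force a comparison of evidence rather than rhetoric,
    # which is one cheap guard against sycophancy.
    verdict = complete(
        "Here are two opposing cases.\n\nFOR:\n" + pro + "\n\nAGAINST:\n" + con +
        "\n\nList which concrete claims in each are verifiable, and which are rhetoric."
    )
    return {"pro": pro, "con": con, "verdict": verdict}
```

One practical detail worth keeping: run the "against" pass in a fresh context rather than in the same conversation, since models tend to defend text they just produced.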
Real-time analysis of public opinion and engagement
What the community is saying — both sides
- Supporters say the highest-value use is stress-testing ideas: have the model demolish your draft, surface edge cases, and force you to patch weak spots before shipping.
- Skeptics counter that models optimize for persuasion, not truth: they assemble the most rhetorically convincing case from training patterns, so a fluent argument ≠ an accurate one.
- The burden of judgment stays with the reader: you must judge which case has better evidence or keep your priors explicit.
- Some warn that widespread reliance on a single model or pipeline creates centrally steered consensus that feels like independent thought; that's an architecture problem, not just an interface one.
- For multi-agent setups, the advice is to isolate roles so no agent can simply echo what it helped build.
- Several find the both-ways fluency useful for better thinking but disorienting for anyone who mistook polished prose for understanding.
- Proposed mitigations: embed falsification heuristics, reward early pushback, add verification layers (code/run checks, citations), and tune prompts (ban overly persuasive tokens) so models favor accuracy over smoothness (see the sketch after this list).
- Others argue the verdict matters less than the process; the exercise of iterated argument/counterargument is itself the epistemic gain.
- The recurring advice: use critical thinking and don't outsource judgment to rhetorical skill.
- Being convinced of the opposite signals weak standards of evidence, not omniscience on the model's part.
- A few reject the framing itself; true comprehension, they say, collapses artificial "sides."
- Others counter that rhetorical power matters even if it isn't proof.
- Some want models that take a stance, preferring conviction over neutral simulacra.
- Enthusiasts report the workflow works for them, and they haven't yet seen major downside risks.
- Critics worry readers will treat fluent arguments as superior guides to truth.
- A common refrain: LLMs recombine positions from their training data rather than inventing independent truths.
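The verification-layer suggestion is the most concrete of the mitigations, and it is easy to prototype. A minimal sketch, again assuming a generic `complete(prompt) -> str` LLM call; the claim-extraction prompt and the JSON contract are illustrative assumptions, not a scheme anyone in the thread specified.

```python
# Toy verification layer in the spirit of the mitigations above: separate
# claim extraction from claim checking so fluency can't carry the argument.
# `complete` is a stand-in LLM call; prompts and JSON contract are illustrative.
import json
from typing import Callable

def verify_argument(argument: str, complete: Callable[[str], str]) -> list[dict]:
    claims_json = complete(
        "Extract every checkable factual claim from the text below as a JSON "
        "list of strings. Text:\n" + argument
    )
    # A robust version would validate and retry on malformed model output.
    claims = json.loads(claims_json)
    report = []
    for claim in claims:
        # Second, independent pass per claim: the checker never sees the
        # surrounding rhetoric, only the bare claim.
        judgement = complete(
            "Is the following claim verifiable, and what evidence would "
            "confirm or falsify it? Answer in one sentence.\nClaim: " + claim
        )
        report.append({"claim": claim, "check": judgement})
    return report
```

The design point is separation: the per-claim checker never sees the persuasive frame, so a rhetorically strong passage cannot carry a weak claim through the check.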
Most popular replies, ranked by engagement
This is basically Socratic method on steroids. The goal was never to "be right" — it's to stress-test your thinking until only the solid parts remain. Most people use AI as a yes-machine. The smart ones use it as a sparring partner.
working on a method called autoreason that is effectively autoresearch extended to subjective domains. autoresearch works because val_bpb gives you an objective fitness function. autoreason constructs a subjective one through independent blind evaluation, the same way science u
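The reply above is truncated, but the idea it names, a subjective fitness function built from independent blind evaluation, can be sketched. Everything below is a hypothetical illustration of that general scheme, not the author's autoreason implementation: `judge_blind`, the duel structure, and the win-rate aggregation are all assumptions.

```python
# Hypothetical sketch of a subjective fitness function from independent,
# blind pairwise judgments. NOT the author's autoreason implementation.
import itertools
import random
from typing import Callable

def subjective_fitness(
    candidates: list[str],
    judge_blind: Callable[[str, str], int],  # returns 0 or 1: position of winner
    rounds: int = 3,
) -> list[float]:
    """Win rate per candidate over randomized, anonymized pairwise duels."""
    wins = [0] * len(candidates)
    games = [0] * len(candidates)
    for _ in range(rounds):
        for i, j in itertools.combinations(range(len(candidates)), 2):
            # Shuffle presentation order so the judge cannot key on position,
            # and pass only the texts, never their provenance (the blinding).
            pair = [(i, candidates[i]), (j, candidates[j])]
            random.shuffle(pair)
            winner = pair[judge_blind(pair[0][1], pair[1][1])][0]
            wins[winner] += 1
            games[i] += 1
            games[j] += 1
    return [w / g if g else 0.0 for w, g in zip(wins, games)]
```

A raw win rate is the crudest aggregator; a Bradley-Terry or Elo fit over the same blind duels yields a scalar score playing the role that val_bpb plays in the objective setting.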
The post mistakes persuasiveness for truth: if an LLM can argue both sides well, that shows rhetorical skill, not reliable judgment. “It convinced me of the opposite” says more about weak standards of evidence than about what’s actually true.
If an AI can flip your worldview in 5 minutes, your "meticulous" 4-hour argument was probably weak from the start.
Seems dangerous. There's some strong metaphysical assumption here, about one direction of an argument "actually" being true. We can probably reasonably talk away every good argument. People talking others into their view is probably a condition for action to be taken at all.