@alex_prompter
paper: https://t.co/KcorNK43Vu
A NeurIPS study finds that 70+ LLMs converge on near-identical creative answers, an 'Artificial Hivemind'; roughly 53% of commenters support the findings, and the paper urges pluralistic alignment.
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Many models produce the same safe, consensus‑approved outputs.
Alignment pipelines optimize for the annotator average, which systematically penalizes odd or idiosyncratic answers.
This isn’t just bland phrasing but correlated failure: the same blind spots and omissions show up across models, so relying on multiple AIs does not yield independent perspectives.
People prefer many valid answers, yet current reward models compress that plurality into a single “safe” band, creating a measurable diversity deficit.
Several technical voices stress that the collapse is structural — baked into weights and training objectives — so tricks like raising sampling temperature or ensembling different models won’t reliably restore genuine variety (mode collapse).
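The temperature point can be seen in a toy softmax: raising temperature flattens the distribution over the options the model already assigns mass to, but a mode the model has effectively zeroed out stays negligible at any usable temperature. A minimal sketch (the logits and names below are illustrative, not from the paper):

```python
import math

def sample_probs(logits, temperature=1.0):
    """Softmax over logits scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# A collapsed model: almost all mass on one "safe" answer,
# and one answer effectively suppressed (logit -30).
logits = [10.0, 0.0, 0.0, -30.0]

for t in (0.7, 1.0, 2.0, 5.0):
    probs = sample_probs(logits, t)
    # Even at high temperature, the suppressed mode's probability
    # remains negligible while coherence of the rest degrades.
    print(t, [round(p, 4) for p in probs])
```

This is the structural version of the commenters’ argument: temperature redistributes probability among modes that survived training; it cannot resurrect modes the objective drove to near-zero.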
Commenters recommend heavy constraint framing, persona/role shifts, reference‑style prompts, iterative “yes, and” chaining, and custom style guides as effective prompt‑engineering levers to force models off their default rails.
The proposed remedy is to move from single‑point alignment to pluralistic alignment: training objectives that reward coverage of valid response distributions instead of one homogenized target.
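The contrast between a single homogenized target and rewarding coverage can be sketched as a toy reward function. This is a simplification under the assumption of a discrete set of valid answers, not the paper's actual objective; all names are illustrative:

```python
def single_target_reward(responses, target):
    """Conventional setup: each response is scored against one
    homogenized target, so diverse-but-valid answers get zero."""
    return [1.0 if r == target else 0.0 for r in responses]

def coverage_reward(responses, valid_set):
    """Pluralistic setup (toy): a batch of responses is rewarded
    for covering distinct valid answers, not for matching one."""
    covered = {r for r in responses if r in valid_set}
    return len(covered) / len(valid_set)

# Three responses, two distinct valid answers covered out of three.
batch = ["metaphor A", "metaphor A", "metaphor B"]
valid = {"metaphor A", "metaphor B", "metaphor C"}
print(single_target_reward(batch, "metaphor A"))  # only exact matches score
print(coverage_reward(batch, valid))              # repeats earn nothing extra
```

Under the coverage objective, a batch that repeats the same safe answer scores no better than a single sample of it, which is the incentive shift the commenters are asking for.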
Some see potential for large‑scale thought homogenization or propaganda, while others frame the problem as a solvable engineering incentive mismatch that preserves human creativity.
AI remains useful for passable, boilerplate, and efficiency tasks — but it shouldn’t be trusted as a source of original insight without human curation.
Community suggestions for next steps include control experiments (70 humans vs 70 models), multilingual probing, preventing model‑incest (AI‑trained‑on‑AI), and building personalization “injectors” that make the model’s invariant be the user’s context rather than the annotator average.
Commenters argue that leaving out a notable model undermines the paper’s credibility and suggests possible bias.
Several replies note that similar outputs are not surprising given shared training regimes — “same data distribution + same objective function” — and frame the result as a predictable consequence of how models are trained, not evidence of a conspiracy.
A vocal group rejects the idea that AI will kill creativity, insisting humans should keep doing creative work while using AI for coding or busywork; others say better prompting or RAG can preserve originality.
Recurring accusations claim the article and tweet were generated by AI, with multiple commenters repeating “AI wrote this” as a way to dismiss the piece and its conclusions.
The study is called “slop,” “nonsense,” or worse, and the authors face insults and sarcasm rather than measured critique.
More technical critiques argue that the study measures training-data convergence rather than creativity, and question benchmark choices and sample selection as explanations for the findings.
Some replies inject political framing (e.g., models being “woke” or owned by certain groups), tying model behavior to broader cultural battles.
A few commenters defend AI’s utility, arguing for individualized AIs or advocating practical workflows (automations, RAG) that sidestep sensational claims and emphasize tool-like value.
Most popular replies, ranked by engagement
paper: https://t.co/KcorNK43Vu
they built a dataset called INFINITY-CHAT. 26,000 real-world open-ended queries mined from actual chatbot conversations. not synthetic benchmarks. real questions people ask AI every day. creative writing, brainstorming, hypothetical scenarios, opinion questions, skill
🧠 https://t.co/4mJ78dKdtI
Your premium AI bundle to 10x your business → Prompts for marketing & business → Unlimited custom prompts → n8n automations → Weekly updates Start your free trial👇 https://t.co/ZKcpVsaTqJ
>Probabilistic machines gives the most probabilistic answers >Everybody: >AI researcher: NOOO LOOK AT THE LANGUAGE MODELLINOS THAT'S INCREDIBLE UUUH QUICK CITE MY PAPER
Not true no matter how long your thread is.