Reactions to Sam Altman's remark on AI knowing users' personal lives: 26.9% supportive, 48.3% confronting. Discussion centers on privacy, autonomy, and trust.
Sam Altman: "We are no longer that far away from an [AI] model that ... knows ... about your life ... knows about what you're doing ... [and] what you care about" https://t.co/QPhLnPZor9
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Many replies insist personalization and memory-enabled AI aren’t futuristic — Google, social platforms, Gemini and current “memory” features already assemble deep profiles and behave like early personalized assistants.
A large thread warns this level of personalization equates to mass surveillance — “privacy left the room,” data being hoarded, and one company becoming your memory and gatekeeper.
Multiple voices say the capability exists but the real challenge is robust consent, granular revocation, transparency, and provable “forgetting” or auditability.
Some replies celebrate the value — an AI that knows your routines could be an immensely powerful personal assistant and product-market fit.
Commenters expect enterprise deals, intent-data monetization, procurement rules, and traders exploiting behavioral signals — meaning business models, not just tech, will drive adoption.
Several replies call for governance — oversight, rules, and accountability for teams building capabilities they admit they don’t fully understand.
A recurring view is that data access is solved; the hard step is letting AI act on that data autonomously and safely, including how guardrails behave or are bypassed.
People fear modeling leads to prediction and then to shaping decisions — from nudges to subtle market influence — raising ethical and power-concentration problems.
Replies range from "exactly what I wanted" and "progress" to "terrified" and "assustador" (Portuguese for "terrifying"), reflecting a sharp divide between enthusiasm for convenience and deep anxiety about control.
Replies warn an AI that "knows your life in real time" is invasive — "personalization = surveillance with a smile" and handing companies that power is unacceptable.
Sam Altman, Zuckerberg and others are personally attacked as grifters, sociopaths or liars; corporate motives and PR are dismissed as self-serving and dangerous.
Some voices want AI focused on science, curing disease, solving climate change and understanding the universe — not on monitoring people or generating fake art.
Many demand strict oversight, cancellation of projects, or even pulling the plug on specific companies rather than letting unchecked platforms grow.
A subset champions going off-grid, ditching platforms, using privacy hygiene (multiple emails, no real name) or abandoning AI tools entirely.
Several replies argue this is incremental — "Google/Chrome already harvests data" and current AI is mostly data-crunching hype, not mystical omniscience.
Users insist friends understand them better than algorithms and question why anyone would replace real relationships with predictive software.
Concern that these tools will be sold to militaries, governments and police to target civilians and amplify coercive power.
A vocal minority resorts to apocalyptic language and personal accusations (from "anti-Christ" to criminal claims), framing leaders and projects as existential or moral evils.
Many replies use sarcasm, jokes and ridicule — from "used car salesman" jabs to mocking body language — to undermine credibility and push back culturally.
Most popular replies, ranked by engagement
This is exactly what we do not want. Time for Scam Altman to go.
Altman’s lawyer in the court
These people are nuts and we need some oversight of them.
Anybody else remember consenting or voting for this?
Remember when Mark Zuckerberg called users "dumb fucks" for sharing their personal data on FB? This is that.
This shit is getting to a point of conspiracy theories coming to life…👍