Replies run 49.64% supportive vs 29.50% confrontational to a tweet warning that AI deepfakes can destabilize societies and calling for watermarking, clear source standards, and child safety.
Real-time analysis of public opinion and engagement
What the community is saying — both sides
There is broad public backing for mandatory watermarking and clear provenance rules to label AI-generated photos and videos so viewers can tell what was created by machines.
Many replies stress the need for rapid enforcement — new laws and three-hour takedown windows are called urgent to stop harm before deepfakes spread.
Users want stronger platform safeguards, age-appropriate controls, and proactive detection.
People warn that watermarking alone can be removed or bypassed and demand layered defenses such as robust detection, cryptographic signatures, and verified source attestations.
Commenters frame deepfakes as a threat to trust, democracy, and personal safety — a social problem that needs standards, accountability, and international cooperation, not just tech tweaks.
Suggested technical measures include C2PA-style provenance, CA-like identity attestation, blockchain or zk-proof traces, and mandatory metadata so apps can verify authenticity.
Alongside rules, many call for platform responsibility and public education — technical standards must be baked into infrastructure while digital literacy helps people resist manipulation.
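The provenance ideas above share one core move: cryptographically binding a source claim to the media bytes so any later edit breaks verification. Below is a minimal illustrative sketch of that idea, not the actual C2PA specification; the HMAC key, claim fields, and function names are all hypothetical stand-ins (real systems use public-key signatures and standardized manifests).

```python
# Hypothetical sketch: bind a source claim to media bytes with a keyed digest.
# NOT the C2PA spec; real provenance uses public-key signatures and manifests.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in for a real private signing key

def attach_provenance(media: bytes, source: str) -> dict:
    """Hash the media, record the source claim, and sign the claim."""
    claim = {"source": source, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_provenance(media: bytes, claim: dict) -> bool:
    """Recompute the digest and check the signature; any edit to the bytes fails."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    if hashlib.sha256(media).hexdigest() != body.get("sha256"):
        return False  # media was altered after the claim was attached
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

photo = b"\x89PNG...raw image bytes..."
claim = attach_provenance(photo, "camera:device-123")
assert verify_provenance(photo, claim)             # untouched media verifies
assert not verify_provenance(photo + b"x", claim)  # tampered media fails
```

This is also why skeptics in the thread favor signatures over watermarks: a watermark lives inside the pixels and can be stripped, while a detached signed claim simply stops verifying once the pixels change.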
People say watermarks are trivial to remove and point to open-source models that already strip or evade them, calling the idea technically ineffective and easy to bypass.
A broad strand frames the plan as a pretext for censorship and control, arguing "for the children" rhetoric will be used to justify monitoring, centralization, and political silencing.
Numerous replies accuse the speaker of hypocrisy, contrasting the speech with alleged domestic practices (surveillance, corruption, misleading PR) and insisting political motives — not child safety — drive the proposal.
Critics cite real-world child safety failures — expired school food, child poverty, unresolved sexual crimes — and urge tackling these before policing digital content.
Enforcing global watermark or clear-source standards is portrayed as unrealistic, and many recommend adapting through verification skills rather than top-down tech mandates.
Tone is often sarcastic and mocking — jokes about deepfakes, AI‑generated speeches, and calls to “just put the Takis in the bag” punctuate criticism and undercut the speech’s seriousness.
A smaller group defends AI's benefits or free-market responses, arguing innovation and education (teaching critical thinking) will reduce harms without heavy-handed regulation.
Users warn that source-labelling and compulsory controls could entrench gatekeepers and harm dissent, calling instead for transparency, accountability, and public literacy.
Most popular replies, ranked by engagement
@narendramodi Let me guess. The answer is censorship & centralization?
@narendramodi Do they really care about children or just using them again to censor the adults as usual?
@narendramodi i’m sorry but i bust out laughing the second i heard him speak hahahahaha just put the takis in the bag- and no i don’t need a receipt, thank you https://t.co/XNbnLO0gUw
@narendramodi While some stay jealous, others build the future. 🇮🇳 https://t.co/uTK7QzRrXX
@narendramodi Indeed Sir... India is already taking Global Initiatives to Prevent such Misuse... https://t.co/UOAjoq1rUP
AI is a great technology, but from the very start I have been trying to make you understand: create awareness among people and implement tighter security systems, along with strong laws and punishments, before pushing people to adopt AI more broadly! You introduced digital payments suddenly, and it was good for everyone, but due to lack of knowledge lakhs of people are still being cheated by fraudsters and their bank accounts are emptied every day. Lakhs of cyber FIRs are registered daily, but they bring no benefit because investigations in India move so slowly. Few people ever get their defrauded money back: banking and police investigations bury victims in formalities, and the fraudsters withdraw the money before any investigation starts, so ultimately there is no recovery at all. This is the biggest loophole in digitization, and the same will happen with AI in new forms of fraud, cheating, and deepfakes if the government does not take it seriously! 😎