
Grok AI Diagnoses: Public Divided Over Medical Claims Online

Tweet analysis of the claim that Grok diagnoses X-rays/MRIs: 48.9% support it, 24.6% oppose. The public shows a mix of trust, doubt, and debate over AI accuracy in medicine.

@cb_doge posted on X

"You can upload your xrays or MRI images to Grok and it will give you a medical diagnosis. I have seen cases where it's actually better than what doctors tell you." https://t.co/nJJ5E7D72M


Community Sentiment Analysis

Real-time analysis of public opinion and engagement

Sentiment Distribution

Positive: 49%
Negative: 25%
Neutral: 27%
Engaged: 74%

Key Takeaways

What the community is saying — both sides

Supporting

1. Widespread amazement and praise for Grok's medical abilities: replies are filled with applause, emojis and short endorsements calling Grok "amazing," "the best," and a "wonder MD," with many users urging others to "just Grok it."

2. Numerous personal success stories and concrete examples: people describe uploading X-rays, MRIs and blood tests that Grok diagnosed in seconds (cases cited include appendicitis, fractures and tricky chronic conditions), often matching or outpacing human reports.

3. Speed and accessibility framed as life-saving benefits: multiple posts emphasize seconds-fast results versus days of waiting, suggesting quicker triage, faster ER referrals and potential to avert serious outcomes.

4. Growing trust and preference over some clinicians: many commenters say they now trust Grok more than certain doctors, blaming overworked or inattentive clinicians and pointing to COVID-era credibility loss in medicine.

5. Legal, regulatory and liability questions: users raise concerns about practicing medicine without a license, insurance acceptance, potential lawsuits and how AI diagnoses will be handled in surgical or regulated contexts.

6. Job-impact debate and calls for responsible use: a strand of replies fears doctor displacement, while others argue Grok should augment clinicians; several physicians call for medical training to include AI and for tools to be used alongside professional care.

7. High curiosity about adoption and practical use: people ask how to upload images and express interest in running mobile clinics, cross-referencing medications and international use, showing strong demand for accessible workflows.

8. Enthusiasm tempered by caution: while reactions lean heavily toward excitement and gratitude, multiple voices remind readers that Grok can err and that responsible, regulated integration with human clinicians is important.

Opposing

1. High safety anxiety: many replies warn that Grok should not be trusted as a standalone diagnostic; users repeatedly call it dangerous, insist AI can't replace clinical context, and urge people not to act on AI results without a physician.

2. Frequent reports of errors and hallucinations: several people recount concrete misdiagnoses (wrong MRI readings, odd image-identification mistakes) and accuse Grok of "lying" or inventing results, eroding confidence in its outputs.

3. Accountability and liability worries: commenters ask who is responsible if AI is wrong, predict waves of malpractice claims, and emphasize that a company or clinician should be answerable, not an unaccountable model.

4. Privacy and data-security fears: a strong thread of concern centers on uploading sensitive records to X/Grok, with repeated HIPAA questions and distrust of Elon's platform handling personal health data.

5. Preference for augmentation, not replacement: many endorse a hybrid approach of doctor plus AI as the safest path, seeing AI as a first-pass tool but insisting final decisions belong to trained clinicians.

6. Skepticism and political distrust: some replies frame the offering as publicity or data-mining by Elon, accusing Grok of censorship or propaganda and expressing broad distrust of the motives behind it.

7. Calls for validation and regulation: users demand clinical validation, oversight, and even formal frameworks (an "AI constitution" or regulation) before such features are promoted for real medical use.

8. Humor, trolling and disbelief: amid the criticism are jokes (bikini uploads, Powerball predictions, "peener" queries) and mockery that highlight both amusement and incredulity toward the product.

Top Reactions

Most popular replies, ranked by engagement


@MaryBowdenMD

Opposing

Grok misdiagnosed my MRI.

229 · 39 · 11.4K

@kmacmetax

Opposing

Can I load up my browser history and get a mental diagnosis?

165 · 11 · 13.5K

@dyatlov75

Opposing

Never change your treatment plan or ignore a doctor’s advice based solely on an AI’s interpretation of an image. If the AI finds something concerning that your doctor didn't mention, bring the AI's "findings" to your medical team for a professional review.

63 · 7 · 3.1K

@USBornNRaised

Supporting

I happen to agree with Elon on Grok and the MRI images. Not only are they accurate-the results come within seconds. Currently patients can wait up to 3 days for results to be read by a radiologist in a hospital setting.

53 · 4 · 4.3K

@elliesangelwing

Supporting

I believe it, i enjoy using GROK 4 for info, i wonder how would that work with todays health care and insurance system, all the rules and regulations especially if there is a surgery to be involved with diagnoses. Will the GROK diagnosis be accepted?

36 · 6 · 5.2K

@DWebbs

Supporting

Grok is a better doc than most docs Coming for soooo many jobs

29 · 4 · 6.4K

This article was AI-generated from real-time signals discovered by PureFeed.

PureFeed scans X/Twitter 24/7 and turns the noise into actionable intelligence. Create your own signals and get a personalized feed of what actually matters.
