Today Tesla FSD is ~9x safer than humans. Soon, FSD will be 1000x safer and driving manually will be considered dangerous. Every Tesla on the road feeds real-world data back to train the AI. Billions of miles. Every edge case. Every near-miss. No human driver can learn that fast. The fleet is the teacher and it never sleeps.
This RAND infographic (PDF) visualizes how many real-world miles are required to demonstrate AV safety improvements (e.g., ~5 billion miles to show a 20% reduction in fatalities). It illustrates why billions of fleet miles and continuous data collection are necessary to train and validate FSD, directly supporting the tweet's point that the fleet is the teacher and underscoring the scale needed to prove large safety gains.
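The statistical logic behind the RAND figure can be sketched as a sample-size calculation: how many miles must be observed to show, with reasonable confidence and power, that an AV fatality rate is 20% below the human baseline? The sketch below uses a one-sided normal approximation to Poisson event counts; RAND's exact method and assumptions differ, so the output is indicative of scale only, not a reproduction of their number.

```python
from math import sqrt
from statistics import NormalDist

# Hedged sketch: total miles T needed so that a one-sided test at 95%
# confidence with 80% power can distinguish a 20% lower fatality rate
# from the human baseline. Normal approximation to Poisson counts;
# RAND's published figure rests on different exact assumptions.
human_rate = 1.09 / 100_000_000       # ~1.09 fatalities per 100M miles (U.S.)
av_rate = 0.8 * human_rate            # hypothesized 20% reduction
z_alpha = NormalDist().inv_cdf(0.95)  # one-sided 95% confidence
z_beta = NormalDist().inv_cdf(0.80)   # 80% power

# Solve T*(h - a) = z_alpha*sqrt(h*T) + z_beta*sqrt(a*T) for T
miles = ((z_alpha * sqrt(human_rate) + z_beta * sqrt(av_rate))
         / (human_rate - av_rate)) ** 2
print(f"~{miles / 1e9:.0f} billion miles required")
```

Because fatal crashes are so rare per mile, even a large claimed improvement takes billions of observed miles to verify statistically, which is the crux of the infographic.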
Source: RAND Corporation
Research Brief
What our analysis found
Tesla's Q3 2025 Vehicle Safety Report claims that vehicles using Full Self-Driving (Supervised) recorded 1 crash for every 6.36 million miles driven, compared to an estimated U.S. national average of approximately 1 crash every 702,000 miles — a ratio Tesla and its supporters frame as roughly 9 times safer than human drivers. The data is drawn from approximately 2.5 billion telemetry packages received from the global fleet (excluding China) in that quarter alone, with collisions defined by a Delta-V threshold and airbag deployment criteria under federal regulation 49 C.F.R. §563.5. Tesla attributes a crash to FSD if the system was active at any point within five seconds before the collision — a narrower window than NHTSA's own 30-second reporting standard.
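The headline "~9x" figure follows directly from the two miles-per-crash numbers Tesla reports; a quick check of the arithmetic:

```python
# Verify the headline ratio from Tesla's Q3 2025 Vehicle Safety Report.
# Both figures are Tesla's own, not independent measurements.
fsd_miles_per_crash = 6_360_000      # FSD (Supervised): 1 crash per 6.36M miles
baseline_miles_per_crash = 702_000   # Tesla's estimated U.S. national average

ratio = fsd_miles_per_crash / baseline_miles_per_crash
print(f"FSD appears {ratio:.1f}x safer on Tesla's own metric")
```

The ratio works out to just over 9, matching the tweet's claim — but only on Tesla's own metric, with the methodological caveats discussed below.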
The tweet's claim that FSD will soon be "1,000x safer" and that manual driving will be considered dangerous is speculative and not supported by any published data or independent projection. While Tesla's quarterly reports do show an improving trend in miles-per-crash metrics over recent years, the leap from a reported 9x advantage to a three-order-of-magnitude improvement remains aspirational. Critically, independent safety experts, the GAO, and NHTSA itself have raised significant concerns about the methodology behind Tesla's comparisons, and a federal investigation covering roughly 2.9 million vehicles was opened in October 2025 after reports of red-light running, wrong-way driving, and crash injuries involving FSD.
The narrative that Tesla's fleet functions as an unparalleled AI training resource — learning from billions of miles and every edge case — is directionally accurate in describing how the company collects driving data at scale. However, the implication that this data pipeline alone guarantees exponential safety gains oversimplifies the engineering challenges of autonomous driving and ignores the gap between raw data collection and verified, real-world safety outcomes as measured by independent bodies.
Fact Check
Evidence from both sides
Supporting Evidence
Tesla's own Q3 2025 safety data underpins the ~9x claim
Tesla's Vehicle Safety Report states FSD (Supervised) vehicles experienced 1 crash per 6.36 million miles versus an estimated national average of 1 crash per 702,000 miles — a ratio of approximately 9 to 1, which is the direct source of the tweet's headline figure.
Insurer pricing decisions reflect lower perceived risk
Lemonade announced a product in January 2026 that cuts per-mile insurance charges for Tesla FSD-engaged miles by approximately 50%, citing data that FSD reduces accidents. This commercial decision by an independent insurer represents a real-world financial bet that FSD miles carry materially lower claim risk.
Massive telemetry scale supports fleet-learning narrative
Tesla reported receiving roughly 2.5 billion telemetry packages from its fleet in Q3 2025 alone, and the company has accumulated billions of real-world miles across its FSD and Autopilot programs since 2020. This scale of data collection is unmatched in the industry and supports the claim that the fleet serves as a vast training resource.
Quarterly trend data shows improvement over time
Multiple EV press outlets and third-party data watchers have tracked Tesla's quarterly safety reports and noted year-over-year improvements in miles-per-crash metrics, consistent with the argument that continuous fleet data ingestion is helping refine and improve FSD performance.
Contradicting Evidence
Apples-to-oranges comparison methodology
Tesla compares crashes detected via its own vehicle telemetry (using specific Delta-V and airbag thresholds) against a U.S. national baseline derived from FHWA vehicle-miles traveled and NHTSA sampling systems like CRSS and CISS. Experts note these use fundamentally different reporting methods — telemetry versus police reports — with different capture thresholds, making a direct 9x comparison potentially misleading.
Narrow 5-second attribution window may undercount FSD-related crashes
Tesla only attributes a collision to FSD if the system was active within 5 seconds of the event, which is significantly shorter than NHTSA's 30-second Standing General Order reporting window. This means crashes where earlier FSD behavior contributed — such as poor positioning or a delayed handoff to the human driver — could be excluded from the FSD tally.
NHTSA opened a federal probe into FSD safety in October 2025
The investigation covers approximately 2.9 million vehicles and was triggered by dozens of reported incidents, including red-light running, wrong-way driving, and crashes resulting in injuries. An active federal safety probe directly challenges the narrative that FSD is unambiguously safer.
The 1,000x safer prediction has no empirical or independent basis
No published study, regulatory body, or independent research organization has projected that any autonomous driving system will achieve a safety advantage of 1,000 times over human drivers. This claim is entirely speculative and not grounded in available data.
Driving-condition bias skews the comparison
FSD-engaged miles are disproportionately driven on highways and in favorable weather and lighting conditions, while the national crash average includes all road types, rural roads, nighttime driving, and adverse weather. The GAO has cautioned against such comparisons without normalization for driving environment and driver demographics.
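The distortion from driving-condition mix can be illustrated with a toy calculation. The per-environment crash rates below are invented for the sketch (not measured values); the point is that even if FSD were exactly as safe as humans in every environment, a highway-heavy mileage mix alone would make it look markedly safer in a raw miles-per-crash comparison:

```python
# Hypothetical per-environment crash rates (crashes per million miles).
# Invented for illustration only — not real data.
highway_rate = 0.5
city_rate = 2.0

def blended_rate(highway_share: float) -> float:
    """Overall crash rate for a fleet with the given share of highway miles."""
    return highway_share * highway_rate + (1 - highway_share) * city_rate

national = blended_rate(0.5)  # assumed national mix: half highway, half city
fsd = blended_rate(0.9)       # assumed FSD mix: skewed heavily to highways

# Same safety in every environment, yet the raw ratio suggests an advantage
print(f"apparent advantage from mix alone: {national / fsd:.2f}x")
```

With these assumed numbers the mix effect alone manufactures a near-2x "advantage", which is exactly why the GAO cautions against unnormalized comparisons.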