Self-driving cars were supposed to make human error obsolete.
No more distracted driving. No more drunk drivers. No more fatigue-related crashes at 2 a.m. on an empty highway.
But as autonomous vehicles expand across the United States from Arizona and California to Texas and Nevada, one question keeps resurfacing: Are autonomous cars actually safer, or are we just shifting the risk?
What the Data Shows So Far
Human error contributes to the overwhelming majority of traffic crashes in the U.S., according to federal safety agencies. Speeding, impairment, distraction, and failure to yield remain leading causes of serious collisions.
Autonomous vehicle technology aims to eliminate those variables.
Advanced driver-assistance systems (ADAS) and fully autonomous systems rely on:
- Radar
- LiDAR
- Cameras
- AI-based decision-making software
- Real-time mapping and object detection
Some early reports suggest that autonomous systems meaningfully reduce certain types of crashes.
Are Autonomous Cars Statistically Safer?
It depends on how safety is measured. Some autonomous vehicle developers have released data suggesting their systems crash less often than human drivers, at least within specific conditions and mileage tracked.
For example:
- Waymo’s fully autonomous vehicles logged over 7.14 million miles of driverless operation with substantially lower crash rates than comparable human benchmarks. In one independent analysis, collision rates for autonomous systems were roughly 0.6 incidents per million miles versus 2.8 incidents per million miles for human drivers, indicating about an 80% lower crash frequency for the autonomous fleet.
- Peer-reviewed research covering 56.7 million autonomous miles found statistically significant reductions in several types of crashes, including up to 96% fewer intersection crashes compared to human drivers in similar conditions.
- In Austin, data from Waymo’s robotaxi operations showed 81% fewer injury-causing crashes and 94% fewer airbag deployments than human drivers in the same city during the same period.
Meanwhile, broader national reporting shows that autonomous test vehicles, including both autonomous driving systems (ADS) and advanced driver-assistance systems (ADAS), were involved in about 132 collisions per million vehicle miles traveled in 2023, according to mandatory AV collision reports. However, that figure includes all reported incidents and isn’t adjusted for deployment conditions or service types.
But those comparisons don’t always tell the full story. Many autonomous vehicles operate in limited, geo-fenced areas with mapped roads and favorable weather, unlike the wide range of conditions human drivers face every day. They also log far fewer total miles, meaning the data sets are smaller and less varied. On top of that, reporting rules for autonomous crashes can differ; even minor contact that might go unreported in a typical human-driver incident may be formally logged in autonomous testing programs.
In short, some autonomous systems show lower crash rates in the environments where they operate, but true apples-to-apples comparisons remain difficult because deployment conditions, mileage exposure, and reporting standards are not the same.
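The percentage reductions quoted above follow from simple rate arithmetic. As an illustrative sketch (using the roughly 0.6 vs. 2.8 incidents-per-million-miles figures cited earlier; the helper function is hypothetical, not from any cited study):

```python
def crash_rate_reduction(av_rate: float, human_rate: float) -> float:
    """Percentage reduction in crash frequency.

    Both rates are expressed in incidents per million miles driven.
    """
    return (human_rate - av_rate) / human_rate * 100

# Rates cited above: ~0.6 incidents per million miles for the autonomous
# fleet vs. ~2.8 for comparable human drivers.
print(round(crash_rate_reduction(0.6, 2.8)))  # prints 79, i.e. "about 80% lower"
```

Note that this comparison is only as good as its denominators: if autonomous miles are logged mostly in geo-fenced, good-weather conditions, the two rates are not measuring the same driving task.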
The Incidents Raising Concerns
Despite technological advances, autonomous vehicles have been involved in high-profile crashes, including fatal pedestrian accidents and multi-car collisions.
Federal investigations into some of these crashes revealed issues such as:
- Failure to properly detect pedestrians at night
- Inconsistent response to unusual road conditions
- Software limitations in complex urban environments
- Overreliance on automation by human “safety drivers”
Autonomous systems don’t get tired. But they also don’t “understand” the road the way humans do. They operate based on programming, prediction models, and sensor interpretation.
And when something unexpected happens, such as a construction zone, road debris, an emergency vehicle, or an erratic driver, the system may not always respond perfectly.
Two incidents in particular illustrate the complexity of the issue.
1. The First Fatal Collision Involving a Truly Driverless Vehicle
In January 2025, San Francisco saw what’s widely reported as the first fatal multi-vehicle crash involving a fully autonomous (no driver) robotaxi. A high-speed vehicle struck several stopped cars, including a Waymo robotaxi at a red light, killing one person in another vehicle and injuring others. Although the autonomous system in the Waymo wasn’t determined to be at fault, the incident raised serious questions about how driverless technology interacts with unpredictable traffic and the broader legal implications when someone dies in a crash involving an autonomous vehicle on public roads.
2. Pedestrian Fatality During Uber’s Early AV Testing in Arizona
On March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber test vehicle in Tempe, Arizona, the first pedestrian fatality involving a vehicle operating in autonomous mode with a human safety driver present. Herzberg was walking her bicycle across the street when the vehicle, in autonomous mode, failed to stop in time. This case became a landmark incident in autonomous vehicle safety discussions, highlighting limitations in object detection and emergency braking systems at the time.
These cases highlight a central issue. The debate is no longer just about whether artificial intelligence can make mistakes. The more pressing question is what happens legally, technologically, and ethically when it does.
The Liability Question: Who Gets Sued?
When a human driver causes a crash, fault analysis typically focuses on negligence, such as speeding, distracted driving, or failure to yield.
Autonomous vehicle crashes complicate that framework.
Potentially responsible parties may include:
- The vehicle manufacturer
- The software developer
- A fleet operator
- A human safety driver
- A maintenance contractor
In some states, including Arizona and California, autonomous vehicle testing has expanded rapidly. But liability laws have not evolved at the same speed as the technology.
If a software system misjudges a pedestrian crossing, is that driver error? Product defect? Programming flaw? Sensor failure?
Determining fault in self-driving car incidents often requires deep technical investigation, data downloads from the vehicle, and expert analysis.
And one reality remains constant: corporations and insurers move quickly after high-profile crashes to control the narrative and limit exposure.
For injured victims, these cases are rarely simple.
Regulation Is Still Catching Up
The federal government has issued guidance for automated driving systems, but comprehensive nationwide legislation remains limited.
States regulate deployment differently. Some require permits and oversight. Others have taken a more hands-off approach to encourage innovation.
That patchwork approach creates inconsistencies in:
- Reporting requirements
- Insurance standards
- Testing transparency
- Public disclosure of crash data
As autonomous vehicles become more common, pressure is growing for clearer regulatory standards.
So Are They Safer?
Autonomous vehicles may reduce crashes tied to distraction, impairment, and fatigue, and that’s significant. But they also introduce new variables, from software reliability and data accuracy to system updates, human overconfidence, and cybersecurity risks. While the technology is improving, it has not yet proven clearly safer in every driving environment. For now, autonomous vehicles reduce certain traditional dangers while raising new legal and technological questions. Until those are fully resolved, America’s roads remain shared by humans and machines alike.