This story was originally published in our Nov/Dec 2022 issue as "Your Robotaxi Is Almost Here."
If you need a ride in suburban Phoenix, you can hail a driverless Waymo One minivan on an app. “Good morning,” says a cheerily robotic female voice, greeting you by name as you board. “This car is all yours, with no one up front.” To start the ride, just push the blue button on the rear display screen. Then, off you go, with a ghost behind the wheel.
Waymo, a subsidiary of Google’s parent company, Alphabet, has been running this rider-only taxi service since late 2020. Other robotaxi fleets are rolling out in various U.S. cities; the General Motors-backed Cruise, for example, launched this summer in San Francisco. And in parts of the Sun Belt, autonomous trucks in test mode are already hauling freight for FedEx and other companies — with human drivers tagging along as backups, ready to take over if necessary. Within a year or two, these trucks could be rolling down interstates with no one in the cab. As for autonomous vehicles, or AVs, for personal use, General Motors says it might deliver the first one “as soon as mid-decade.”
Such breakthroughs suggest reality could finally be starting to catch up to the hype about AVs. Five to seven years ago, tech firms and car manufacturers were predicting self-driving technology would be widely deployed by 2020. But developers admit the challenge has proven much more difficult — and expensive — than anticipated. Despite advances in artificial intelligence (AI), computing and perception, it could be years, if not decades, before fully self-driving cars are ubiquitous.
What’s increasingly clear is that the driverless revolution is going to play out slowly, mile by mile, one neighborhood at a time. For most of us, autonomous driving will be possible only under limited circumstances — constrained to particular roads or highways at certain times of day, and only under favorable traffic and weather conditions.
“This is not a ‘we wake up some morning and people don’t own cars anymore, and just take robotaxis,’ ” says Jesse Levinson, co-founder of Zoox, an Amazon-owned developer of self-driving taxis. “This is going to be a relatively measured transition for society. And I think that’s normal and that’s healthy for something this disruptive and this complicated.”
Everyday driving has been getting more automated for decades, starting with cruise control in the late 1950s and early 1960s. Today, cars have features that can maintain a desired distance from the vehicle ahead, keep centered in their lane and brake in an emergency. Some systems even let drivers rest their hands in their laps on long stretches of highway if they’re paying attention.
Technically, though, none of these elements are self-driving systems. Instead, they are deemed driver-support features under the classification system established by SAE International (formerly known as the Society of Automotive Engineers). The system defines six levels of automation, from 0 (no automation) to 5 (fully automated). None of the personal vehicles on U.S. roadways today exceed level 2, and only a couple of level 3 models have gained approval in other countries.
At levels 0 through 2, the driver remains in control of the vehicle, even if their feet are off the pedals and they are not steering. Cadillac’s Super Cruise level 2 system, for example, offers hands-free driving on the highway, but an infrared camera monitors the driver’s behavior. If the system determines you aren’t being attentive, it will require you to take back control, or bring the car to a safe stop on its own.
Soon, drivers won’t always have to pay attention. The first level 3 features, coming to U.S. vehicles in the next year or two, will allow people to relax, read a book or check their email in slower traffic. At level 3 and above, the car — not the human — is responsible for controlling the vehicle. Level 3 systems could be especially tricky, though; when activated, they are capable of unsupervised driving under very limited conditions, but will hand back control to the human driver when necessary. A good example is a “traffic-jam chauffeur,” which would give drivers a mental break during relatively slow stop-and-go traffic. “[Cars] have the hardware that they need to do the job, and now [engineers] have to develop software and prove it is safe enough all of the time,” says John Zinn, director of autonomous solutions at Ansys, a company that helps manufacturers validate their technology.
Many companies are working on level 3 systems. Honda has released a vehicle equipped with this technology in Japan. Volvo has also begun testing its level 3 Ride Pilot feature in Sweden, with plans to hit California roads this year. And Mercedes-Benz began selling its Drive Pilot system this summer in Germany.
While legacy automakers are building toward autonomy by gradually scaling up driver-assistance features, many tech companies are leaping straight to level 4 systems for applications like robotaxis, delivery vehicles and heavy trucks. Such systems don’t require any driver input, but also only work in specific sets of conditions that they’re designed to handle — often restricted to particular hours of the day, ideal weather or approved neighborhoods. Level 5 automation means the car is smart enough to drive anywhere, in all conditions. So far, no such technologies exist.
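The division of responsibility across the SAE levels described above can be summarized in a short sketch. The level names and descriptions here are paraphrased from the article, not the official SAE J3016 wording:

```python
# A paraphrased summary of the SAE automation levels as described in this
# article. Names and descriptions are informal, not the official SAE text.
SAE_LEVELS = {
    0: ("No automation", "human does all the driving"),
    1: ("Driver assistance", "human drives; one assist feature active"),
    2: ("Partial automation", "human supervises; system can steer and brake"),
    3: ("Conditional automation",
        "system drives in limited conditions; human must take over on request"),
    4: ("High automation",
        "system drives within its design domain; no driver input needed there"),
    5: ("Full automation", "system drives anywhere, in all conditions"),
}

def responsible_party(level: int) -> str:
    """Per the article: at level 3 and above, the car, not the human,
    is responsible for controlling the vehicle."""
    return "vehicle" if level >= 3 else "human driver"
```

So a Super Cruise-style level 2 feature still leaves the human responsible (`responsible_party(2)` returns `"human driver"`), while a traffic-jam chauffeur at level 3 shifts responsibility to the car while it is engaged.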
Trained and Tested
Riding in the back seat of a rider-only Waymo One minivan is both awe-inspiring and uneventful, as one would hope. The catch: Its vehicles are “geofenced,” meaning they operate within a specific area where they have been thoroughly trained and tested, starting with the East Valley of Phoenix. Waymo developers targeted specific areas where the weather is typically nice, the streets are wide and there aren’t a lot of complicated intersections. This summer, the company also started test trips in parts of downtown Phoenix and San Francisco.
These new settings will vet the potential for AVs in densely populated cities. Driving in downtown San Francisco, for example, is far more complex, for both humans and robots. The sheer volume of drivers combined with pedestrians, cyclists, cable cars and convoluted intersections adds unpredictability, and with it, potential for error. Waymo’s model involves taking baby steps for these new fleets, starting with employees and a few hundred “trusted testers” before expanding its passenger service to the public.
AI is adept at handling routine, repetitive tasks, but many studies show the public is not ready to trust an algorithm making decisions in the front seat. If your smartphone or laptop fails, it’s an inconvenience. If a self-driving car fails, lives could be at stake. One comprehensive review paper published in AI and Ethics in 2021 identified AV safety as the paramount factor for public acceptance. The work compiled multiple surveys over the past decade showing that a majority of people were “highly concerned” about the safety of AV systems and doubted they could perform better than a human driver. Meanwhile, an annual AAA survey in 2021 found that 54 percent of Americans are afraid to ride in a self-driving car, and another 32 percent are unsure about it.
At present, there are no federal regulations in the U.S. for self-driving cars to help build public trust. That means it’s up to carmakers to self-certify that their vehicles meet Federal Motor Vehicle Safety Standards and to prove that they are free from unreasonable risks. Earlier this year, U.S. safety regulators removed one hurdle to robotaxi deployment by acknowledging that such vehicles shouldn’t be required to have manual controls — like steering wheels and pedals — in order to meet crash safety standards.
But proving that the risk of deploying AVs is acceptable is a taller hurdle. “It’s an enormously difficult challenge to decide how safe is safe enough,” says Sam Abuelsamid, who tracks AV development at Guidehouse Insights. “Is this system good enough to deploy without a safety operator? There are no regulatory standards for that anywhere. So everybody’s trying to figure this out.”
Companies are currently striving to build an evidence-based argument, or “safety case,” for deployment, which entails explaining the design of their technology and how they validate its safety, through voluntary reports submitted to the National Highway Traffic Safety Administration (NHTSA).
In 2016, the RAND Corporation, a public policy research organization, produced a study that mathematically calculated what it would take to prove whether an AV matches the safety of a human driver, whose risk is about 1.09 fatalities per 100 million miles driven. The researchers determined that such proof would require 8.8 billion miles of driving, which would take roughly 400 years for a fleet of 100 vehicles test-driven 24 hours a day.
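The RAND figure is easy to sanity-check with back-of-the-envelope arithmetic. The average test speed below is an assumption chosen for illustration (the study's exact value is not given here); the fleet size, round-the-clock duty cycle, and mileage target come from the article:

```python
# Back-of-the-envelope check of the RAND estimate.
# AVG_SPEED_MPH is an assumed value for illustration only.
FLEET_SIZE = 100       # test vehicles (from the study)
HOURS_PER_DAY = 24     # driven around the clock (from the study)
AVG_SPEED_MPH = 25     # assumed average test speed
TARGET_MILES = 8.8e9   # miles of proof RAND says are needed

miles_per_year = FLEET_SIZE * AVG_SPEED_MPH * HOURS_PER_DAY * 365
years_needed = TARGET_MILES / miles_per_year
print(f"{miles_per_year:,.0f} miles/year -> about {years_needed:.0f} years")
# -> 21,900,000 miles/year -> about 402 years
```

At roughly 22 million fleet miles per year, the 8.8 billion-mile target indeed works out to about four centuries, which is why developers lean so heavily on simulation instead.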
Since that’s essentially impossible, AV developers rely heavily on simulations. They run their test cars over the same roads, again and again, and document the incidents to inform those simulations. The real-road data can be analyzed and manipulated via simulation to continually improve their systems.
The autonomous tech developer Argo AI aims to build its safety case by deploying its test vehicles (with safety operators) on Lyft’s ride-hailing network. Then they will compare their performance to the rate of collision incidents reported within the same geofenced area. “We’ll roll out street-by-street, block-by-block,” says Argo AI CEO Bryan Salesky, whose company is backed by Ford Motor Co. and Volkswagen. But he adds, “the business only works if you can scale it across many cities at once.” Other robotaxi developers are also inching closer to deployment — with limitations, such as permits restricted to late-night and early-morning operation.
Autonomous driving at night is generally easier, explains Abuelsamid. “Prediction is the single hardest part of that AV software stack,” he says. At night, cars are parked, and the streets are quiet. “There are no pedestrians, maybe a few raccoons. That’s it. If you run over a raccoon, nobody cares.” AV test fleets could also be commercialized soon in a handful of other fair-weather cities, like Miami, Las Vegas and Austin, Texas. For now, though, they still have backup drivers and are limited to specific areas.
Autonomous trucks, in fact, will likely beat robotaxis to market, mainly because the engineering task is easier and the economics are more favorable. Big rigs can stick to highways, on well-rehearsed routes between depots, for example, without worrying about pedestrians or cyclists. Amid a shortage of long-haul drivers, self-driving trucks can’t arrive fast enough.
At least a half-dozen companies are working on them, including Waymo, TuSimple, Embark and Aurora. Most are already making autonomous runs (with safety drivers) on highways in Arizona and Texas while they continue testing their systems. The long-term vision is a network of transfer hubs, where cargo trailers would be handed off between human and robot drivers. Autonomous trucks would navigate the highway between the hubs. Humans in conventional trucks could then take over on local streets to the final destination.
As the technology advances across all types of vehicles, so does public confusion about autonomous vehicles — fueled partially by misinformation. Safety advocates, including Consumer Reports and AAA, have criticized carmakers for attaching names like Autopilot to their assisted-driving features. Such marketing labels risk lives, they say, because drivers put too much confidence in the technology and become complacent behind the wheel.
“You better be real clear with the customer what the promise is you’re giving them,” says Salesky. “This is where the industry has some work to do.”
Beyond that, safety advocates and the National Transportation Safety Board have been especially critical of Tesla for using its customers — and anyone sharing the road with them — as guinea pigs for its improperly labeled “full self-driving” beta feature. (It’s only a level 2 system under SAE’s nomenclature.) The NHTSA is investigating a series of deadly accidents involving Tesla vehicles and has stepped up its scrutiny of assisted driving technologies in general.
Crashes, however rare, undermine confidence in vehicle automation. In March 2018, an Uber test vehicle struck and killed a pedestrian near Phoenix, the first known fatality involving an AV. This summer, a federal investigation revealed that Tesla’s Autopilot feature was involved in more than 250 crashes in roughly a one-year period, including several fatalities.
These types of high-profile crashes can swing public sentiment, according to a December 2021 study in IATSS Research that mined Twitter data. The authors analyzed 1.7 million tweets about AV technology before and after key accidents and found a significant spike in negative tweets after the crashes were publicized.
While roughly 40,000 Americans die every year in traffic accidents, just one highly publicized crash in a self-driving car can reverberate and turn people against them. That’s why the biggest hurdle to self-driving cars might be earning the public’s trust.