Depending on whether you ask a technological optimist or pessimist, we might see early versions of consumer-ready self-driving vehicles in the next few years, or not for another couple of decades. But no matter where you fall on that spectrum, it’s likely that the first generation of autonomous driving AI will be ferrying people around within your lifetime.
As an AI enthusiast, this thought probably leaves you either frightened or excited. The techie consumer in you is thrilled by the idea of commuting to work hands-free thanks to your robotic autopilot, or taking a nap on a long road trip. But the skeptical coder in you might worry that simple regression and pattern recognition algorithms aren't enough to keep you truly safe.
Plus, there’s the universal pattern of software (and tech) development: the first generation of a given tech product is usually terrible, thanks to rushed schedules or an inability to foresee future issues.
So is it smart to trust first-generation self-driving vehicles?
First, you might consider the economics of your decision. A first-generation self-driving car is going to be far more expensive than later generations of the same technology, and probably far more expensive than a manually driven alternative. Within a few years of the initial release, you’ll probably be able to find a much better deal on a used self-driving car on a marketplace like Swap Motors. For that reason alone, it may be a better idea to wait for subsequent generations of autonomous vehicles.
We also have to consider the competitive rush most companies are in. Consider the internal memos leaked from within Uber, where former Uber executive Anthony Levandowski is quoted as saying, “we need to think through the strategy to take all the shortcuts we can find,” and “I just see this as a race and we need to win, second place is the first looser [sic].”
A few months after those documents were released, an Uber self-driving car killed a pedestrian—the first fatal accident attributable to an autonomous vehicle. If companies are so hell-bent on being the first to get to market, they’re likely to cut corners and neglect the QA testing that all AI needs to be consistently successful.
Laws and Regulations
Thankfully, there are safeguards in place. Autonomous vehicle laws vary by state, but currently, no fully autonomous vehicles are allowed on American roadways. Most states allow for some kind of limited self-driving car features, or self-driving car testing, but lawmakers are cautious not to expose consumers to any more risks than necessary. Should this attitude continue, it may be enough to counteract executives’ push to get cars to market as quickly as possible; automakers will have to prove beyond a shadow of a doubt that their AI is capable of safely transporting passengers.
When First-Generation Is Second-Generation
By the time we get a fully functional, consumer-ready self-driving car, developers will have already had many years to perfect their algorithms and test them in live environments. Consider the fact that Waymo has been testing its vehicles since 2009, and in that span of time its fleet has driven more than 7 million miles—a distance that would take an average driver roughly 300 years to cover. On top of that, Waymo is testing its cars virtually, logging more than 2.7 billion virtual test miles in 2017 alone.
Taking this into consideration, the “first” generation you have access to could be more appropriately described as the second generation of autonomous vehicle.
We should also consider that even a suboptimal AI algorithm will probably be safer and more efficient than a comparable human driver. For example, there are more than 40,000 vehicular fatalities in the United States every year, and more than 90 percent of all traffic accidents are attributable to human error. The National Highway Traffic Safety Administration (NHTSA) would prefer to wait until autonomous vehicles are twice as safe as human drivers before they’re fully allowed on public streets. But even if they’re only 10 percent better than the average human driver, they could save 4,000 lives a year.
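The arithmetic behind that claim is simple enough to sketch out. Here’s a minimal back-of-the-envelope calculation using the figures above; it assumes a flat 40,000-fatality baseline and treats “twice as safe” as halving the fatality rate, both simplifications for illustration:

```python
# Back-of-the-envelope estimate: annual lives saved if autonomous
# vehicles reduce fatal crashes by some fraction versus human drivers.
# Baseline figure is the article's: ~40,000 U.S. vehicular fatalities/year.
ANNUAL_FATALITIES = 40_000

def lives_saved(improvement: float, baseline: int = ANNUAL_FATALITIES) -> int:
    """Lives saved per year given a fractional reduction in fatal crashes
    (e.g. improvement=0.10 means AVs are 10% safer than human drivers)."""
    return round(baseline * improvement)

print(lives_saved(0.10))  # 10% safer than humans -> 4000 lives/year
print(lives_saved(0.50))  # "twice as safe" (half the rate) -> 20000 lives/year
```

Even under the most conservative bar in the article, the payoff is measured in thousands of lives annually.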
Without intricate knowledge of the code responsible for piloting self-driving cars, you’ll have to rely on assumptions and baseline judgment to decide whether or not to purchase one. There are certainly risks, compounded by corporations’ eagerness to get autonomous vehicles on the road as fast as possible, but all it would take is a marginal improvement in efficiency and safety to justify the jump. Keep learning, watch for new developments, and try to keep your overly optimistic and overly skeptical sides in balance before making the final call.