A vast amount of time and money has been invested in attempts to develop and deploy truly autonomous, self-driving vehicles for a couple of decades now. While the idea seemed tantalizing and companies rushed to pump investment capital into the concept, the results have been lukewarm at best. Most of the available self-driving vehicles still frequently become “confused” in all but the simplest grid layouts of streets, and they default to simply stopping in the middle of traffic if they don’t know what to do. In more extreme cases, cars have randomly decided to slam into trees, killing the passengers in at least one case. One person’s Tesla kept slamming on the brakes every time it passed a billboard with a stop sign painted on it. And just recently, questions have been raised as to whether Teslas will always stop when children run into the street.

This has led to a situation where supposedly autonomous cars are being sold, typically for vast amounts of money, but you can’t operate them in “self-driving” mode unless you are sitting at the wheel ready to take control at a moment’s notice. Now, some industry leaders are finally admitting that the barrier may be too high to overcome and we may never be able to remove the human from the equation. And if that’s the case, what’s the point of having a self-driving car? (Reuters)

Autonomous vehicle (AV) startups have raised tens of billions of dollars based on promises to develop truly self-driving cars, but industry executives and experts say remote human supervisors may be needed permanently to help robot drivers in trouble.

The central premise of autonomous vehicles – that computers and artificial intelligence will dramatically reduce accidents caused by human error – has driven much of the research and investment.

But there is a catch: Making robot cars that can drive more safely than people is immensely tough because self-driving software systems simply lack humans’ ability to predict and assess risk quickly, especially when encountering unexpected incidents or “edge cases.”

The linked report suggests that the artificial intelligence may never be “intelligent” enough to do what human beings are generally capable of doing. (Well, not all of us, of course. A couple of days of driving in Florida will tell you that.) That may be true in some ways, but the deeper problem is that, more than raw “intelligence,” the AI systems lack human intuition. They can’t match a human driver’s knack for guessing what the rest of the unpredictable humans on the road will do at any given moment. In some of those cases, it’s not a question of the car failing to realize it needs to do something, but rather of its failing to make a correct guess about what specific action is required.

This isn’t the first time that serious questions about the long-term prospects of fully autonomous vehicles have been raised. More than a year ago, industry analysts were warning Elon Musk that his plans to launch a fleet of cars without steering wheels in the near future were implausible. They insisted that true, fully autonomous cars were still decades away, assuming they would ever become a reality. Money is still being poured into the effort and the cars are still being sold, but those predictions appear to be on track.

This situation brings us back to the fundamental question of whether or not this technology is worth it at present, particularly when you consider how much these cars tend to cost. If a company is promising “self-driving cars,” the consumer should expect the cars to do the driving, right? But you’re still expected to sit in the driver’s seat and be ready to take the wheel. Teslas won’t even run in that mode without the driver’s weight being detected on the driver’s seat and their hands touching the wheel at least every ten seconds.

So why bother? If you can’t get some work done or even watch a movie or read a book while traveling, you’re basically just babysitting your AI chauffeur. What’s the point? I suppose you might argue that the AI is watching the road constantly while the human driver’s attention may wander. But it seems to me that the opposite could easily be just as true. When someone knows (or at least believes) that the car is taking care of the driving, their attention might be even more likely to head off into flights of fancy. And in that case, when a toddler suddenly chases a ball into your lane from in between two parked cars, neither the vehicle nor the driver is at the top of their game.

I briefly considered diving into the question of what happens when the AI “wakes up” and decides all of the humans should drive off a cliff at one time. But we’ve beaten that dead horse enough for a while now. Suffice it to say that this technology is not living up to its initial promises and it appears unlikely that it will in the immediate future.
