
Tech Talk: Self-Driving Cars Will Need Some Human Input

Although it isn’t currently the case, eventually artificial intelligence will be utterly perfect. Bots that can’t pass a Turing test will be kids’ toys, nobody will have to work, and AI programs will be more than knowledgeable enough to program each other. For now, however, even the most advanced AIs need human input. Microsoft found out the hard way that even something as trivial as a chatbot can go horribly wrong when left to its own devices, and of all the applications of deep learning and machine learning out there, self-driving cars rank near the top of the list of those that need human input the most, if not at the very top.

The way a self-driving system works right now, a lot of the decisions and parameters the AI produces have to be reviewed by humans to make sure they’re right. One example is the bounding boxes drawn around other cars. If you’ve ever played a video game where your car hit a wall from yards away or your sword sheared harmlessly through an enemy’s arm or the top of their head, you’re familiar with the concept of faulty bounding boxes. It should go without saying that these are extremely important in self-driving cars, and things can get very messy very quickly if they’re not accurate. With humans on hand to inspect bounding boxes whose confidence ratings were low, or whose use resulted in a close call or collision, self-driving AIs can eventually learn how to draw a near-perfect bounding box by comparing the human-corrected data to their own (a rough sketch of that review loop follows below). Naturally, that’s far from the only application that still requires a human’s warm touch.
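To make that loop a little more concrete, here is a minimal Python sketch of how a perception pipeline might flag low-confidence bounding boxes for human review and turn the corrections into new training data. The threshold, data structures and function names are illustrative assumptions, not any carmaker’s actual API.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative confidence threshold: boxes scored below this get a human look.
REVIEW_THRESHOLD = 0.80

@dataclass
class BoundingBox:
    x: float           # left edge in image coordinates
    y: float           # top edge
    width: float
    height: float
    label: str         # e.g. "car", "cyclist", "pedestrian"
    confidence: float  # model's own score, 0.0 to 1.0

@dataclass
class ReviewItem:
    frame_id: str
    predicted: BoundingBox
    corrected: Optional[BoundingBox] = None  # filled in later by a human annotator

def triage_detections(frame_id: str,
                      detections: List[BoundingBox],
                      had_close_call: bool) -> List[ReviewItem]:
    """Queue up detections that need a human look: anything scored below the
    threshold, or every box from a frame that was involved in a close call."""
    return [ReviewItem(frame_id=frame_id, predicted=box)
            for box in detections
            if had_close_call or box.confidence < REVIEW_THRESHOLD]

def to_training_pairs(reviewed: List[ReviewItem]):
    """Once annotators have drawn corrections, pair them with the original
    predictions so the model can be retrained toward the human ground truth."""
    return [(item.predicted, item.corrected)
            for item in reviewed
            if item.corrected is not None]
```

In a real pipeline the review queue would feed an annotation tool rather than a Python list, but the idea is the same: the model’s weakest guesses are exactly the ones a human double-checks, and the corrections flow back in as training data.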

Things like dealing with heavy rain, telling a stationary fixed-gear bike from a moving one and deciding what to do when faced with a jaywalker are all matters that self-driving cars have had to refer back to their human creators for an answer, and there are plenty of other such quandaries. While the list may seem endless, of course, it isn’t. Eventually, self-driving cars will be able to judge, make decisions, follow rules, make tough calls and learn new things all on their own. That day is still quite far away, with self-driving AI still making basic mistakes like assuming a bus will yield while the car swerves to avoid sandbags. And while Google’s self-driving cars have been at fault in almost none of their accidents, with that one incident the only likely exception, that record is nowhere near enough to consider them safe. They will need to all but eliminate not only crashes but also close calls, misjudgments and bad or incomplete data, and they will need to learn to prioritize, up to and including deciding, in an unavoidable accident, whether to risk harm to their own occupants in order to spare somebody else.

Self-driving cars are certainly not the only branch of the AI world that will require some level of human input for a long time to come, but they are more than likely the branch that will require the most, simply because one tiny mistake on the road can jeopardize the safety of the passengers and any nearby motorists and pedestrians. While Google’s self-driving cars already get around better than the average human driver, a self-driving vehicle finding itself in a situation it knows nothing about, whether deep in a rural area or flying down the highway, would not be a fun experience for anybody involved. So far, nobody can really estimate when self-driving cars will reach such a level of refinement that they won’t need outside help, or when AI will become advanced enough to provide that outside help in a human’s place, but if you’re reading this, it’s highly likely that you will see both of those milestones within your lifetime.