AI Programs Are Being Trained To Deal With Human Error

Anca Dragan heads a team at the University of California, Berkeley, whose job is to flesh out possible nightmare scenarios in human-AI relations that could result from humans being vague or unclear, and then figure out how to prevent them from happening. Fittingly, the department is called the Center for Human-Compatible AI, and its staff do exactly what the name says. Humans make mistakes: they misrepresent their wishes or leave out important details, leading to misunderstandings even among other humans. When it comes to AI, especially with advanced robotics involved, things could go extremely sour in an extreme hurry. That's where Dragan and her team come in.

The Center for Human-Compatible AI is built around the idea that AI can be taught to deal with human inconsistency in a few ways. One key approach, according to Dragan herself, is to have the AI examine a task it's given, infer the goal behind that task, and then judge whether the action it's been assigned, and the methods it plans to use, would actually further that goal. Building on that, Dragan and her team also want AI programs to be able to tell when an objective they've been given doesn't match up with what a human actually wants, or with what's good for that human and those around them. The plan is to teach AI programs to prioritize outcomes, tasks, and values in much the same way humans do. The goal is for an AI program to recognize when its task is questionable or its instructions are unclear, then double-check with the human who issued those instructions to obtain clarification, or perhaps warn the human of a predicted bad outcome and prompt a total change of plans, as sketched below.
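To make that "check before acting" behavior concrete, here is a minimal sketch in Python. It is not Dragan's actual system; the goals, plans, toy scoring function, and confidence threshold are all hypothetical stand-ins. The agent keeps a belief over what the human really wants, scores candidate plans against that belief, and asks for clarification instead of acting when the instruction is ambiguous.

```python
# A minimal sketch of an agent that infers the goal behind an instruction
# and double-checks with the human when it isn't confident. All names
# (Goal, plan_score, CONFIDENCE_THRESHOLD) are hypothetical stand-ins.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # below this, the agent asks instead of acting

@dataclass
class Goal:
    name: str
    probability: float  # agent's belief that this is the human's true goal

def plan_score(plan: str, goal: Goal) -> float:
    """Toy scoring of how well a plan furthers a goal (stand-in for a real model)."""
    overlap = len(set(plan.split()) & set(goal.name.split()))
    return overlap / max(len(goal.name.split()), 1)

def choose_action(plans: list[str], goals: list[Goal]) -> str:
    # Expected usefulness of each plan under the belief over goals.
    def expected(plan: str) -> float:
        return sum(g.probability * plan_score(plan, g) for g in goals)

    best = max(plans, key=expected)
    top_goal = max(goals, key=lambda g: g.probability)

    # If the agent isn't confident which goal the human meant,
    # it double-checks rather than executing -- the behavior described above.
    if top_goal.probability < CONFIDENCE_THRESHOLD:
        return f"ASK: did you mean '{top_goal.name}'? (confidence {top_goal.probability:.0%})"
    return f"EXECUTE: {best}"

if __name__ == "__main__":
    goals = [Goal("clean the kitchen", 0.55), Goal("clean the garage", 0.45)]
    plans = ["mop the kitchen floor", "sweep the garage"]
    print(choose_action(plans, goals))
    # -> ASK: did you mean 'clean the kitchen'? (confidence 55%)
```

The key design choice in this toy version is that asking is the default whenever the belief over goals is too spread out; a real system would use far richer models of goals and plans, but the decision structure, act only when confident, otherwise query the human, is the same one described above.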

Popular science fiction and real experts alike have waxed philosophical over the years about the chaos that rogue, or even well-intentioned but misinformed, artificial intelligence could cause under just the wrong circumstances, and the current push to combine AI with advanced robotics only raises the stakes. The efforts of Dragan and her team, and of others like them, aim to prevent exactly the sort of situation that movies like I, Robot and experts like Elon Musk have warned about.