
AH Tech Talk: Google Discusses Self-Driving Car Safety

On Monday, Google revealed that its self-driving cars had been involved in eleven minor incidents, or “fender benders.” According to Google, none of these incidents were caused by the self-driving cars themselves; rather, conventional, human-driven cars drove into the Google cars. The eleven incidents occurred in California over the last six years, although four happened in the last nine months. Chris Urmson, director of Google’s self-driving car program, wrote in a blog post that the self-driving cars have been sideswiped a couple of times, rear-ended seven times (mostly at traffic lights, but also on the freeway), and hit once by a car that rolled through a stop sign. None of these incidents caused any injuries.

Google goes on to state that statistics for minor accidents are not accurately recorded, because many people simply do not report these incidents to the police. These are the most common sorts of incidents: light damage and no injuries. Google, however, has to report its figures: since 16 September 2014, auto manufacturers testing autonomous cars in California must report that testing to the DMV, and any crashes must be recorded as well. Before September there were no testing regulations in place. Urmson continued, “We have a detailed review process and try to learn something from each incident, even if it hasn’t been our fault.” California law restricts the DMV from releasing information about where these incidents occurred and what happened, but an Associated Press article claims that two of the recent incidents occurred with the car in self-driving mode and the other two while a human was driving the car.

Each self-driving car has GPS tracking, radar and, of course, the self-driving software that allows the vehicle to recognize street signs, signals and other road furniture. The software is in a constant state of evolution and improvement: Google has identified many driver behaviors, including lane drifting and running red lights, that are leading causes of “significant collisions,” and it has programmed the car to adapt to them. These adaptations include having the car pause after a light turns green before moving through an intersection, to avoid red-light runners. The self-driving car also decelerates if another vehicle encroaches on a predetermined buffer zone, which a human driver might call the safety zone.
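To make those two defensive behaviors concrete, here is a minimal, purely illustrative sketch in Python. This is not Google’s actual code: the Vehicle class, the 1.5-second green-light pause and the 3-metre buffer zone are invented values, chosen only to show the rules described above in runnable form.

```python
# Illustrative sketch only; all names and thresholds here are hypothetical.
from dataclasses import dataclass

GREEN_LIGHT_PAUSE_S = 1.5   # assumed pause before entering an intersection
BUFFER_ZONE_M = 3.0         # assumed "safety zone" radius around the car

@dataclass
class Vehicle:
    distance_m: float  # gap to the nearest other vehicle, in metres

def proceed_on_green(seconds_since_green: float) -> bool:
    """Wait briefly after the light turns green to avoid red-light runners."""
    return seconds_since_green >= GREEN_LIGHT_PAUSE_S

def adjust_speed(current_speed: float, nearest: Vehicle) -> float:
    """Decelerate when another vehicle encroaches on the buffer zone."""
    if nearest.distance_m < BUFFER_ZONE_M:
        return current_speed * 0.8  # ease off rather than brake hard
    return current_speed

# Example: a car merging 2 m away triggers a gentle slowdown.
print(adjust_speed(15.0, Vehicle(distance_m=2.0)))  # 12.0
print(proceed_on_green(0.5))  # False: still within the pause window
```

The point of the sketch is simply that both behaviors are fixed, rule-based responses, which is exactly the “predetermined set of rules” character discussed below.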

Jake Fisher, director of automotive testing at Consumer Reports, has questioned whether these incidents could have been prevented if human drivers had been in control. The rationale behind this line of thinking is that human drivers have greater anticipation: they can read social cues from other drivers, for example whether other drivers have their eyes on the road or are gesturing. Google’s software has further to go before it can match the sophistication of a good human driver. However, whilst Fisher has picked holes in Google’s self-driving car for being too reactive and not adaptive enough, perhaps the wider issue is the mix of human and self-driving cars on the road. Human drivers are unpredictable, rash, prone to errors of judgement and to operating their cell phones whilst driving, whereas self-driving cars follow a predetermined set of rules. Perhaps the lawmakers are thinking about self-driving cars the wrong way around: once the software is up to speed, should human drivers be banned?