Google’s self-driving cars aren’t perfect, but they’re still better than human drivers. The company has revealed that its human drivers had to take control of its robotic vehicles 341 times in the last 14 months, across a total of 424,999 miles (682,361km) of driving.
As part of the agreement to test self-driving cars in California, each company involved — so far Bosch, Delphi, Google, Nissan, Mercedes-Benz, Tesla, and Volkswagen Group — was asked to submit reports of “disengagement” incidents, where a human had to take control of the car. Disengagements can happen when a human driver decides to take control, or when the car tells the human it is switching to manual mode.
The cars in California experienced 272 failures. If humans hadn’t taken over driving, the vehicles would have crashed 13 times, the company said.
So no, self-driving cars aren’t perfect yet. But the number of times humans are needed is decreasing. The available reports show Google had the most disengagements, with 341, although it also drove the most miles. Nissan’s autonomous vehicles drove 1,485 miles and disengaged 106 times, Delphi drove 16,662 miles with 405 human takeovers, and Tesla didn’t report any, according to Ars Technica.
However, Google said it is only reporting the ‘significant’ occasions when humans were forced to take over, as in reality there are “many thousands” of occasions where drivers take control.
“Safety is our highest priority and Google test drivers are trained to take manual control in a multitude of situations, not only when safe operation ‘requires’ that they do so,” the company said in its report, which covers September 2014 to November 2015.
The majority of Google’s disengagements came after the car detected a technology problem within itself, ranging from sensor malfunctions to software faults. On 23 different occasions humans had to take over because of “a recklessly behaving road user”, and eight times they had to “disengage for incorrect behaviour prediction of other traffic participants”.
“Disengagements are a critical part of the testing process that allows our engineers to expand the software’s capabilities and identify areas of improvement,” Google said.
“Our objective is not to minimise disengagements; rather, it is to gather, while operating safely, as much data as possible to enable us to improve our self-driving system.”
The performance of the cars demonstrates that we’re not quite at the point where motorists can watch Netflix or read a newspaper on the way to work. In September, Google’s self-driving cars were foiled by a ‘laser’ being shone at them, which confused their lidar sensors. The cars are also not immune to overzealous law enforcement.
As California works out the details of its self-driving car regulations — Google said it is “disappointed” by the plans announced so far — it will have to consider how much human override capability autonomous vehicles need. For the time being at least, disengagements (and human beings) will still be required.