Consumer Watchdog today called on the National Highway Traffic Safety Administration to require, in the guidelines it is developing on automated vehicle technology, that self-driving robot cars have a steering wheel, brake and accelerator so a human driver can take control when safety demands it.
To dramatize the point, Consumer Watchdog’s John M. Simpson gave Chris Urmson, Chief Technology Officer of Google’s self-driving car project, a steering wheel. Simpson and a Google representative spoke at a NHTSA public meeting today about automated vehicle technology. Simpson, the nonprofit, nonpartisan public interest group’s Privacy Project Director, also presented Google with ten questions about its self-driving car project.
“Deploying a vehicle today without a steering wheel, brake, accelerator and a human driver capable of intervening when something goes wrong is not merely foolhardy. It is dangerous,” said Simpson.
Google wants to deploy a self-driving robot car with no steering wheel or brake, leaving a human driver no way to take control.
NHTSA’s meeting came the day after the announcement of a new lobbying group, the Self-Driving Coalition for Safer Streets, whose members include Google, Lyft, Uber, Ford and Volvo.
“If these manufacturers genuinely cared about safer streets, they would be transparent about what they’re doing on our public roads rather than pushing self-serving laws and regulations,” said Simpson. “When something goes wrong, the technical details should be released to the public. It’s not happening.”
He noted that a Google robot car crashed into a bus on Valentine’s Day. Video recorded on the bus by the transit company was released to the public. Google says it has no plans to release its video or technical data.
Read Simpson’s comments to NHTSA here: http://www.consumerwatchdog.org/resources/nhtsatestimony042716.pdf
The need to require a driver behind the wheel is obvious from a review of the results reported by the seven companies that have been testing self-driving cars in California since September 2014, Consumer Watchdog said.
Under California’s self-driving car testing requirements, these companies were required to file “disengagement reports” explaining when a test driver had to take control. The reports show that the cars are not always capable of “seeing” pedestrians and cyclists, traffic lights, low-hanging branches, or the proximity of parked cars, suggesting too great a risk of serious accidents involving pedestrians and other cars. The cars also are not capable of reacting to reckless behavior of others on the road quickly enough to avoid the consequences, the reports showed.
“Google, which logged 424,331 ‘self-driving’ miles over the 15-month reporting period, said a human driver took over 341 times, an average of 22.7 times a month,” Simpson said. “The robot car technology failed 272 times and ceded control to the human driver; the driver felt compelled to intervene and take control 69 times.”
“What the disengagement reports show is that there are many everyday, routine traffic situations with which the self-driving robot cars simply can’t cope,” said Simpson. “It’s imperative that a human be behind the wheel, capable of taking control when necessary. Self-driving robot cars simply aren’t ready to manage many routine traffic situations safely without human intervention.”
Questions for Google
1. We understand the self-driving car cannot currently handle many common occurrences on the road, including heavy rain or snow, hand signals from a traffic cop, or communicative gestures from other drivers. Will Google publish a complete list of real-life situations the cars cannot yet understand, and how you intend to deal with them?
2. What does Google envision happening if the computer “driver” suddenly goes offline with a passenger in the car, if the car has no steering wheel or pedals and the passenger cannot steer or stop the vehicle?
3. Your programmers will literally make life-and-death decisions as they write the vehicles’ algorithms. Will Google agree to publish its software algorithms, including how the company’s “artificial car intelligence” will be programmed to decide what happens in the event of a potential collision? For instance, will your robot car prioritize the safety of the occupants of the vehicle or of the pedestrians it encounters?
4. Will Google publish all video from the car and technical data such as radar and lidar reports associated with accidents or other anomalous situations? If not, why not?
5. Will Google publish all data in its possession that discusses, or makes projections concerning, the safety of driverless vehicles?
6. Do you expect one of your robot cars to be involved in a fatal crash? If your robot car causes the crash, how would you be held accountable?
7. How will Google prove that self-driving cars are safer than today’s vehicles?
8. Will Google agree not to store, market, sell, or transfer the data gathered by the self-driving car, or utilize it for any purpose other than navigating the vehicle?
9. NHTSA’s performance standards are actually designed to promote new life-saving technology. Why is Google trying to circumvent them? Will Google provide all data in its possession concerning the length of time required to comply with the current NHTSA safety process?
10. Does Google have the technology to prevent malicious hackers from seizing control of a driverless vehicle or any of its systems?
Simpson’s comments to NHTSA concluded:
“NHTSA officials have repeatedly said safety is the agency’s top priority. You must not allow your judgment to be swayed by rosy, self-serving statements from companies like Google about the capabilities of their self-driving robot cars. NHTSA has said that autonomous vehicle technology is an area of rapid change that requires you to remain ‘flexible and adaptable.’ Please ensure that flexibility does not cause you to lose sight of the need to put safety first. Innovation will thrive hand-in-hand with thoughtful, deliberate regulation. Your guidance for the states on autonomous vehicles must continue to require a human driver who can intervene with a steering wheel, brake and accelerator when necessary.”
Question 11: Will Google and the other coalition members install sensors inside autonomous vehicles to detect hazardous materials and, once detected, disable certain autonomous features? If not, are they prepared to explain why they didn’t install such sensors after an incident happens and one of their autonomous vehicles is used as a weapon of destruction?
Question 12: Does this make the autonomous vehicle industry liable for negligence if something does happen, especially since it is aware of the potential for such an incident?
The coalition has so much money that its members think they can do whatever they want, even endanger people’s lives. There are a lot of “what ifs” that have not been answered.
Personally, I don’t want to be the first person to die in a collision with a self-driving car because the car wasn’t paying attention. Geez, most humans would not hit a giant bus and assume it would yield.