In a Feb. 23 report filed with California regulators, Google said the crash took place in Mountain View on Feb. 14 when a self-driving Lexus RX450h sought to get around some sandbags in a wide lane. Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour. But three seconds later, as the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus, causing damage to the left front fender, front wheel and a driver side sensor. No one was injured in the car or on the bus.
No biggie. Accidents happen.
Then there was this:
Google said it has reviewed this incident "and thousands of variations on it in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future."

Okay, two thoughts...
1) According to that last quote, Google engineers are apparently trying to give their car some knowledge of how bus drivers behave. This "theory of mind" is laudable and considered among enlightened AI researchers to be necessary for achieving human-like levels of intelligence. What bothers me is that they are making "refinements" to their software. This sounds like they have special code to predict how buses act, how pedestrians act, how bicycles act and how pigeons act. I am sure these engineers are the tops in the business, but this sounds like they are building a very large yet fragile model, kind of like their plan to use a super-detailed map of the entire world so their cars never meet anything unexpected. Big Data rules all! But it smells kinda like a dead-end. Pardon the pun!
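To make the worry concrete, here is a hypothetical sketch, entirely my own invention and not anything from Google's actual stack, of what per-class behavior rules might look like:

    # Hypothetical sketch of per-class behavior rules. None of this is
    # Google's actual code; it illustrates why hand-tuned special cases
    # for buses, bicycles, pigeons, etc. make for a fragile model.
    YIELD_PROBABILITY = {
        "car": 0.9,         # most drivers let a merging vehicle in
        "bus": 0.4,         # the post-crash "refinement": buses yield less
        "bicycle": 0.7,
        "pedestrian": 0.95,
        # ...every new road user needs another hand-tuned entry
    }

    def expect_yield(road_user: str) -> bool:
        """Predict whether another road user will yield to us."""
        # Unknown types fall back to a coin flip -- exactly the kind of
        # gap that bites when the world serves up something not in the map.
        return YIELD_PROBABILITY.get(road_user, 0.5) > 0.5

Every crash teaches the table one more row, but the world never runs out of rows for the table to need.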
To those who point to human drivers as the problem, I have already explained my concerns that different implementations of self-driving software will reintroduce many of the same problems posed by human drivers. The difference is that there must now be a "theory of mind" between autonomous vehicles from different manufacturers (or between versions of software from the same manufacturer).
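Here is another invented sketch of that cross-manufacturer problem: two vendors ship different yield thresholds, and each car predicts the other by assuming it runs the same software. The vendor names and numbers are made up; the misprediction is the point.

    # Hypothetical: each vendor's cars brake for gaps tighter than their own
    # threshold, and each predicts other cars using its own rule book.
    def vendor_a_will_yield(gap_seconds: float) -> bool:
        return gap_seconds < 3.0   # cautious: brakes for anything under 3 s

    def vendor_b_will_yield(gap_seconds: float) -> bool:
        return gap_seconds < 1.5   # aggressive: brakes only for tight gaps

    gap_seconds = 2.0
    a_predicts_b_brakes = vendor_a_will_yield(gap_seconds)  # True -- wrong
    b_actually_brakes = vendor_b_will_yield(gap_seconds)    # False
    # Car A merges expecting car B to brake, and car B does not.
    print(a_predicts_b_brakes, b_actually_brakes)           # True False

That is the bus crash again, with no human in sight.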
2) The first quote points out that the car executed its maneuver at 2 miles per hour. This reminds me of the time a G-car was "pulled over" for doing 25 in a 35 mph zone. Apparently Google cars drive veeeerrrryyyy slooowwwwllllyyy. Google likes to brag about how many millions of miles its cars have safely driven. Are they all traveling at such freakishly low speeds? What kind of test is that? It sounds to me like the technology is so far from real-world use that it can be deployed only at extremely low speeds, and even then it struggles to get around a sandbag. Should a car moving at 2 miles per hour really have trouble avoiding a bus moving at 15?
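The arithmetic makes the question sharper. Using the figures straight from the report:

    # Back-of-the-envelope numbers from the filing: a 2 mph car, a 15 mph
    # bus, and three seconds between starting the merge and contact.
    MPH_TO_FPS = 5280 / 3600            # 1 mph is about 1.47 ft/s

    car_ft = 2 * MPH_TO_FPS * 3         # car creeps ~8.8 ft in 3 seconds
    bus_ft = 15 * MPH_TO_FPS * 3        # bus closes ~66 ft in 3 seconds
    print(round(car_ft, 1), round(bus_ft, 1))   # 8.8 66.0

In the three seconds before contact, the car crept less than nine feet while the bus closed sixty-six. At those speeds the software had an eternity, in sensor terms, to notice the bus was not yielding.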
While the technological advances required to get even this far have been remarkable, I have seen no indication that a real-world application is coming. (Put a 25 mph governor on every car and see how safe the streets are.)