Monday, December 14, 2015

Apple meets Google - at Hollywood and Vine

Imagine the scenario:
A Google Self-Driving Car is driving along, integrating smoothly with all the other Google Cars, which are in turn behaving in a predictable fashion never achieved by mere human drivers. Everything is a Geek's paradise. The Google Car pulls up to a four-way stop. Thankfully, Google's engineers have solved the four-cars-at-an-intersection problem. Thousands of Google Cars have cleanly taken turns at four-way stops every day.

At this particular intersection something is different. Across from the Google Car is an iCar (manufactured by Apple) signalling to make a left turn.

The Google Car detects the other vehicle, runs through the driving rules devised to assure safe passage in this situation and starts into the intersection.

At the same time the iCar detects the Google Car, runs through the driving rules devised to assure safe passage in this situation and starts into the intersection, turning left.

The Google Car brakes to avoid the imminent collision. The iCar does likewise.

The Google Car starts forward.

The iCar starts forward.

Both cars brake.

The cars are at an impasse. Traffic backs up. Human technicians are called in.

Geek's paradise lost.

Some may quibble with the details here:  this or that technology might avoid this or the other part of the story.  My point is that the future scenarios in which self-driving cars rule the world seem to rely on the absence of human drivers. Humans are unpredictable.  It is easy to see why software developers want to remove them from the environment.

However, humans are not the only unpredictable element in your driving world.

Among the software elite, there is a movement toward what are being called microservices.  This represents the idea that if we can tightly define the data passed between two components (say, the program that takes your hamburger order and the one that debits your bank account), we avoid the vast majority of computer problems in the world.  The giant hole in this utopian vision is that the first component has to make all sorts of assumptions about what the second component is doing with this information.  Any small difference between the assumption and the actual behavior leads to nasty and difficult-to-find errors.
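To make the problem concrete, here is a minimal sketch of two such components. The service names and the dollars-versus-cents mismatch are hypothetical illustrations, not any real system: each side honors the agreed data format perfectly, yet each makes a silent assumption about units that the interface itself never pins down.

```python
# Hypothetical example: two services agree on the message shape
# ({"account": ..., "amount": ...}) but not on what "amount" means.

def order_service_charge(total_dollars: float) -> dict:
    """Builds a payment request. Silently assumes the amount is in dollars."""
    return {"account": "12345", "amount": total_dollars}

def billing_service_debit(request: dict) -> float:
    """Debits the account. Silently assumes the amount arrived in cents."""
    return request["amount"] / 100  # "converts" cents to dollars

# Each service passes its own unit tests, because each test bakes in
# the same unit assumption its service makes. The bug only appears
# when the two are wired together:
request = order_service_charge(9.99)      # customer owes $9.99
debited = billing_service_debit(request)  # bank is debited $0.0999
```

Both components are individually "correct" and fully tested; the error lives entirely in the gap between their assumptions.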

Normally, the programmer who creates the component throws all sorts of tests at it, even making sure that every line of the program is exercised at least once.  Unfortunately, those tests all incorporate the same assumptions that will cause errors out in the real world.  In our Google Car example, we can assume that Google has tested their driving computers in virtual and real-world situations using other self-driving cars (remember, humans have been banned) - mostly other Google Cars. Even the non-Google test vehicles will make assumptions about what the Tesla or iCar would do in a given situation.

The result of all this testing will be a smooth, hands-off transportation system.  Why? Because all the glaring problems will be found and corrected... and many of the biases/assumptions of one Google Car fit perfectly into what the other Google Car is doing, so that nothing unpredictable happens.

What happens when an iCar shows up that has been tested in exactly the same manner - with certain Apple programming biases fitting together so that nothing unpredictable ever happened?  In our example above, subtle differences in both cars' programming convinced each that it had the right of way.  This never happened during testing at Apple because the iCars are programmed with the same invisible assumptions.  Now think about rules for changing lanes.  Pulling out of a driveway.  Setting the speed for passing another vehicle.  Keeping a safe distance between vehicles.  The opportunities for mismatch are legion.
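The intersection standoff from the opening story can be sketched as a toy simulation. Both policies below are hypothetical stand-ins (real autonomous-driving stacks are vastly more sophisticated); the point is that two rules, each perfectly safe against copies of itself, can livelock against each other.

```python
# Hypothetical policies: each car proceeds unless it sees the other
# car already moving, then yields. Against its own kind (which staggers
# turn-taking) this works; against a mirror-image rival it oscillates.

def google_policy(other_moving: bool) -> str:
    # Tuned and tested mostly against other Google Cars.
    return "brake" if other_moving else "go"

def icar_policy(other_moving: bool) -> str:
    # Tuned and tested mostly against other iCars.
    return "brake" if other_moving else "go"

# Simulate simultaneous decision ticks at the four-way stop.
google_moving, icar_moving = False, False
history = []
for _ in range(4):
    g = google_policy(icar_moving)
    i = icar_policy(google_moving)
    google_moving, icar_moving = (g == "go"), (i == "go")
    history.append((g, i))

# history: go/go, brake/brake, go/go, brake/brake - neither car
# ever clears the intersection. Human technicians are called in.
```

Each policy would sail through its maker's test suite, because the test fleet never contains a car running the *other* policy.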

I am not criticizing the engineers and programmers involved.  They have taken on a daunting task.  I just don't like hearing excuses about the only problem being human drivers when in reality the entire enterprise is riddled with potential problems that are normal and predictable in engineering complex systems.  Not addressing this up front is a sign of the kind of hubris that has become a little too common in my industry.

Don't Download Cars!

As a follow-up to my ruminations about software executives getting into the car business, this...

But don’t call the Model S an autonomous car—it’s not quite there yet, though Musk says his vision is to eventually produce fully driverless cars without steering wheels or pedals. Instead, Tesla is billing the new capabilities as “autopilot” features that will occasionally require hands on the steering wheel.
“We explicitly describe [this software update] as a beta,” Musk said at a press briefing today (Oct. 14) in California. 

Never, NEVER, NEVER load Beta software onto a freaking CAR! Beta code is by definition not ready for wide release. You can't expect normal drivers to treat your Beta as a test. This is so mind-numbingly stupid that I am considering burning my Elon Musk fanboy card.