Friday, March 25, 2016

Whitewashed Sepulchers

I must admit that I find it amusing that Microsoft had to pull the plug on their Tay chatbot after it started spewing racist and fascist tweets.  Microsoft claims that the technology will help them learn to understand human interactions but, frankly, all these "Deep Learning" programs are turning out to be nothing more than really fancy search engines.  They have no understanding of, or context for, what they are taught, so when you feed them garbage they vomit garbage back at you.

On the one hand, the neural networks underlying most Deep Learning projects are black boxes - it is nearly impossible to figure out what the network is learning, or how.  On the other hand, the computer scientists working in the labs at Microsoft and Google should have the wisdom to know this and be cautious.  If only!

My point here, though, is how Microsoft rolled out their Tay 'bot. They claimed it was designed to mimic the informal, joking style of a teenager - a Millennial, as they call it.  Why would they make this claim?  Simple.  People expect a teenager to be rough around the edges, her jokes to be crude, and her conversation to be occasionally nonsensical.  One might say that this framing is necessary because the project is still in its infancy and will hit a lot of bumps before it matures, much like a girl crossing into young adulthood.

I suspect that this is hogwash.  What Microsoft's hedging covers up is the fact that their chatbot has no idea what it is saying - it merely strings together words and phrases that it has "read" on the internet. It is like a parrot, only not as intelligent.
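To make the parrot analogy concrete, here is a toy sketch of the word-stringing idea: a bigram Markov model that generates text purely from co-occurrence statistics, with zero model of meaning. (Tay's actual architecture is surely more elaborate; this is only an illustration of the principle, and the corpus and function names are mine.)

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def babble(follows, start, length=10):
    """String words together by blind lookup - no understanding involved."""
    out = [start]
    for _ in range(length - 1):
        nexts = follows.get(out[-1])
        if not nexts:
            break  # dead end: nothing ever followed this word
        out.append(random.choice(nexts))
    return " ".join(out)

corpus = "the bot reads the internet and the bot repeats the internet"
model = build_bigrams(corpus)
print(babble(model, "the"))
```

Feed it garbage and it babbles garbage back - it has no way to do anything else.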

We have seen this trick before:

  • Decades ago, a program called ELIZA could fool people into thinking it was sentient by merely feeding users' entries back to them in the form of a question.
  • The winner of a recent Turing Test competition, the chatbot "Eugene Goostman," convinced 33% of the judges that it was one of the humans participating in a set of texting sessions by "creating the persona of a 13-year-old from Ukraine (young, cheeky, not entirely English-speaking). The judges may have expected less intelligent conversation and thereby were more easily fooled."
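
ELIZA's trick can be sketched in a few lines. This is a hypothetical minimal version, not Weizenbaum's original (which used a much richer pattern-matching script): it simply swaps pronouns and reflects the user's statement back as a question.

```python
# A tiny subset of ELIZA-style pronoun swaps (my own toy table, not the real script)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement):
    """Turn 'I hate my job.' into 'Why do you say you hate your job?'"""
    words = statement.lower().strip().rstrip(".!").split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I hate my job."))  # -> Why do you say you hate your job?
```

There is no comprehension anywhere in that function - just string surgery - yet it was enough to convince some users they were talking to a therapist.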

Lowering expectations may work at covering up a robot's complete lack of comprehension in situations where human lives (not to mention money!) are not at stake, but it brings us no closer to a useful artificial intelligence. It is depressing that so much effort is being put into these projects, and that so many people who should know better are heralding them as steps forward.

Sunday, March 6, 2016

Slow Progress

So, a Google Car caused a little accident with a bus the other day.

In a Feb. 23 report filed with California regulators, Google said the crash took place in Mountain View on Feb. 14 when a self-driving Lexus RX450h sought to get around some sandbags in a wide lane. Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour. But three seconds later, as the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus, causing damage to the left front fender, front wheel and a driver side sensor. No one was injured in the car or on the bus.

No biggie. Accidents happen.

Then there was this:

Google said it has reviewed this incident "and thousands of variations on it in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future."

Okay, two thoughts...

1)  According to that last quote, Google engineers are apparently trying to give their car some knowledge of how bus drivers behave.  This "theory of mind" is laudable and considered among enlightened AI researchers to be necessary to achieve human-like levels of intelligence.  What bothers me is that they are making "refinements to their software."  This sounds like they have special code to predict how buses act, how pedestrians act, how bicycles act, and how pigeons act.  I am sure these engineers are the tops in the business, but this sounds like they are building a very large yet fragile model - kind of like their plan to use a super-detailed map of the entire world so their cars never meet anything unexpected.  Big Data rules all! But it smells kinda like a dead-end. Pardon the pun!

To those who point to human drivers as the problem, I have already explained my concerns over different implementations of self-driving software introducing many of the same problems posed by human drivers.  It's just that there must now be a "theory of mind" between autonomous vehicles from different manufacturers (or different versions of software from the same manufacturer).

2) The first quote points out that the car executed its maneuver at 2 miles per hour.  This reminds me of the time a G-car was "pulled over" for doing 25 in a 35 mph zone.  Apparently Google cars drive veeeerrrryyyy slooowwwwllllyyy.  Google likes to brag about how many millions of miles their cars have safely driven.  Are they all travelling at such freakishly low speeds?  What kind of test is that?  It sounds to me like the technology is so far from real-world readiness that it must be used only at extremely low speeds while it struggles to get around a sandbag.  Should a car moving at 2 miles per hour really have trouble avoiding a bus moving at 15?

While the technological advances required to get even this far have been remarkable, I have seen no indication that there will be real-world application. (Put a 25 mph governor on every car and see how safe the streets are.)