Friday, March 25, 2016

Whitewashed Sepulchers

I must admit that I find it amusing that Microsoft had to pull the plug on their Tay chatbot after it started spewing racist and fascist tweets.  Microsoft claims that the technology will help them learn to understand human interaction but, frankly, all these "Deep Learning" programs are turning out to be nothing more than really fancy search engines.  They have no understanding of, and no context for, what they are taught, so when you feed them garbage they vomit garbage back at you.

On the one hand, the neural networks underlying most Deep Learning projects are black boxes - it is nearly impossible to figure out what the network is learning, or how.  On the other hand, the computer scientists working in the labs at Microsoft and Google should have the wisdom to know this and be cautious.  If only!

My point here, though, is how Microsoft rolled out their Tay 'bot.  They claimed it was designed to mimic the informal and joking style of a teenager - a Millennial, as they call it.  Why would they make this claim?  Simple.  People expect a teenager to be rough around the edges, her jokes to be crude, and her conversation to be occasionally nonsensical.  One might say that this is necessary because the project is still in its infancy and will hit a lot of bumps before it matures, much like a girl crossing into young adulthood.

I suspect that this is hogwash.  What Microsoft's hedging covers up is the fact that their chatbot has no idea what it is saying - it merely strings together words and phrases that it has "read" on the internet.  It is like a parrot, only not as intelligent.
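
To see what I mean by stringing words together, here is a toy sketch - my own illustration in Python, emphatically not Microsoft's actual architecture - of a Markov-chain babbler that "learns" from whatever text it is fed and emits statistically plausible strings with zero comprehension:

    import random
    from collections import defaultdict

    def train(text):
        """Record which words followed each word in the training text."""
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def babble(chain, length=12):
        """String words together by walking the chain; no understanding involved."""
        word = random.choice(list(chain))
        output = [word]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    # Garbage in, garbage out: the output mimics whatever corpus it was fed.
    corpus = "feed it garbage and it vomits garbage back at you"
    print(babble(train(corpus)))

Feed it Shakespeare and it babbles pseudo-Shakespeare; feed it the worst of Twitter and it babbles exactly what Tay babbled.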

We have seen this trick before:

  • Decades ago, a program called ELIZA could fool people into thinking it was sentient by merely feeding users' entries back to them in the form of a question (see the sketch after this list).
  • The winner of a recent Turing Test competition convinced 33% of the judges that it was one of the humans participating in a set of texting sessions by "creating the persona of a 13-year-old from Ukraine (young, cheeky, not entirely English-speaking). The judges may have expected less intelligent conversation and thereby were more easily fooled."
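
The ELIZA trick, for the curious, fits in a dozen lines.  This is my own toy reconstruction, not Weizenbaum's original code: it swaps the pronouns in the user's entry and wraps it back up as a question.

    import re

    # Swap first- and second-person pronouns so the entry can be echoed back.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my", "are": "am"}

    def reflect(sentence):
        words = sentence.lower().strip(".!?").split()
        return " ".join(REFLECTIONS.get(w, w) for w in words)

    def respond(entry):
        # Match a stock pattern, then feed the user's own words back as a question.
        match = re.match(r"i feel (.*)", entry, re.IGNORECASE)
        if match:
            return "Why do you feel " + reflect(match.group(1)) + "?"
        return "Why do you say " + reflect(entry) + "?"

    print(respond("I feel sad about my job"))  # Why do you feel sad about your job?
    print(respond("You never listen to me"))   # Why do you say I never listen to you?

There is no comprehension anywhere in there, and yet people poured their hearts out to it.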

Lowering expectations may work at covering up a robot's complete lack of comprehension in situations where human lives (not to mention money!) are not at stake, but it brings us no closer to a useful artificial intelligence.  It is depressing that so much effort is being put into these projects and that so many people who should know better are heralding them as steps forward.
