Friday, April 29, 2016

In Which I Pursue My Unhealthy Skepticism of the Current Outbreak of AI Enthusiasm.

While recently considering all things neural network-y and deep learning-y, I found myself reflecting on the stink surrounding the success (or lack thereof) of recent image recognition efforts founded on neural networks.

It seems that simply adding certain kinds of noise to an image, such that a human cannot perceive the difference (i.e. the human can still easily see that it is a picture of a panda), will completely flummox a neural network that could otherwise pick out the critter lurking in the image.  To me this sounded a bit like an overfitted solution - the program was focusing on something other than the desired trait, maybe the overall level of contrast or subtle differences in color.  Now the apologists cried, "Oh yeah?  Well humans fall prey to optical illusions, too!"  Indeed we do, but optical illusions are well understood, while figuring out what a neural network is "thinking" remains a challenge.
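For the curious, here is a toy sketch of the flavor of trick being described (NumPy only; the weights, the "image" vector and the epsilon are all made up for illustration - this is not the actual panda experiment). A tiny perturbation aimed along the directions the model is sensitive to moves its score far more than random noise of the same size ever would.

```python
# Toy sketch of an "adversarial" perturbation on a linear scorer.
# The weights, image, and epsilon are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)        # stand-in for a trained classifier's weights
x = rng.normal(size=1000)        # stand-in for an image, flattened to a vector
eps = 0.05                       # per-"pixel" change too small to notice

# Structured noise: push every pixel in the direction the classifier is most
# sensitive to.  Random noise of the same magnitude barely moves the score.
adversarial = eps * np.sign(w)
random_noise = eps * rng.choice([-1.0, 1.0], size=1000)

print("clean score:      ", w @ x)
print("adversarial shift:", w @ adversarial)   # roughly eps * sum(|w|): huge
print("random shift:     ", w @ random_noise)  # roughly zero on average
```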

This story sent me searching for information on overfitting in neural networks and, lo, it is a major problem for NNs. We are safe, however - there are techniques to prevent overfitting.  But when I learned what these techniques were, they started sounding familiar.  You see, I did my thesis on Genetic Algorithms.  GAs are a set of search algorithms inspired by genetic evolution.   Like other search algorithms (Simulated Annealing, Hill Climbing), Genetic Algorithms can go astray, for instance converging on a so-so solution and failing to search more widely for better ones.

Here's the rub: the techniques used to prevent Genetic Algorithms from failing are exactly analogous to those used for Neural Networks. It is widely understood among information scientists that the three algorithms mentioned above (GA, SA and HC) are members of a single class of similar searches. Now, if the cool kids' Deep Learning must be tuned using the same methods and parameters as the other algorithms in this class, shouldn't we assume that Deep Learning is in the class as well, no matter how well it hides the fact?
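To make the parallel concrete, here is a sketch of one countermeasure both camps lean on: stop when performance on held-out data stops improving. The quadratic toy objective, the hill-climbing step and the patience value are all invented for illustration; the point is that the loop reads the same whether the inner step is a GA generation or a training epoch.

```python
# Early stopping against held-out data: a shared defense against overfitting
# (NNs) and premature convergence to noise (GAs).  Toy objective and numbers
# are made up for illustration.
import random

def fitness(candidate, data):
    # Toy objective: how well a single number fits some noisy observations.
    return -sum((candidate - d) ** 2 for d in data)

def step(candidate, train):
    # Stand-in for one generation of mutation/selection or one training epoch:
    # keep a small random tweak only if it looks better on the training data.
    tweak = candidate + random.gauss(0, 0.1)
    return tweak if fitness(tweak, train) > fitness(candidate, train) else candidate

random.seed(1)
train = [random.gauss(1.0, 0.5) for _ in range(20)]
holdout = [random.gauss(1.0, 0.5) for _ in range(20)]

candidate, best, patience = 0.0, float("-inf"), 0
for generation in range(1000):
    candidate = step(candidate, train)
    score = fitness(candidate, holdout)    # judge on data never used to improve
    if score > best:
        best, patience = score, 0
    else:
        patience += 1
    if patience >= 50:                     # no held-out progress: stop before
        break                              # the candidate memorizes the noise

print(generation, candidate, best)
```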

No one ever claimed that Genetic Algorithms were intelligent in any way.  So why should I believe that Deep Learning is anything more than a successful probabilistic search algorithm with big-money backers?  I can safely remain a Good Old Fashioned AI curmudgeon and treat Deep Learning as "another toy application"* of Neural Networks.

* - actual comment from a reviewer of a paper I submitted based on my GA thesis work :)

Update 11/22/2016: Apparently I was mistaken... Neural Networks are function approximators, which are different from search algorithms but similar in many ways.  My main points still stand.


Friday, March 25, 2016

Whitewashed Sepulchers

I must admit that I find it amusing that Microsoft had to pull the plug on their Tay ChatBot after it started spewing racist and fascist tweets.  Microsoft claims that the technology will help them learn to understand human interactions but, frankly, all these "Deep Learning" programs are turning out to be nothing more than really fancy search engines.  They have no understanding of, or context for, what they are taught, so when you feed them garbage they vomit garbage back at you.

On the one hand, the neural networks underlying most Deep Learning projects are black boxes - it is nearly impossible to figure out what the network is learning and how.  On the other hand, the computer scientists working in the labs at Microsoft and Google should have the wisdom to know this and be cautious.  If only!

My point here, though, is how Microsoft rolled out their Tay 'bot. They claimed it was designed to mimic the informal and joking style of a teenager - a Millennial, as they call it.  Why would they make this claim?  Simple.  People expect a teenager to be rough around the edges, her jokes to be crude and her conversation to be occasionally nonsensical.  One might say that this is necessary because the project is still in its infancy and will hit a lot of bumps before it matures, much like a girl crossing into young adulthood.

I suspect that this is hogwash.  What Microsoft's hedging is covering up is the fact that their chatbot has no idea what it is saying - it merely strings together words and phrases that it has "read" on the internet. It is like a parrot, only not as intelligent.
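To illustrate the kind of parroting I mean (and only that - I obviously have no access to Tay's internals, which are surely fancier), here is a toy word-chaining generator over a made-up corpus. It produces plausible-looking strings with zero comprehension, which is exactly why garbage in means garbage out.

```python
# A word-chaining "parrot": plausible-looking sentences, zero comprehension.
# The corpus is made up; Tay's actual pipeline is not shown here.
import random
from collections import defaultdict

corpus = "the bot reads the internet and the internet reads the bot back".split()

# Record which words follow which, nothing more.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(2)
word, output = "the", ["the"]
for _ in range(10):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))   # grammatical-ish, meaning-free
```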

We have seen this trick before:

  • Decades ago, a program called ELIZA could fool people into thinking it was sentient merely by feeding users' entries back to them in the form of a question (a bare-bones sketch of the trick follows this list).
  • The winner of a recent Turing Test competition convinced 33% of the judges that it was one of the humans participating in a set of texting sessions by "creating the persona of a 13-year-old from Ukraine (young, cheeky, not entirely English-speaking). The judges may have expected less intelligent conversation and thereby were more easily fooled."
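For reference, the core of ELIZA's trick really is that small. Here is a bare-bones sketch; the real program used a much larger script of patterns, but the reflect-and-ask-back move is the heart of it.

```python
# Minimal ELIZA-style reflection: echo the user's own words back as a question.
# The real ELIZA used a much larger script of patterns; this is just the core trick.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def respond(user_text: str) -> str:
    words = [REFLECTIONS.get(w, w) for w in re.findall(r"[a-z']+", user_text.lower())]
    return "Why do you say " + " ".join(words) + "?"

print(respond("I am unhappy with my chatbot"))
# -> Why do you say you are unhappy with your chatbot?
```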

Lowering expectations may work at covering up a robot's complete lack of comprehension in situations where human lives are not at stake (not to mention money!), but it brings us no closer to a useful artificial intelligence. It is depressing that so much effort is being put into these projects and that so many people who should know better are heralding them as steps forward.

Sunday, March 6, 2016

Slow Progress

So, a Google Car caused a little accident with a bus the other day.

In a Feb. 23 report filed with California regulators, Google said the crash took place in Mountain View on Feb. 14 when a self-driving Lexus RX450h sought to get around some sandbags in a wide lane. Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour. But three seconds later, as the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus, causing damage to the left front fender, front wheel and a driver side sensor. No one was injured in the car or on the bus.

No biggie. Accidents happen.

Then there was this:

Google said it has reviewed this incident "and thousands of variations on it in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future."
Okay, two thoughts...

1) According to that last quote, Google engineers are apparently trying to give their car some knowledge of how bus drivers behave.  This "theory of mind" is laudable and considered among enlightened AI researchers to be necessary to achieve human-like levels of intelligence.  What bothers me is that they are doing it by making "refinements to our software."  This sounds like they have special code to predict how buses act, how pedestrians act, how bicycles act and how pigeons act (a caricature of what I mean is sketched below).  I am sure these engineers are the tops in the business, but it sounds like they are building a very large, yet fragile, model - kind of like their plan to use a super-detailed map of the entire world so their cars never meet anything unexpected.  Big Data rules all! But it smells kinda like a dead end. Pardon the pun!

To those who point to human drivers as the real problem, I have already explained my concerns that different implementations of self-driving software will introduce many of the same problems posed by human drivers.  It's just that there must now be a "theory of mind" between autonomous vehicles from different manufacturers (or between versions of software from the same manufacturer).
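To be clear about what worries me in point 1 - and this is pure speculation on my part, I have not seen a line of Google's code - the fragile pattern would look something like a hand-maintained table of special cases that has to grow a new row for every kind of road user the car might meet.

```python
# Pure speculation, not Google's actual code: a hand-coded table of
# per-road-user behavior that must keep growing to cover every surprise.
YIELD_PROBABILITY = {
    "bus": 0.2,        # "buses are less likely to yield to us"
    "truck": 0.3,
    "car": 0.6,
    "bicycle": 0.7,
    "pedestrian": 0.9,
    # ...pigeons, scooters, horse carts, the next surprise...
}

def should_merge(other_vehicle: str, gap_seconds: float) -> bool:
    """Merge only if the other party will probably yield or the gap is generous."""
    p_yield = YIELD_PROBABILITY.get(other_vehicle, 0.5)   # default for the unknown
    return p_yield > 0.5 or gap_seconds > 4.0

print(should_merge("bus", 3.0))   # False: hang back for the bus
print(should_merge("car", 3.0))   # True: assume the car will yield
```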

2) The first quote points out that the car executed its maneuver at less than 2 miles per hour.  This reminds me of the time a G-car was "pulled over" for doing 25 in a 35 mph zone.  Apparently Google cars drive veeeerrrryyyy slooowwwwllllyyy.  Google likes to brag about how many millions of miles their cars have safely driven.  Are they all travelling at such freakishly low speeds?  What kind of test is that?  It sounds to me like the technology is so far from real-world use that it can only be trusted at extremely low speeds, and even then it struggles to get around a sandbag.  Should a car moving at 2 miles per hour really have trouble avoiding a bus moving at 15?

While the technological advances required to get even this far have been remarkable, I have seen no indication that there will be a real-world application. (Put a 25 mph governor on every car and see how safe the streets are.)

Wednesday, January 13, 2016

QOD - Software Development Methodologies

Waterfall replicates the social model of a dysfunctional organization with a defined hierarchy...
...Agile, then, replicates the social model of a dysfunctional organization without a well-defined hierarchy.

Michael O Church
https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/