Friday, April 29, 2016

In Which I Pursue My Unhealthy Skepticism of the Current Outbreak of AI Enthusiasm.

While recently considering all things neural network-y and deep learning-y, I found myself reflecting on the stink surrounding the success (or lack thereof) of recent image recognition efforts founded on neural networks.

It seems that simply adding certain kinds of noise to an image, such that a human cannot perceive any difference (i.e., the human still easily sees that it is a picture of a panda), will completely flummox a neural network that could otherwise pick out the critter lurking in the image.  To me this sounded a bit like an overfitted solution: the program was keying on something other than the desired trait, maybe the overall level of contrast or subtle differences in color.  Now the apologists cried, "Oh yeah?  Well humans fall prey to optical illusions, too!"  Indeed we do, but those illusions are reasonably well understood, while figuring out what a neural network is "thinking" remains a challenge.
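
For the curious, the recipe behind that panda trick is surprisingly simple.  Below is a minimal sketch of the "fast gradient sign method" from the adversarial examples literature; the stand-in classifier, the placeholder image, and the epsilon value are all my own illustrative assumptions, not details of any real system.

    # Fast gradient sign method sketch (PyTorch).  Everything here is a placeholder.
    import torch
    import torch.nn.functional as F

    # Stand-in classifier; in the panda story this would be a large pretrained network.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

    def fgsm(image, true_label, epsilon=0.01):
        """Nudge each pixel a tiny step in the direction that most increases the loss."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Per-pixel noise of size epsilon: invisible to us, ruinous to the classifier.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    panda = torch.rand(1, 3, 32, 32)   # placeholder image batch
    label = torch.tensor([3])          # placeholder "panda" class index
    adversarial_panda = fgsm(panda, label)

The point is how little machinery it takes: on a real trained network, one gradient-guided nudge per pixel is often enough to change the network's answer.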

This story sent me searching for information on overfitting in neural networks and, lo, it turns out to be a major problem for NNs.  We are safe, however: there are techniques to prevent overfitting.  But as I learned what those techniques are, they started sounding familiar.  You see, I did my thesis on Genetic Algorithms.  GAs are a family of search algorithms inspired by biological evolution.  Like other search algorithms (Simulated Annealing, Hill Climbing), Genetic Algorithms can go astray, for instance converging on a so-so solution and failing to search more widely for better ones.
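
For the uninitiated, here is a toy GA on a made-up bit-string problem, with made-up parameter values, just to show where the knobs are.  Turn the mutation rate down too far and the population happily converges on a mediocre answer; turn it up too far and the search never settles on anything.

    import random

    TARGET = [1] * 30                     # toy problem: evolve a string of all ones

    def fitness(bits):
        return sum(b == t for b, t in zip(bits, TARGET))

    def evolve(pop_size=50, generations=100, mutation_rate=0.02):
        population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half as parents.
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(len(TARGET))
                child = a[:cut] + b[cut:]  # crossover
                # Mutation is the knob that keeps the search from settling too early.
                child = [1 - bit if random.random() < mutation_rate else bit
                         for bit in child]
                children.append(child)
            population = children
        return max(population, key=fitness)

    print(fitness(evolve()))   # usually 30, i.e. a perfect match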

Here's the rub: the techniques used to keep Genetic Algorithms from failing are exactly analogous to those used for Neural Networks.  It is widely understood among information scientists that the three algorithms mentioned above (GA, SA, and HC) are members of a single class of similar searches.  Now, if the cool kids' Deep Learning must be tuned using the same methods and parameters as the other algorithms in this class, shouldn't we assume that Deep Learning belongs to that class as well, no matter how well it hides the fact?
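
To make the rub concrete, here is a bog-standard NN training loop with the usual anti-overfitting machinery bolted on, each piece commented with the GA tuning knob I would pair it with.  The pairing is my own loose analogy, and the network, data, and parameter values are all placeholders.

    import torch
    import torch.nn as nn

    # Stand-in network and data; the sizes and values are arbitrary.
    net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                        nn.Dropout(p=0.5),    # injected noise, much like a GA's mutation rate
                        nn.Linear(64, 2))
    opt = torch.optim.SGD(net.parameters(), lr=0.1,
                          weight_decay=1e-4)  # complexity penalty, like pressure against bloated genomes

    X, y = torch.randn(200, 20), torch.randint(0, 2, (200,))
    X_val, y_val = torch.randn(50, 20), torch.randint(0, 2, (50,))
    best_val, patience = float("inf"), 10

    for epoch in range(1000):
        net.train()
        opt.zero_grad()
        nn.functional.cross_entropy(net(X), y).backward()
        opt.step()

        net.eval()
        with torch.no_grad():
            val = nn.functional.cross_entropy(net(X_val), y_val).item()
        # Early stopping: quit when held-out error stops improving,
        # much like capping generations before a GA population over-specializes.
        if val < best_val:
            best_val, patience = val, 10
        else:
            patience -= 1
            if patience == 0:
                break

Different vocabulary, same job in every case: keep the search exploring instead of memorizing.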

No one ever claimed that Genetic Algorithms were intelligent in any way.  So why should I believe that Deep Learning is anything more than a successful probabilistic search algorithm with big-money backers?  I can safely remain a Good Old Fashioned AI curmudgeon and treat Deep Learning as "another toy application"* of Neural Networks.

* - an actual comment from a reviewer of a paper I submitted based on my GA thesis work :)

Update 11/22/2016: Apparently I was mistaken... Neural Networks are Function Approximators, which are different from Search Algorithms but similar in many ways.  My main points still stand.