Friday, April 29, 2016

In Which I Pursue My Unhealthy Skepticism of the Current Outbreak of AI Enthusiasm.

While recently considering all things neural network-y and deep learning-y, I found myself reflecting on the stink surrounding the success (or lack thereof) of recent image-recognition efforts built on neural networks.

It seems that simply adding certain kinds of noise to an image, such that a human cannot perceive the difference (i.e. the human can still easily see that it is a picture of a panda), will completely flummox a neural network that could otherwise pick out the critter lurking in the image.  To me this sounded a bit like an overfitted solution: the program was focusing on something other than the desired trait, maybe the overall level of contrast or subtle differences in color.  Now the apologists cried, "Oh yeah?  Well humans fall prey to optical illusions, too!"  Indeed we do, but those illusions are well understood, while figuring out what a neural network is "thinking" remains a challenge.
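
For the record, the trick is not exotic. Here is a minimal sketch of the published "fast gradient sign" style of perturbation; the `loss_gradient` routine below is a stand-in I am assuming for whatever returns d(loss)/d(pixels) for your particular network, not any specific library's API.

```python
import numpy as np

def adversarial_example(image, true_label, loss_gradient, epsilon=0.007):
    """Nudge every pixel a tiny amount in the direction that most
    increases the classifier's loss (the 'fast gradient sign' idea)."""
    grad = loss_gradient(image, true_label)   # d(loss)/d(pixels), network-specific
    noise = epsilon * np.sign(grad)           # each pixel changes imperceptibly
    return np.clip(image + noise, 0.0, 1.0)   # still a panda to any human eye
```

A human looking at the result sees the same panda; the network, which was leaning on brittle features we never asked it to use, may now confidently report something else entirely.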

This story sent me searching for information on overfitting in neural networks and, lo, this is a major problem for NNs. We are safe, however: there are techniques to prevent overfitting.  But as I learned what these techniques are, they started to sound familiar.  You see, I did my thesis on Genetic Algorithms.  GAs are a set of search algorithms inspired by biological evolution.  Like other search algorithms (Simulated Annealing, Hill Climbing), Genetic Algorithms can go astray, for instance fixating on a so-so solution and failing to search more widely for better ones.
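
To make "going astray" concrete, here is a toy hill climber that gets stuck on whatever bump it starts near, plus the standard remedy of random restarts. The landscape and the numbers are made up purely for illustration; the analogous knobs in a GA are things like mutation rate and population diversity.

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=200):
    """Greedy local search: keep moving while a neighbor looks better."""
    for _ in range(iters):
        best_neighbor = max((x - step, x + step), key=f)
        if f(best_neighbor) <= f(x):
            return x                      # stuck on whatever bump is nearest
        x = best_neighbor
    return x

def hill_climb_with_restarts(f, lo, hi, restarts=20):
    """The standard remedy: re-run from random starting points, keep the best."""
    return max((hill_climb(f, random.uniform(lo, hi)) for _ in range(restarts)), key=f)

# A bumpy landscape: many so-so peaks, one good one near x = 0.
bumpy = lambda x: math.cos(3 * x) - 0.1 * x * x
print(hill_climb_with_restarts(bumpy, -10.0, 10.0))
```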

Here's the rub: the techniques used to keep Genetic Algorithms from failing are exactly analogous to those used for Neural Networks. It is widely understood among information scientists that the three algorithms mentioned above (GA, SA, and HC) are members of a single class of similar searches. Now, if the cool kids' Deep Learning must be tuned using the same methods and parameters as the other algorithms in this class, shouldn't we assume that Deep Learning is in the class as well, no matter how well it is hiding the fact?

No one ever claimed that Genetic Algorithms were intelligent in any way.  So why should I believe that Deep Learning is anything more than a successful probabilistic search algorithm with big-money backers?  I can safely remain a Good Old Fashioned AI curmudgeon and treat Deep Learning as "another toy application"* of Neural Networks.

* - actual comment from a reviewer of a paper I submitted based on my GA thesis work :)

Update 11/22/2016: Apparently I was mistaken... neural networks are function approximators, which are different from search algorithms but similar in many ways.  My main points still stand.


Friday, March 25, 2016

Whitewashed Sepulchers

I must admit that I find it amusing that Microsoft had to pull the plug on their Tay chatbot after it started spewing racist and fascist tweets.  Microsoft claims that the technology will help them learn to understand human interactions but, frankly, all these "Deep Learning" programs are turning out to be nothing more than really fancy search engines.  They have no understanding of, or context for, what they are taught, so when you feed them garbage they vomit garbage back at you.

On the one hand, the neural networks underlying most Deep Learning projects are black boxes: it is nearly impossible to figure out what the network is learning and how.  On the other hand, the computer scientists working in the labs at Microsoft and Google should have the wisdom to know this and be cautious.  If only!

My point here, though, is how Microsoft rolled out their Tay 'bot. They claimed it was designed to mimic the informal and joking style of a teenager, a Millennial as they call it.  Why would they make this claim?  Simple.  People expect a teenager to be rough around the edges, her jokes to be crude and her conversation to be occasionally nonsensical.  One might say that this is necessary because the project is still in its infancy and will hit a lot of bumps before it matures, much like a girl crossing into young adulthood.

I suspect that this is hogwash.  What Microsoft's hedging is covering up is the fact that their chatbot has no idea what it is saying; it merely strings together words and phrases that it has "read" on the internet. It is like a parrot, only not as intelligent.
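
If you want to see what "stringing words together" looks like in the small, here is a deliberately crude sketch (Tay's internals are not public, so this is illustrative only): a word-level Markov chain that continues text by picking a word that happened to follow the previous one in whatever it was fed.

```python
import random
from collections import defaultdict

def train(words):
    """Count which word follows which; that is the entire 'model'."""
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def babble(follows, start, length=15):
    """String words together with zero grasp of what any of them mean."""
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Feed it garbage and it cheerfully hands the garbage right back.
corpus = "the bot repeats whatever the internet taught the bot to repeat".split()
print(babble(train(corpus), "the"))
```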

We have seen this trick before:

  • Decades ago, a program called ELIZA could fool people into thinking it was sentient by merely feeding users' entries back to them in the form of a question (see the sketch after this list).
  • The winner of a recent Turing Test competition convinced 33% of the judges that it was one of the humans participating in a set of texting sessions by "creating the persona of a 13-year-old from Ukraine (young, cheeky, not entirely English-speaking). The judges may have expected less intelligent conversation and thereby were more easily fooled."
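
For the curious, the ELIZA trick really is that shallow. This is not Weizenbaum's original script, just a bare-bones sketch of the reflection idea:

```python
# Swap first- and second-person words so the user's own statement
# can be handed straight back as a question.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "you": "I"}

def eliza_reply(statement):
    words = statement.lower().rstrip(".!?").split()
    reflected = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(reflected) + "?"

print(eliza_reply("I am unhappy with my job"))
# -> Why do you say you are unhappy with your job?
```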

Lowering expectations may succeed at covering up a robot's complete lack of comprehension in situations where human lives (not to mention money!) are not at stake, but it brings us no closer to a useful artificial intelligence. It is depressing that so much effort is being put into these projects and that so many people who should know better are heralding them as steps forward.

Sunday, March 6, 2016

Slow Progress

So, a Google Car caused a little accident with a bus the other day.

In a Feb. 23 report filed with California regulators, Google said the crash took place in Mountain View on Feb. 14 when a self-driving Lexus RX450h sought to get around some sandbags in a wide lane. Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour. But three seconds later, as the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus, causing damage to the left front fender, front wheel and a driver side sensor. No one was injured in the car or on the bus.

No biggie. Accidents happen.

Then there was this:

Google said it has reviewed this incident "and thousands of variations on it in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future."
Okay, two thoughts...

1)  According to that last quote, Google engineers are apparently trying to give their car some knowledge of how bus drivers behave.  This "theory of mind" is laudable and is considered among enlightened AI researchers to be necessary to achieve human-like levels of intelligence.  What bothers me is that they are making "refinements to our software."  This sounds like they have special code to predict how buses act, how pedestrians act, how bicycles act, and how pigeons act.  I am sure these engineers are the tops in the business, but this sounds like they are building a very large yet fragile model, kind of like their plan to use a super-detailed map of the entire world so their cars never meet anything unexpected.  Big Data rules all! But it smells kinda like a dead end. Pardon the pun!
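
To show what I mean by a large yet fragile model, here is a caricature of per-road-user special-casing. The categories and numbers are entirely my invention, not Google's code:

```python
# Hypothetical hand-tuned "theory of mind": one special case per road user.
# Every new vehicle type (and every local driving culture) means another entry.
YIELD_LIKELIHOOD = {
    "bus":        0.2,   # the new "refinement": big vehicles rarely yield to us
    "car":        0.6,
    "bicycle":    0.5,
    "pedestrian": 0.9,
    "pigeon":     0.99,
}

def should_proceed(other_type, gap_seconds):
    """Creep forward only if the other party will probably let us in."""
    likelihood = YIELD_LIKELIHOOD.get(other_type, 0.5)   # unknown road user? guess.
    return likelihood > 0.5 and gap_seconds > 2.0
```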

To those who point to human drivers as the problem, I have already explained my concerns about different implementations of self-driving software introducing many of the same problems posed by human drivers.  It's just that there must now be a "theory of mind" between autonomous vehicles from different manufacturers (or between versions of software from the same manufacturer).

2) The first quote points out that the car executed its maneuver at 2 miles per hour.  This reminds me of the time a G-car was "pulled over" for doing 25 in a 35 mph zone.  Apparently Google cars drive veeeerrrryyyy slooowwwwllllyyy.  Google likes to brag about how many millions of miles their cars have safely driven.  Are they all travelling at such freakishly low speeds?  What kind of test is that?  It sounds to me like the technology is so far from real-world ready that it can be used only at extremely low speeds, and even then it struggles to get around a sandbag.  Should a car moving at 2 miles per hour really have trouble avoiding a bus moving at 15?

While the technological advances required to get even this far have been remarkable, I have seen no indication that there will be a real-world application. (Put a 25 mph governor on every car and see how safe the streets are.)

Wednesday, January 13, 2016

QOD - Software Development Methodologies

Waterfall replicates the social model of a dysfunctional organization with a defined hierarchy...
...Agile, then, replicates the social model of a dysfunctional organization without a well-defined hierarchy.

Michael O Church
https://michaelochurch.wordpress.com/2015/06/06/why-agile-and-especially-scrum-are-terrible/

Monday, December 14, 2015

Apple meets Google - at Hollywood and Vine

Imagine the scenario:
A Google Self-Driving Car is driving along, integrating smoothly with all the other Google Cars, which are in turn behaving in a predictable fashion never achieved by mere human drivers. Everything is a Geek's paradise. The Google Car pulls up to a four-way stop. Thankfully, Google's engineers have solved the four-cars-at-an-intersection problem. Thousands of Google Cars cleanly take turns at four-way stops every day.

At this particular intersection something is different. Across from the Google Car is an iCar (manufactured by Apple) signalling to make a left turn.

The Google Car detects the other vehicle, runs through the driving rules devised to assure safe passage in this situation and starts into the intersection.

At the same time the iCar detects the Google Car, runs through the driving rules devised to assure safe passage in this situation and starts into the intersection, turning left.

The Google Car brakes to avoid the imminent collision. The iCar does likewise.

The Google Car starts forward.

The iCar starts forward.

Both cars brake.

The cars are at an impasse. Traffic backs up. Human technicians are called in.

Geek's paradise lost.

Some may quibble with the details here: this or that technology might avoid one part of the story or another.  My point is that the future scenarios in which self-driving cars rule the world seem to rely on the absence of human drivers. Humans are unpredictable.  It is easy to see why software developers want to remove them from the environment.

However, humans are not the only unpredictable element in your driving world.

Among the software elite, there is a movement toward what are being called microservices.  The idea is that if we tightly define the data passed between two components (say, the program that takes your hamburger order and the one that debits your bank account), we avoid the vast majority of the computer problems in the world.  The giant hole in this utopian vision is that the first component still has to make all sorts of assumptions about what the second component is doing with that information.  Any small difference between the assumption and the actual behavior leads to nasty, difficult-to-find errors.
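
A toy example of the kind of mismatch I mean. Both services agree perfectly on the shape of the message; the bug lives in an assumption the interface definition never mentions. The field names and the cents-versus-dollars convention are mine, purely for illustration:

```python
# Service A, the order-taker: it "knows" monetary amounts are in cents.
def build_charge_request(burger_price_cents):
    return {"account": "12345", "amount": burger_price_cents}   # 599 means $5.99

# Service B, the billing service: it "knows" monetary amounts are in dollars.
def debit_account(message):
    dollars = message["amount"]              # silently treats 599 as $599.00
    return "Debited ${:.2f} from account {}".format(dollars, message["account"])

# Any schema validator would bless this exchange; the damage is in the assumption.
print(debit_account(build_charge_request(599)))   # -> Debited $599.00 from account 12345
```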

Normally, the programmer who creates a component throws all sorts of tests at it, even making sure that every line of the program is exercised at least once.  Unfortunately, those tests all incorporate the same assumptions that will cause errors out in the real world.  In our Google Car example, we can assume that Google has tested their driving computers in virtual and real-world situations using other self-driving cars (remember, humans have been banned), mostly other Google Cars. Even the non-Google test vehicles will reflect assumptions about what a Tesla or an iCar would do in a given situation.

The result of all this testing will be a smooth, hands-off transportation system.  Why? Because all the glaring problems will be found and corrected... and many of the biases and assumptions of one Google Car fit perfectly with what the other Google Car is doing, so nothing unpredictable happens.

What happens when an iCar shows up that has been tested in exactly the same manner, with certain Apple programming biases fitting together so that nothing unpredictable ever happened?  In our example above, subtle differences in the two cars' programming convinced each that it had the right of way.  This never happened during testing at Apple because the iCars are all programmed with the same invisible assumptions.  Now think about the rules for changing lanes.  Pulling out of a driveway.  Setting the speed for passing another vehicle.  Keeping a safe distance between vehicles.  The opportunities for mismatch are legion.
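
Here is the four-way-stop impasse boiled down to two perfectly reasonable, internally consistent right-of-way rules. Each works fine against copies of itself; put them across the intersection from each other and both cars conclude they should go. The rules are invented for illustration:

```python
def google_car_goes_first(me, other):
    # House rule: through traffic goes before a left-turning vehicle.
    return other["turning_left"] and not me["turning_left"]

def i_car_goes_first(me, other):
    # House rule: whoever reached the stop line first goes; a tie counts as first.
    return me["arrival"] <= other["arrival"]

google_car = {"arrival": 0.0, "turning_left": False}
i_car      = {"arrival": 0.0, "turning_left": True}

print(google_car_goes_first(google_car, i_car))   # True -> the Google Car starts forward
print(i_car_goes_first(i_car, google_car))        # True -> the iCar starts forward too
```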

I am not criticizing the engineers and programmers involved.  They have taken on a daunting task.  I just don't like hearing excuses about the only problem being human drivers when in reality the entire enterprise is riddled with potential problems that are normal and predictable in engineering complex systems.  Not addressing this up front is a sign of the kind of hubris that has become a little too common in my industry.

Don't Download Cars!

As a follow-up to my ruminations about software executives getting into the car business, this...

But don’t call the Model S an autonomous car—it’s not quite there yet, though Musk says his vision is to eventually produce fully driverless cars without steering wheels or pedals. Instead, Tesla is billing the new capabilities as “autopilot” features that will occasionally require hands on the steering wheel.
“We explicitly describe [this software update] as a beta,” Musk said at a press briefing today (Oct. 14) in California. 

Never, NEVER, NEVER load Beta software onto a freaking CAR! Beta code is by definition not ready for wide release. You can't expect normal drivers to treat your Beta as a test. This is so mind-numbingly stupid that I am considering burning my Elon Musk fanboy card.

Thursday, October 22, 2015

You Can't Download a Car

When I was a wee slip of a computer science student, I participated in a number of Software Engineering courses where the topic would come up: "Why don't we produce software like we do cars?"

The answers ranged from "Software is different" to "We don't have the discipline NOT to make a tweak just as the software is leaving the factory."

Well, we are now getting the answer to the corollary: "What happens when software engineers start making cars?" It turns out that we could have predicted the result.

I am normally a fanboy for SpaceX and Elon Musk, but this bit of news is worrying.  It seems that all those super-modern, super-fast electric sports cars made by Tesla are starting to show their age, and owners are reporting issues ranging from failing drive trains and charging systems to repeated problems with door handles to squeaks and leaks.

While a rocket needs to produce high performance for a limited amount of time, a modern automobile is expected to last 10+ years without major problems.

It will be interesting to see what happens as Google and Apple get into the car business.  Will their hubris as the darlings of the entrepreneurial world lead to a stumble in a business that requires patience and attention to unsexy issues like door handles, rather than larger-than-life personalities and planned obsolescence?

Sunday, November 25, 2012

Bill Nye the raving lunatic?

My Sunday school class demonstrated today why I probably need to find a new hobby.  In the course of starting a discussion of Adam Hamilton's "Why?" (a standard theodicy apologetic), our class president brought up Bill Nye's recent video "slamming" creationism.  The response from those in attendance was self-assured muttering about how you "can't combine faith and science," etc.  I suppose some of those who remained silent were as uncomfortable as I was, but I wanted to scream, "You can't faith away the evidence - the earth is billions of years old!!!"  Of course I had to hold my tongue and then sit on my hands while the class defended innocent suffering with silver-lining anecdotes.  Suffice it to say, I was in a bad mood all day after that.

This afternoon, I decided to look at the Nye video to see just how much foam was actually flung from his lips.  I mean, based on the reaction, I thought that he must have gone all Rush Limbaugh on them.  Well, here it is.  Jeez, he is the calmest guy I have ever seen.  He is right; teaching that the evidence of scientific investigation is of no value when it contradicts faith is to devalue the scientific enterprise as a whole.  Not a good idea if you want a technologically advanced culture.

Edit: meant to say he "slammed" creationism, not evolution

Tuesday, July 31, 2012

"Free" Markets

“There is no such thing as a free-market... A market looks free only because we so unconditionally accept its underlying restrictions that we fail to see them.”
Ha-Joon Chang

(HT Connor Kilpatrick)

Monday, April 9, 2012

What's Happening to Journalism?!

First you have NPR's statement that they will no longer practice opinions-on-the-shape-of-the-earth-vary reporting.

Next you have NBC firing the producer who "tweaked" the Zimmerman/Martin 911 tapes.

And now The National Review - THE NATIONAL REVIEW, I say! - fires a contributor for a piece on protecting yourself from "black people"

It's almost as if journalists care about the truth!

All joking aside, Strauss and Howe's Generations and The Fourth Turning predicted that society would face a crisis of failing institutions that would require a re-working of our national institutions and norms. I do believe that the above is one manifestation of this process of renewal. Another may be the Supreme Court's flirting with dismantling the mechanism underlying the operation of Medicare and other state/federal programs.