Are Genetic Algorithms Efficient?
Much of December was spent struggling with Lisp, the List Processing language. I believed I could take the genetic-algorithm techniques I had used for advanced pattern recognition and apply them to an emerging field called ‘website morphing’, the subject of a forthcoming paper. A ‘morphing’ website is one that actively changes itself in response to user input. I’m not talking about changing which pages are displayed based on a series of clicked links, as in Web 1.0, but about changing based on how a user is interacting with the site. It’s a direct use of web analytics to enhance the user experience in real time (if we take the notion of morphing to its extreme).
The application of genetic algorithms is a relatively young field. Scientists have been playing with them for at least three decades, going back to the era of the PDP-11 and the delicious ability of a machine to rewrite its own executable code. Recently, the field has really started to take off thanks to a branch of complexity economics called “evolutionary economics”: a method of grappling with the complexity of the world the way biological systems do.
In retrospect, much of my work in the field of genetic algorithms was based on a fairly ignorant, or idealistic, view of complexity and aggregate behavior. I believed that core human behavior is quasi-random, and that one could discover patterns therein. Feed a genetic algorithm a set of inputs and a success criterion, and it can converge on the optimal equation in very short order. For instance, a genetic algorithm could determine the equation for calculating the area of a circle without ever being told that equation. Genetic algorithms can learn, but it’s much like a million monkeys pounding at typewriters: there isn’t any intelligence implied there.
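To make that concrete, here’s a toy sketch in Python. Real genetic programming would evolve whole expression trees; to keep things short, this version evolves a single coefficient c in A = c·r², scored against known (radius, area) pairs, and converges on π. The function names and parameters here are my own, purely for illustration.

```python
import random

# Toy example: evolve the coefficient c in A = c * r**2 until it
# converges on pi, using only (radius, area) examples. The loop is the
# classic genetic-algorithm cycle: score, select, breed, mutate.

TRAINING = [(r, 3.14159265 * r ** 2) for r in range(1, 11)]  # (radius, area) pairs

def fitness(c):
    # Lower is better: total squared error against the training data.
    return sum((c * r ** 2 - area) ** 2 for r, area in TRAINING)

def evolve(pop_size=50, generations=200, mutation=0.1):
    population = [random.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)               # score
        survivors = population[: pop_size // 2]    # select the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a + b) / 2                    # crossover: blend two parents
            child += random.gauss(0, mutation)     # mutation: small random nudge
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

print(evolve())  # prints a value very close to 3.14159...
```

Note that the algorithm never “knows” the formula for a circle’s area; it simply stumbles toward π because candidates nearer to π score better. That’s the monkeys-at-typewriters point in miniature.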
One of the problems with genetic programming is efficiency, and this concept of “the efficiency of algorithms” is quixotic.
Consider lists. Assume an unordered list of 100 numbers: 44, 908, 233, and so on. You’re asked to sort them as efficiently as possible. There are a number of methods for putting an unordered list into an ordered state. One method is called a bubble sort: starting at the beginning of the list, you compare each number with its immediate neighbor, and swap the pair if they’re out of order. By the end of the first pass, the largest number has “bubbled up” to the top-most position in the list. Then the pass starts all over again for the remaining numbers, and again, until no swaps are needed. This algorithm of constant comparison is a prime example of what I’d call a greedy algorithm. It’s not very efficient. (If the list starts out sorted from highest to lowest, the number of comparison and swapping operations scales with the square of the list’s length.)
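Here’s what that looks like as runnable Python (the sample numbers beyond the three above are just filler):

```python
def bubble_sort(items):
    # Repeatedly sweep the list, swapping adjacent out-of-order pairs.
    # Each pass bubbles the largest remaining value to the end, so a
    # reverse-sorted list costs roughly n**2 comparisons and swaps.
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return items

print(bubble_sort([44, 908, 233, 17, 512]))  # [17, 44, 233, 512, 908]
```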
There are other algorithms, like selection sorts (which at least keep the number of swaps down) or heap sorts (which cut the comparisons to O(n log n)), that are more sophisticated in how they compare, and as a result, are much more efficient.
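A minimal heap sort sketch, leaning on Python’s standard heapq module:

```python
import heapq

def heap_sort(items):
    # Build a binary min-heap in O(n), then pop the smallest element
    # n times at O(log n) apiece, for O(n log n) overall.
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(items))]

print(heap_sort([44, 908, 233, 17, 512]))  # [17, 44, 233, 512, 908]
```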
Are genetic algorithms – which are essentially millions of experiments that are not driven by an initial, informed hypothesis – efficient?
The answer to this question, in my mind, is absolutely critical to an entire body of mathematics and web analytics I’m terming “Web 4.0 – the morphing web”. It’s Web 4.0 because the technology would probably rely on Web 3.0 technologies, the “semantic web”, as a key input (but not necessarily).
I’m not good at proofs. I am, however, very good at writing some of the greediest programming methods ever. As a result, I’m probably not the right person to answer that fundamental question.
If genetic algorithms are not greedy, their application as the primary driver of dynamic morphing would be key. If they are greedy, it’s time to move along.