Article Preview
COLUMN
Genetic Algorithms
Polish Your Chromosomes
Issue: 1.6 (June/July 2003)
Author: Matt Neuburg
Author Bio: Matt Neuburg is the author of
Article Description: No description available.
Article Length (in bytes): 11,167
Starting Page Number: 34
Article Number: 1617
Related Web Link(s):
http://www.scs.carleton.ca/~csgs/resources/gaal.html
http://www-106.ibm.com/developerworks/linux/library/l-genperl2/?ca=dgr-lnxw03GenAlg2
http://www.geatbx.com/docu/algoverv.html
Excerpt of article text...
Genetic algorithms are the subject of a great deal of research and scientific literature (see reference 1). I don't know enough about them to have an opinion as to whether they should truly be considered algorithms at all, nor do I know whether they can really be useful problem-solving devices or whether they are more a kind of game or simulation. However, they are certainly an intriguing topic, and lots of fun to play with; and there isn't the slightest reason why REALbasic shouldn't be your playpen.
My own mild foray into the world of genetic algorithms was sparked by an online article by Teodor Zlatanov implementing some genetic algorithms in Perl (see reference 2). At first I merely wanted to translate Zlatanov's implementation into REALbasic. But I soon discovered some incoherencies in his algorithm, and then, in the course of correcting these, found myself exploring genetic algorithms a little more deeply.
A genetic algorithm is a simulation of Mendelian/Darwinian evolution. Start with a population of individuals, each endowed with some DNA. We regard each individual's DNA as a possible answer to a problem, and we rank it according to how good an answer it is. We take the individuals with the best DNA and mate them, generating a new population, repeating the process again and again. The idea is that over many generations the population's DNA should be greatly improved, with the best DNA representing a much better answer to our problem. This answer, you understand, will not be the product of our own devising, but of the rules of selection and breeding, combined with the randomness inherent in those rules. Evolution itself will "find" an answer.
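The select-and-breed cycle just described can be sketched as a short program. The article's implementation is in REALbasic; this is a minimal Python sketch, and the population size, DNA length, crossover scheme, mutation rate, and the placeholder fitness function are all my own illustrative assumptions, not the article's values.

```python
import random

DNA_LEN = 32    # bytes of DNA per individual (assumed)
POP_SIZE = 50   # individuals per generation (assumed)

def random_dna():
    return bytes(random.randrange(256) for _ in range(DNA_LEN))

def fitness(dna):
    # Placeholder ranking: a real run would score the DNA as an
    # answer to the problem; here we just count even bytes.
    return sum(1 for b in dna if b % 2 == 0)

def mate(a, b):
    # One-point crossover plus occasional point mutation (assumed scheme).
    cut = random.randrange(1, DNA_LEN)
    child = bytearray(a[:cut] + b[cut:])
    if random.random() < 0.05:
        child[random.randrange(DNA_LEN)] = random.randrange(256)
    return bytes(child)

def evolve(generations=100):
    pop = [random_dna() for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]  # the best DNA gets to breed
        pop = [mate(random.choice(parents), random.choice(parents))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)
```

Nothing in the loop "knows" the answer; the fitness function only ranks candidates, and selection plus randomness does the rest.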
My example, which comes directly from Zlatanov, uses a bytestream as the individual's DNA. We translate every byte into a character, either a digit (0 through 9) or a capital letter (A through Z) or a space, thus ending up with a sequence of "words", which we enter into an array; we then evaluate the DNA by treating this array, word by word, as follows.
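The byte-to-word decoding might look like the following Python sketch. The exact byte-to-character mapping is my assumption (each byte reduced modulo 37 onto the 37-character alphabet of digits, capital letters, and space); the article does not spell out the mapping in this excerpt.

```python
# 37 possible characters: 0-9, A-Z, and a space (assumed mapping)
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def dna_to_words(dna: bytes) -> list[str]:
    # Translate every byte into one character, then split the
    # resulting text on spaces to get the array of "words".
    text = "".join(ALPHABET[b % len(ALPHABET)] for b in dna)
    return text.split()

print(dna_to_words(bytes([9, 11, 36, 4, 6, 36, 22])))  # → ['9B', '46', 'M']
```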
The DNA's initial value is zero. A word like "9BJ6" is bad, and reduces the cumulative value of the DNA by multiplying it by 0.8. A word composed entirely of digits, like "46", adds to the cumulative value of the DNA a number of points equal to its length. The word "M" doubles the cumulative value of the DNA. And a word like "AG12589" -- the letter "A" followed by only non-digits followed by only digits -- multiplies the cumulative value of the DNA by the numeric value of its digits part.
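The four scoring rules above translate directly into a fitness function. This is a Python sketch (the article's code is REALbasic), assuming the DNA has already been decoded into a list of words:

```python
import re

def score(words: list[str]) -> float:
    value = 0.0
    for w in words:
        if w.isdigit():
            value += len(w)   # all digits: add the word's length
        elif w == "M":
            value *= 2        # the word "M": double the value
        elif re.fullmatch(r"A\D*\d+", w):
            # "A", then only non-digits, then only digits:
            # multiply by the numeric value of the digit part
            value *= int(re.search(r"\d+$", w).group())
        else:
            value *= 0.8      # any other word is bad
    return value
```

Note that order matters: since multiplicative rules act on the running total, a word like "AG12589" is worthless at the start of the DNA (multiplying zero) but enormously valuable after some digit words have accumulated points.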
Now, to be sure, we're not really attempting to solve any earth-shattering problem here; we're just playing around, to see how our population will evolve over time in response to the pressure of these rules. The interesting thing is that it evolves very well indeed. In a typical run of 100 generations, starting with completely random DNA, I ended up with a generation where the best individual's DNA (ignoring extra spaces) was this:
...End of Excerpt. Please purchase the magazine to read the full article.