Seven Differences Between Genetic Programming and Other Approaches to Machine Learning and Artificial Intelligence

Genetic programming is not the only possible approach to the challenge of getting a computer to do what needs to be done, without telling it how to do it. However, genetic programming currently possesses 16 important attributes that one would expect of a system of automatic programming. In contrast, no other method of artificial intelligence, machine learning, neural networks, adaptive systems, reinforcement learning, or automated logic currently possesses more than a few of these 16 attributes. Moreover, genetic programming has now automatically produced numerous instances of human-competitive machine intelligence.

Genetic programming is different from all other approaches to artificial intelligence, machine learning, neural networks, adaptive systems, reinforcement learning, or automated logic in all (or most) of the following seven ways:

(1) Representation: Genetic programming overtly conducts its search for a solution to the given problem in program space.

(2) Role of point-to-point transformations in the search: Genetic programming does not conduct its search by transforming a single point in the search space into another single point, but instead transforms a set of points into another set of points.

(3) Role of hill climbing in the search: Genetic programming does not rely exclusively on greedy hill climbing to conduct its search, but instead allocates a certain number of trials, in a principled way, to choices that are known to be inferior.

(4) Role of determinism in the search: Genetic programming conducts its search probabilistically.

(5) Role of an explicit knowledge base: None.

(6) Role of formal logic in the search: None.

(7) Underpinnings of the technique: Biologically inspired.

First, consider the issue of representation. Most techniques of artificial intelligence, machine learning, neural networks, adaptive systems, reinforcement learning, or automated logic employ specialized structures in lieu of ordinary computer programs. These surrogate structures include if-then production rules, Horn clauses, decision trees, Bayesian networks, propositional logic, formal grammars, binary decision diagrams, frames, conceptual clusters, concept sets, numerical weight vectors (for neural nets), vectors of numerical coefficients for polynomials or other fixed expressions (for adaptive systems), genetic classifier system rules, fixed tables of values (as in reinforcement learning), or linear chromosome strings (as in the conventional genetic algorithm).

Tellingly, except in unusual situations, the world's several million computer programmers do not use any of these surrogate structures for writing computer programs. Instead, for five decades, human programmers have persisted in writing computer programs that intermix a multiplicity of types of computations (e.g., arithmetic and logical) operating on a multiplicity of types of variables (e.g., integer, floating-point, and Boolean). Programmers have persisted in using internal memory to store the results of intermediate calculations in order to avoid repeating the calculation on each occasion when the result is needed. They have persisted in using iterations and recursions. They have similarly persisted in organizing useful sequences of operations into reusable groups (subroutines) so that they avoid reinventing the wheel on each occasion when they need a particular sequence of operations. Moreover, they have persisted in passing parameters to subroutines so that they can reuse their subroutines with different instantiations of values. And they have persisted in organizing their subroutines into hierarchies.

All of the above tools of ordinary computer programming have been in use since the beginning of the era of electronic computers in the 1940s. Significantly, none has fallen into disuse by human programmers. Yet, in spite of the manifest utility of these everyday tools of computer programming, these tools are largely absent from existing techniques of automated machine learning, neural networks, artificial intelligence, adaptive systems, reinforcement learning, and automated logic. On one of the relatively rare occasions when one or two of these everyday tools of computer programming are available within the context of one of these automated techniques, they are usually available only in a hobbled and barely recognizable form. In contrast, genetic programming draws on the full arsenal of tools that human programmers have found useful for five decades. It conducts its search for a solution to a problem overtly in the space of computer programs. Our view is that computer programs are the best representation of computer programs. We believe that the search for a solution to the challenge of getting computers to solve problems without explicitly programming them should be conducted in the space of computer programs.

Of course, once you realize that the search should be conducted in program space, you are immediately faced with the task of finding the desired program in the enormous space of possible programs. As will be seen, genetic programming performs this task of program discovery. It provides a problem-independent way to productively search the space of possible computer programs to find a program that satisfactorily solves the given problem.
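The idea of conducting the search directly in program space can be made concrete with a toy sketch. The encoding below is hypothetical (not taken from any particular genetic programming system): a candidate program is an expression tree over {+, -, *}, a variable x, and small integer constants, and fitness measures how closely the program's behavior matches a target behavior over sample inputs.

```python
import random

# A candidate program is an expression tree: a tuple (op, left, right),
# the variable "x", or an integer constant. Hypothetical minimal encoding.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_program(depth=3):
    """Grow a random expression tree of at most `depth` levels."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-5, 5)
    op = random.choice(list(OPS))
    return (op, random_program(depth - 1), random_program(depth - 1))

def run(program, x):
    """Execute a program tree on the input x."""
    if program == "x":
        return x
    if isinstance(program, int):
        return program
    op, left, right = program
    return OPS[op](run(left, x), run(right, x))

def fitness(program):
    """Total error against the target behavior x**2 + 1 (lower is better)."""
    return sum(abs(run(program, x) - (x * x + 1)) for x in range(-5, 6))

# Even blind sampling is already a search in program space; genetic
# programming replaces this blind sampling with an evolutionary search.
random.seed(0)
best = min((random_program() for _ in range(1000)), key=fitness)
print(best, fitness(best))
```

Note that each point visited by the search is itself an executable program; fitness is measured by running it, not by inspecting a surrogate structure.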

Second, another difference between genetic programming and almost every automated technique concerns the nature of the search conducted in the technique's chosen search space. Almost all of these non-genetic methods employ a point-to-point strategy that transforms a single point in the search space into another single point. Genetic programming is different in that it operates by explicitly cultivating a diverse population of often-inconsistent and often-contradictory approaches to solving the problem. Genetic programming performs a beam search in program space by iteratively transforming one population of candidate computer programs into a new population of programs.

Third, consider the role of hill climbing. When the trajectory through the search space is from one single point to another single point, there is a nearly irresistible temptation to extend the search only by moving to a point that is known to be superior to the current point. Consequently, almost all automated techniques rely exclusively on greedy hill climbing to make the transformation from the current point in the search space to the next point. The temptation to rely on hill climbing is reinforced because many of the toy problems in the literature of the fields of machine learning and artificial intelligence are so simple that they can, in fact, be solved by hill climbing. However, popularity cannot cure the innate tendency of hill climbing to become trapped on a local optimum that is not a global optimum. Interesting and nontrivial problems generally have high-payoff points that are inaccessible to greedy hill climbing. In fact, the existence of points in the search space that are not accessible to hill climbing is a good working definition of non-triviality. The fact that genetic programming does not rely on a point-to-point search strategy helps to liberate it from the myopia of hill climbing. Genetic programming is free to allocate a certain measured number of trials to points that are known to be inferior. This allocation of trials to known-inferior individuals is not motivated by charity, but in the expectation that it will often unearth an unobvious trajectory through the search space leading to points with an ultimately higher payoff. The fact that genetic programming operates from a population enables it to make a small number of adventurous moves while simultaneously pursuing the more immediately gratifying avenues of advance through the search space. Of course, genetic programming is not the only search technique that avoids mere hill climbing. 
For example, both simulated annealing (Kirkpatrick, Gelatt, and Vecchi 1983; Aarts and Korst 1989) and genetic algorithms (Holland 1975) allocate a certain number of trials to inferior points in a similar principled way. However, most of the techniques currently used in the fields of artificial intelligence, machine learning, neural networks, adaptive systems, reinforcement learning, or automated logic are trapped on the local optimum of hill climbing.
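One principled way to allocate a measured number of trials to known-inferior candidates is fitness-proportionate ("roulette wheel") selection, sketched below with made-up fitness values: weaker individuals are sampled less often, but never excluded outright.

```python
import random

def selection_probabilities(fitnesses):
    """Probability of selection proportional to fitness (higher = better)."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

fitnesses = [10.0, 5.0, 3.0, 1.0, 1.0]      # illustrative values only
probs = selection_probabilities(fitnesses)
print(probs)                                 # worst individuals retain a 5% share each

# Sampling 1000 parents according to these probabilities: the best
# individual is chosen most often, yet the worst still receives trials.
random.seed(2)
parents = random.choices(range(len(fitnesses)), weights=fitnesses, k=1000)
print(parents.count(0), parents.count(4))
```

A greedy hill climber would give the inferior candidates a selection probability of exactly zero; the nonzero tail of this distribution is what lets the search escape local optima.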

Fourth, another difference between genetic programming and almost every other technique of artificial intelligence and machine learning is that genetic programming conducts a probabilistic search. Again, genetic programming is not unique in this respect. For example, simulated annealing and genetic algorithms are also probabilistic. However, most existing automated techniques are deterministic.

Fifth, consider the role of a knowledge base in the pursuit of the goal of automatically creating computer programs. Many computer scientists unquestioningly assume that formal logic must play a preeminent role in any system for automatically creating computer programs. Similarly, the vast majority of contemporary researchers in artificial intelligence believe that a system for automatically creating computer programs must employ an explicit knowledge base. Indeed, over the past four decades, the field of artificial intelligence has been dominated by the strongly asserted belief that the goal of getting a computer to solve problems automatically can be achieved only by means of formal logic inference methods and knowledge. This approach typically entails the selection of a knowledge representation, the acquisition of the knowledge, the codification of the knowledge into a knowledge base, the depositing of the knowledge base into a computer, and the manipulation of the knowledge in the computer using the inference methods of formal logic. As Lenat (1983) stated,

"All our experiences in AI research have led us to believe that for automatic programming, the answer lies in knowledge, in adding a collection of expert rules which guide code synthesis and transformation. (Emphasis in original)."

However, the existence of a strenuously asserted belief for four decades does not, in itself, validate the belief. Nor does the popularity of a belief preclude the possibility that there might be an alternative way of achieving a particular goal. Conspicuously, genetic programming does not rely on an explicit knowledge base to achieve the goal of automatically creating computer programs. While there are numerous optional ways to incorporate domain knowledge into a run of genetic programming, genetic programming does not require (or usually use) an explicit knowledge base to guide its search.

Sixth, consider the role of the inference methods of formal logic. Many computer scientists unquestioningly assume that every problem-solving technique must be logically sound, deterministic, logically consistent, and parsimonious. Accordingly, most conventional methods of artificial intelligence and machine learning possess these characteristics. However, logic does not govern two of the most important types of complex problem-solving processes, namely, the invention process performed by creative humans and the evolutionary process occurring in nature.

A new idea that can be logically deduced from facts that are known in a field, using transformations that are known in a field, is not considered to be an invention. There must be what the patent law refers to as an "illogical step" (i.e., an unjustified step) to distinguish a putative invention from that which is readily deducible from that which is already known. Humans supply the critical ingredient of "illogic" to the invention process. Interestingly, everyday usage parallels the patent law concerning inventiveness: people who mechanically apply existing facts in well-known ways are summarily dismissed as being uncreative. Logical thinking is unquestionably useful for many purposes. It usually plays an important role in setting the stage for an invention. But, at the end of the day, logical thinking is insufficient for invention and creativity.

Recalling his invention in 1927 of the negative feedback amplifier, Harold S. Black (1977) of Bell Laboratories said

"Then came the morning of Tuesday, August 2, 1927, when the concept of the negative feedback amplifier came to me in a flash while I was crossing the Hudson River on the Lackawanna Ferry, on my way to work. For more than 50 years, I have pondered how and why the idea came, and I can't say any more today than I could that morning. All I know is that after several years of hard work on the problem, I suddenly realized that if I fed the amplifier output back to the input, in reverse phase, and kept the device from oscillating (singing, as we called it then),

"I would have exactly what I wanted: a means of canceling out the distortion of the output. I opened my morning newspaper and on a page of The New York Times, I sketched a simple canonical diagram of a negative feedback amplifier plus the equations for the amplification with feedback.

Of course, inventors are not oblivious to logic and knowledge. They do not thrash around using blind random search. Black did not try to construct the negative feedback amplifier from neon bulbs or doorbells. Instead, "several years of hard work on the problem" set the stage and brought his thinking into the proximity of a solution. Then, at the critical moment, Black made his "illogical" leap. This unjustified leap constituted the invention.
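The "equations for the amplification with feedback" that Black sketched reduce, in modern textbook form, to the closed-loop gain of a negative feedback amplifier (a standard result, added here for context rather than taken from the source):

$$A_f = \frac{A}{1 + A\beta}$$

where $A$ is the open-loop gain and $\beta$ is the fraction of the output fed back in reverse phase. When $A\beta \gg 1$, the closed-loop gain approaches $1/\beta$, nearly independent of $A$, which is why feedback cancels the amplifier's own distortion and drift.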

The design of complex entities by the evolutionary process in nature is another important type of problem solving that is not governed by logic. In nature, solutions to design problems are discovered by the probabilistic process of evolution and natural selection. This is not a logical process. Indeed, inconsistent and contradictory alternatives abound. In fact, such genetic diversity is necessary for the evolutionary process to succeed. Significantly, the solutions created by evolution and natural selection almost always differ from those created by conventional methods of artificial intelligence and machine learning in one very important respect. Evolved solutions are not brittle; they are usually able to grapple with the perpetual novelty of real environments.

Similarly, genetic programming is not guided by the inference methods of formal logic in its search for a computer program to solve a given problem. When the goal is the automatic creation of computer programs, all of our experience has led us to conclude that the non-logical approaches used in the invention process and in natural evolution are far more fruitful than the logic-driven and knowledge-based principles of conventional artificial intelligence. In short, "logic considered harmful."

Seventh, the biological metaphor underlying genetic programming is very different from the underpinnings of all other techniques that have previously been tried in pursuit of the goal of automatically creating computer programs. Many computer scientists and mathematicians are baffled by the suggestion that biology might be relevant to their fields. In contrast, we do not view biology as an unlikely well from which to draw a solution to the challenge of getting a computer to solve a problem without explicitly programming it. Quite the contrary: we view biology as a most likely source. Indeed, genetic programming is based on the only method that has ever produced intelligence, the time-tested method of evolution and natural selection. As Stanislaw Ulam said in his 1976 autobiography (1991),

"[Ask] not what mathematics can do for biology, but what biology can do for mathematics."

The notion that artificial intelligence may be realized by using a biological approach is, of course, not new. Turing made the connection between searches and the challenge of getting a computer to solve a problem without explicitly programming it in his 1948 essay "Intelligent Machinery" (Ince 1992). (Longer versions of the quotations below appear in Section 2.4.1.)

"Further research into intelligence of machinery will probably be very greatly concerned with 'searches' " . . .

Turing then identified three broad approaches by which search might be used to automatically create an intelligent computer program.

One approach that Turing identified is a search through the space of integers representing candidate computer programs. Another approach is the "cultural search," which relies on knowledge and expertise acquired over a period of years from others (akin to present-day knowledge-based systems). The third approach that Turing specifically identified is "genetical or evolutionary search." Turing said

"There is the genetical or evolutionary search by which a combination of genes is looked for, the criterion being the survival value. The remarkable success of this search confirms to some extent the idea that intellectual activity consists mainly of various kinds of search."

Turing did not specify in this essay how to conduct a "genetical or evolutionary search" for a computer program. However, his 1950 paper "Computing Machinery and Intelligence" suggested how natural selection and evolution might be incorporated into the search for intelligent machines.

"We cannot expect to find a good child-machine at the first attempt. One must experiment with teaching one such machine and see how well it learns. One can then try another and see if it is better or worse. There is an obvious connection between this process and evolution, by the identifications

"Structure of the child machine = Hereditary material

"Changes of the child machine = Mutations

"Natural selection = Judgment of the experimenter"

Our work in genetic programming confirms Turing's view that there is indeed a "connection" between machine intelligence and evolution: it is an implementation of Turing's third way to achieve machine intelligence.


Additional information on genetic programming can be found in the book Genetic Programming III: Darwinian Invention and Problem Solving and its accompanying videotape. Visit www.genetic-programming.org for citations and links to numerous other authored books, edited collections of papers, conference proceedings, and individual published papers concerning genetic programming.


· The home page of Genetic Programming Inc. at www.genetic-programming.com.

· For information about the field of genetic programming in general, visit www.genetic-programming.org

· The home page of John R. Koza at Genetic Programming Inc. (including online versions of most papers) and the home page of John R. Koza at Stanford University

· Information about the 1992 book Genetic Programming: On the Programming of Computers by Means of Natural Selection, the 1994 book Genetic Programming II: Automatic Discovery of Reusable Programs, the 1999 book Genetic Programming III: Darwinian Invention and Problem Solving, and the 2003 book Genetic Programming IV: Routine Human-Competitive Machine Intelligence. Chapter 1 of the Genetic Programming IV book is available in PDF format.

· For information on 3,198 papers (many on-line) on genetic programming (as of June 27, 2003) by over 900 authors, see William Langdon’s bibliography on genetic programming.

· For information on the Genetic Programming and Evolvable Machines journal published by Kluwer Academic Publishers

· For information on the Genetic Programming book series from Kluwer Academic Publishers, see the Call For Book Proposals

· For information about the annual Genetic and Evolutionary Computation (GECCO) conference (which includes the annual GP conference), to be held on June 26–30, 2004 (Saturday–Wednesday) in Seattle, and its sponsoring organization, the International Society for Genetic and Evolutionary Computation (ISGEC).

· For information about the annual NASA/DoD Conference on Evolvable Hardware (EH), to be held on June 24–26, 2004 (Thursday–Saturday) in Seattle.

· For information about the annual Euro-Genetic-Programming Conference, to be held on April 5–7, 2004 (Monday–Wednesday) at the University of Coimbra in Coimbra, Portugal.


Last updated on August 27, 2003