Definition of “Human-Competitive”

There are now 36 instances where genetic programming has produced a human-competitive result. These human-competitive results include 15 instances where genetic programming has created an entity that either infringes or duplicates the functionality of a previously patented 20th-century invention, 6 instances where genetic programming has done the same with respect to a 21st-century invention, and 2 instances where genetic programming has created a patentable new invention. These human-competitive results come from the fields of computational molecular biology, cellular automata, sorting networks, and the automated synthesis of both the topology and the component sizing of complex structures such as analog electrical circuits, controllers, and antennas.

What do we mean when we say that an automatically created solution to a problem is competitive with human-produced results?

In attempting to evaluate an automated problem-solving method, the question arises as to whether there is any real substance to the demonstrative problems that are published in connection with the method. Demonstrative problems in the fields of artificial intelligence and machine learning are often contrived toy problems that circulate exclusively inside academic groups that study a particular methodology. These problems typically have little relevance to any issues pursued by any scientist or engineer outside the fields of artificial intelligence and machine learning.

We say that an automatically created result is “human-competitive” if it satisfies one or more of the eight criteria below.

(A) The result was patented as an invention in the past, is an improvement over a patented invention, or would qualify today as a patentable new invention.

(B) The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.

(C) The result is equal to or better than a result that was placed into a database or archive of results maintained by an internationally recognized panel of scientific experts.

(D) The result is publishable in its own right as a new scientific result, independent of the fact that the result was mechanically created.

(E) The result is equal to or better than the most recent human-created solution to a long-standing problem for which there has been a succession of increasingly better human-created solutions.

(F) The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.

(G) The result solves a problem of indisputable difficulty in its field.

(H) The result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).

These eight criteria have the desirable attribute of being at arm's length from the fields of artificial intelligence, machine learning, and genetic programming. That is, each criterion requires a result that stands on its own merit, not on the fact that the result was mechanically produced. In particular, a result cannot acquire the rating of "human-competitive" merely because it is considered "interesting" by researchers inside the specialized fields that are attempting to create machine intelligence. Instead, a result produced by an automated method must earn the rating of "human-competitive" independent of the fact that it was generated by an automated method. These eight criteria are discussed in detail in Genetic Programming III: Darwinian Invention and Problem Solving (Koza, Bennett, Andre, and Keane 1999) and in Genetic Programming IV: Routine Human-Competitive Machine Intelligence (Koza, Keane, Streeter, Mydlowec, Yu, and Lanza 2003).

Certainly, proof-of-principle ("toy") problems are occasionally useful for tutorial or introductory purposes. However, we believe that, after 50 years, it is time for the fields of artificial intelligence and machine learning to start delivering non-trivial results that satisfy the test of being competitive with human performance.


· The home page of Genetic Programming Inc. at www.genetic-programming.com.

· For information about the field of genetic programming and the field of genetic and evolutionary computation, visit www.genetic-programming.org.

· The home page of John R. Koza at Genetic Programming Inc. (including online versions of most published papers) and the home page of John R. Koza at Stanford University

· For information about John Koza’s course on genetic algorithms and genetic programming at Stanford University

· Information about the 1992 book Genetic Programming: On the Programming of Computers by Means of Natural Selection, the 1994 book Genetic Programming II: Automatic Discovery of Reusable Programs, the 1999 book Genetic Programming III: Darwinian Invention and Problem Solving, and the 2003 book Genetic Programming IV: Routine Human-Competitive Machine Intelligence. Click here to read chapter 1 of the Genetic Programming IV book in PDF format.

· 3,440 published papers on genetic programming (as of November 28, 2003) in a searchable bibliography (with many on-line versions of papers) by over 880 authors, maintained by William Langdon and Steven M. Gustafson.

· For information on the Genetic Programming and Evolvable Machines journal published by Kluwer Academic Publishers

· For information on the Genetic Programming book series from Kluwer Academic Publishers, see the Call For Book Proposals

· For information about the annual Genetic and Evolutionary Computation Conference (GECCO), which includes the annual GP conference, to be held on June 26–30, 2004 (Saturday–Wednesday) in Seattle, and its sponsoring organization, the International Society for Genetic and Evolutionary Computation (ISGEC).

· For information about the annual Euro-Genetic-Programming Conference, to be held on April 5–7, 2004 (Monday–Wednesday) at the University of Coimbra in Coimbra, Portugal.

· For information about the 2003 and 2004 Genetic Programming Theory and Practice (GPTP) workshops held at the University of Michigan in Ann Arbor.

· For information about the Asia-Pacific Workshop on Genetic Programming (ASPGP03) held in Canberra, Australia on December 8, 2003.

· For information about the annual NASA/DoD Conference on Evolvable Hardware (EH), to be held on June 24–26, 2004 (Thursday–Saturday) in Seattle.


Last updated on December 27, 2003