
Evolutionary Algorithms: An introduction

The big picture

The term Adaptive Computing was coined as a concise description for systems that adapt themselves to changing requirements by modifying their own behaviour. That definition is just an introduction; don't put it into your encyclopedia. How do we get from there to Evolutionary Algorithms (EAs)? EAs are a subdivision of so-called Artificial Intelligence (AI). Evolutionary algorithms develop solutions based on a set of available modifiers (instructions), inputs and constraints. They iterate, in a sense, through an unknown field of potential solutions and find a near-optimal one. Usually the solution is suboptimal, but close enough to the theoretical optimum that it can be accepted and used in practice.


Charles Darwin is well known as the godfather of evolutionary theory. Genetic Algorithms (GAs), which are based on Darwinian ideas, Genetic Programming (GP), an offspring of Genetic Algorithms developed by John Koza, and other evolutionary approaches together form the subject of Evolutionary Algorithms.
Not to forget Mendel's laws of inheritance (a term one could easily confuse with inheritance in object-orientation). They allow new generations to be calculated from previously developed genes.
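
The recombination of parental genes can be sketched as a simple one-point crossover on list-based genomes. This is a minimal illustration of my own, not from any particular EA library; the function name and the list representation are assumptions:

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Combine two equal-length parent genomes at a random cut point."""
    point = random.randint(1, len(parent_a) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```

Each child inherits a prefix from one parent and a suffix from the other, so the gene pool of the pair is conserved.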


When to use the EA approach?
We could easily answer: whenever it is possible to identify and implement the basic ingredients of an EA. Most important: you need a fitness function. Next, you need a representation for the entities that embody possible solutions. If you can provide both, proceed with EAs. It is easier to judge the applicability of the approach by picking one family member of EA, such as Genetic Algorithms, and analysing that; this avoids the problem of working on a very abstract meta level. Further considerations will be necessary beyond this weblog entry.
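
To make these two ingredients concrete, here is a minimal sketch assuming the classic OneMax toy problem (my choice of example, not from the post): the representation is a list of bits, and the fitness function simply counts the 1-bits:

```python
def fitness(individual):
    """OneMax toy fitness: the number of 1-bits; higher is fitter."""
    return sum(individual)

# Representation: a candidate solution is a plain list of bits.
candidate = [1, 0, 1, 1, 0]
```

Any problem where you can write down such a quality measure and such an encoding is, in principle, open to the EA approach.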

The Evolutionary Approach

The term EA describes a concept, something abstract, on which concrete approaches are founded. It can be seen as a meta-description of other concepts: it builds the frame for the family members GA, GP and other evolutionary techniques.

Main ideas

The main ideas are:

  • Find a solution within a huge, perhaps practically infinite search space.
  • Begin with a randomly chosen population consisting of independent individuals.
  • Each individual must be interpretable with regard to its current state (i.e., its current representation).
  • The main point with EAs is to find a fitness function that measures the quality of an individual with respect to solving the given problem.
  • Based on the quality of each individual, new populations, consisting of new individuals, can be generated automatically.
  • The way new individuals are generated can be controlled by means of randomly influenced parameters.
  • Genetic operations are mainly based on Darwin's and Mendel's research.
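
The "randomly influenced parameters" mentioned above can be illustrated with a mutation operator. The sketch below assumes bitstring individuals and a per-gene mutation rate; both the function name and the default rate are my own assumptions:

```python
import random

def mutate(individual, rate=0.05):
    """Flip each bit independently with a small probability (controlled randomness)."""
    return [1 - bit if random.random() < rate else bit
            for bit in individual]
```

Tuning `rate` controls how far a child may drift from its parents: a low rate preserves good genes, a high rate approaches blind search.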
Executing an Evolutionary Algorithm

Implementing an EA is straightforward. Just perform the following steps in the given order (also see Evolutionary Algorithm Web Service):

  1. Initialization: Generate some individuals with random (but ideally valid) states.
  2. Evaluation: Use the fitness function to rank each individual.
  3. Selection: Choose your elite and remove the rest from the population.
  4. Recombination: Add some controlled randomness by applying genetic operators of your choice.
  5. Repeat steps 2 to 4 until you are satisfied with the currently best solution, or until your hard disk crashes…
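
The steps above can be sketched as a complete, minimal genetic algorithm. The example below solves the OneMax toy problem (maximise the number of 1-bits); the function name, parameters and the simple truncation selection are my own assumptions for illustration:

```python
import random

def run_ea(genome_len=20, pop_size=30, generations=50, seed=42):
    """Minimal GA for OneMax: evolve bitstrings toward all 1-bits."""
    rng = random.Random(seed)
    fitness = sum  # OneMax fitness: number of 1-bits in the genome

    # 1. Initialization: random (and here always valid) bitstring individuals
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    for _ in range(generations):
        # 2. Evaluation + 3. Selection: rank by fitness, keep the fitter half
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]

        # 4. Recombination: one-point crossover of two elite parents, plus mutation
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = rng.sample(elite, 2)
            cut = rng.randint(1, genome_len - 1)
            child = pa[:cut] + pb[cut:]
            child = [1 - b if rng.random() < 0.02 else b for b in child]
            children.append(child)

        # 5. Repeat with the new population
        pop = elite + children

    return max(pop, key=fitness)
```

With these small defaults the loop converges toward the all-ones genome within a few dozen generations, without ever enumerating the 2^20 search space.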

Lead Out

This post described EAs only from a bird's-eye view. Further entries will go into detail on concrete approaches within the EA family, and the broad applicability of EAs will be shown. From my point of view, EAs are a clever way of solving several classes of problems without needing to know the exact gestalt of the solution in advance.

      Former Member
      Just remember that there is no free lunch. Averaged over all possible search spaces, every search algorithm (which is what an EA is) performs equally well.
      Former Member
      Blog Post Author
      Well, evolutionary algorithms normally shorten the time needed to find a proper solution because they adopt nature's procedure of weighting better solutions higher and using them to develop offspring. So it is not necessary to search, on average, through half of the search space, as is the case for plain search algorithms.
      Just think of how long it would have taken to develop an organism as complex as the human one by trying out all possible solutions...
      Former Member
      Have a look at the paper "On futility of blind search"

      In choosing an evolutionary algorithm you must have some properties/restrictions on the search space in order for it to be better than random search. Typically you assume that small changes in one parameter will not make drastic changes to the "cost/performance" function.

      Former Member
      Blog Post Author
      After skimming the paper you linked to, let me state these remarks:

      On page 14, the author concludes that GA claims would equate the NP and P problem classes. As he later writes, a GA is based, among other principles, on approximation.

      In the essay it sometimes seems to me that the problems of Darwin's theory as reflected in nature and natural evolution are mixed up with the principles of Evolutionary Algorithms (like Genetic Algorithms). There is no claim that Darwin's theory is correct; no human theory is correct up to now. Quantum theory is the best approximation of reality but far from perfect. With computer programs, things are more formal and thus easier to handle.

      In my experience, with GAs and GP it is often very easy to state a problem formally, in contrast to other paradigms.

      There needs to be an awareness of the difference between Genetic Algorithms in general and a blind search that merely borrows principles of Genetic Algorithms. How do you know when a blind search has produced a reasonable solution? Well, you provide a fitness function, and that is one of the EA's principles. So we could agree that a blind search with a fitness function comparable to that of a GA, for example, will have the same overall performance in that averaged sense. In practice, however, the performance of a GA is much better than blind search for several problem classes, as one can easily experience.
      Furthermore, a mathematical equivalence such as O(n) does not mean that in practice the user has to wait the same time until an algorithm gives him useful results. Mathematically there might be no difference, regarding the O-notation, between 70 and 80 seconds average time to completion, but for the user there is.

      And don't forget that a GA gives you "hintful" intermediate results on its way to a suboptimal or optimal solution, whereas a blind search probably returns only one useful result and then stops.



      Former Member
      Without a proper fitness function the approach is of little use. Hence the problem that you are trying to solve must be well known.

      For not-so-well-known problems there is, however, the possibility of using a training data set to automatically extract a kind of fitness function, but then you need a data set that is representative and large enough to cover the entire "solution space" well. This may not always be the case.

      My experience is that this approach takes a long time to do and the result that you get may not be as good as what you have hoped for.

      Kind regards, Anders Aspnäs...