This week’s BEACON Researchers at Work post is by MSU undergraduate student Jacob Walker.
In the fall of my freshman year of college I joined a laboratory that would define my undergraduate career. I became an assistant for the Evolving Intelligence Group under Dr. Robert Pennock. My first task was to take the digital organisms evolved in Avida, an artificial life platform, and place their digital DNA into physical robots to observe their behavior. Artificial life is an interdisciplinary field that investigates life in artificial environments, either through computer simulation or through biochemistry. Avida is a software platform used by many BEACON participants in which digital organisms, similar to bacteria, reproduce and evolve in a virtual Petri dish. My research experience in the lab, and subsequently in BEACON, has revealed to me how biology can inform artificial intelligence. Many of the solutions that these organisms evolved to solve tasks have captivated me with their ingenuity, leading me to pursue a research career in Computer Science. I am currently pursuing three B.S. degrees, in Computer Science, Mathematics, and Economics, at Michigan State University.
Organism DNA in Avida consists of a list of computer instructions. Avidians are in essence computer programs embedded in a flat grid, executing clean, discrete steps to carry out an action. This platform has proved extremely valuable to the study of digital and biological evolution. However, I asked myself a question: how could artificial life extend to physically realistic environments? In the physical world, life must deal with sensory input that is constantly and smoothly changing. Complex vision, touch, and motor control use signals that simply cannot be reduced to a series of whole numbers. Additionally, animals' nervous systems are linked to their bodies. Brains receive inputs, process information, and send outputs in terms of the bodies in which they are embedded. There is no machinery in the human brain to process the eight legs of a spider, nor could a dog brain manipulate human-like fingers. Sometimes the structure of the body itself can carry out implicit computation: with robotic walking legs, for example, the weight distribution of the robot's body is essential for successful motion.
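To make the contrast concrete, here is a minimal sketch of the "genome as a program" idea: the DNA is just a circular list of discrete instructions that a tiny virtual CPU steps through, and every input and output is a whole number. The instruction names and register model below are invented for illustration; they are not Avida's actual instruction set.

```python
# Toy digital-organism interpreter: the genome is a circular list of
# discrete instructions executed one clean step at a time.
# (Hypothetical instruction set for illustration, not Avida's real one.)

GENOME = ["inc-a", "inc-a", "add", "output", "inc-b", "add", "output"]

def run(genome, steps=20):
    regs = {"a": 0, "b": 0}          # two toy integer registers
    outputs = []                     # values the organism "emits" to its world
    for ip in range(steps):
        op = genome[ip % len(genome)]    # wrap around, like a looping program
        if op == "inc-a":
            regs["a"] += 1
        elif op == "inc-b":
            regs["b"] += 1
        elif op == "add":
            regs["a"] += regs["b"]
        elif op == "output":
            outputs.append(regs["a"])    # output is a whole number, nothing more
    return outputs

print(run(GENOME))
```

Everything in this little world is discrete and integer-valued, which is exactly the property that breaks down once an organism has to cope with continuously changing physical signals.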
Many artificial life researchers before me have asked this question. One prominent pioneer is Karl Sims, who in 1994 revolutionized the field with his block-creatures. Sims co-evolved organisms' brains and bodies, giving these creatures many degrees of freedom. The organisms often evolved into creatures bearing a striking resemblance to those in the real world: cubic fish, sea snakes, and even a turtle emerged from experiments selecting for swimming ability. He also bred creatures to chase lights in arbitrary directions, steal cubes, and jump into the air.
Karl Sims's work spawned a multitude of other experiments involving artificial life in simulated physical environments. However, no software platform in the field has matched Avida in versatility, maturity, and power. Thankfully, my research ambitions were rescued by Nicolas Chaumont, a graduate student of Dr. Chris Adami. Nicolas was working on a new, powerful artificial life program called EVO that evolved block-creatures in a manner similar to Karl Sims's experiments.
I now had the means to conduct artificial life experiments in a three-dimensional environment. I first attempted to replicate some of Karl Sims's exact experiments, with positive results: my creatures evolved for swimming exhibited bodies similar to tadpoles and sea snakes. Although it took some additional time and effort, I was also able to breed organisms that could locate and swim towards light sources anywhere in their virtual ocean. My work did not stop there, however.
Karl Sims and later researchers described the emergence of these structured, intelligent organisms, but few if any have tried to understand how these block-creatures actually work. What computations are performed to achieve these behaviors? How much does the "brain" of the organism depend on the body's particular shape and size? I dared to venture into the byzantine neural networks of these evolved organisms and found that the solutions these creatures used were surprisingly, even ingeniously, simple. Some of these organisms had 30 to 40 neurons performing complex mathematical functions, but only 2 to 3 actually generated the behavior of the organism. The rest were essentially nonfunctional material, of no direct use to the organism.
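How do you tell which of those 30 to 40 neurons matter? One standard way to probe this, sketched in minimal and hypothetical form below, is a knockout (lesion) analysis: silence one neuron at a time and measure how much the behavior degrades. Neurons whose removal barely changes anything are candidates for the nonfunctional material described above. The controller, weights, and behavioral score here are invented stand-ins, not the actual networks evolved in EVO.

```python
import random

# Hypothetical knockout (lesion) analysis for a small evolved controller:
# silence each neuron in turn and measure how much a behavioral score drops.
# Neurons whose removal barely changes the score are "nonfunctional material."
# (Illustrative toy network, not the networks actually evolved in EVO.)

N = 8  # neurons in this toy recurrent controller

def make_random_controller(n=N, seed=0):
    rng = random.Random(seed)
    return [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def run_controller(weights, inputs, mask, steps=50):
    """Discrete-time recurrent net; mask[i] == 0 silences neuron i."""
    state = [0.0] * len(weights)
    for _ in range(steps):
        new_state = []
        for i, row in enumerate(weights):
            if not mask[i]:
                new_state.append(0.0)   # knocked-out neuron stays silent
                continue
            drive = sum(w * s for w, s in zip(row, state)) + inputs[i % len(inputs)]
            new_state.append(max(-1.0, min(1.0, drive)))  # clipped activation
        state = new_state
    return state[0]   # treat neuron 0's activity as the "behavior" score

weights = make_random_controller()
sensor_inputs = [0.3, -0.2]
baseline = run_controller(weights, sensor_inputs, mask=[1] * N)

for i in range(N):
    mask = [1] * N
    mask[i] = 0
    lesioned = run_controller(weights, sensor_inputs, mask)
    print(f"neuron {i}: behavior change {abs(baseline - lesioned):.3f}")
```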
It appears that evolutionary processes can yield solutions that defy human intuition. This was definitely the case with my evolved light-chasing organisms. An organism can "see" a light only through three numbers that encode the direction of the light relative to its body: the first value indicates whether the light is in front or behind, the second right or left, and the third above or below. One would expect an organism to check all three values while searching for the light. In reality, the evolved controllers checked only one of them, and that is all they need to find the light. How are they able to search for a light in any direction? They twist. Every one of them features a constant, random twisting movement, and through that twisting they eventually come to face the light and swim towards it. This reduces the organism's question to something much simpler: "Is the light in front of me or not?" If not, keep twisting. If so, move forward (a tiny sketch of this strategy appears at the end of this post). In short, I believe that intelligent artificial life has the potential to teach us both how biological life works and how intelligence may be implemented in machines.
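Here is that sketch: a minimal, hypothetical rendering of the twist-and-check strategy. The sensor reading and movement commands are stand-ins invented for illustration, not EVO's actual interface.

```python
import random

# Minimal sketch of the evolved light-chasing strategy: the controller reads
# only ONE direction value ("is the light in front of me?") and otherwise
# just keeps twisting. (Hypothetical sensor/actuator names, not EVO's API.)

def light_in_front(heading, light_direction):
    """The single sensor value the evolved controllers rely on:
    True when the light lies ahead of the organism."""
    dot = sum(h * l for h, l in zip(heading, light_direction))
    return dot > 0

def control_step(heading, light_direction):
    if light_in_front(heading, light_direction):
        return "swim-forward"   # light ahead: move toward it
    return "twist"              # otherwise keep up the constant random twist

# Toy loop: the organism starts facing away from the light and twists at random.
heading = [0.0, 0.0, 1.0]
light_direction = [1.0, 0.0, 0.0]

for step in range(200):
    if control_step(heading, light_direction) == "swim-forward":
        print(f"step {step}: facing the light, swimming forward")
        break
    heading = [h + random.uniform(-0.3, 0.3) for h in heading]  # random twist
```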
For more information about Jacob’s work, you can contact him at walke434 at msu dot edu.