BEACON Researchers at Work: Evolving division of labor

This week’s BEACON Researchers at Work post is by MSU graduate student Anya Johnson.

Have you ever looked around you and thought about the amazing feats that organisms accomplish together? The most obvious examples are, of course, everything that humans have made, but if you look closer, there are a lot of species that work together to master their habitat. Eusocial insects (like ants and termites) make incredible structures to live in. I’m interested in how organisms take a large task that needs to be accomplished (the entire termite mound) and break it into small pieces (building just this part of the mound). While humans can easily discuss who is going to do what, many groups of animals have evolved their own strategies for breaking apart a large task. This process of breaking a problem down into smaller (and easier) pieces is called problem decomposition.

Photo by Steve Corey: http://www.flickr.com/photos/stevecorey/7281531296/


I am currently using the digital evolution platform Avida to study what processes contribute to the evolution of problem decomposition and how computer scientists could harness problem decomposition to solve very difficult problems in computer science. I’m focusing on how groups of organisms in Avida can coordinate to break complex tasks down into smaller pieces that can be solved by individual organisms. In Avida, digital organisms can perform tasks, such as math problems (6A + 2B) or logic operations (NOT, OR), to get rewards. I’ve set it up so that a group of organisms has to work together to solve a certain number of these tasks before the group as a whole gets to reproduce. This group reproduction dynamic is a lot like a simplified version of an ant colony. Most of the ant colony focuses on working instead of reproducing. Since the queen is the mother of most of the workers, the whole colony is closely related. The colony passes on its genes when a new queen is born and goes off to start a new colony. In my experiments, when the group has performed enough tasks, a new group is started from the genetic material of the successful group. Because there are only so many places a group can live, eventually new groups start killing off the old groups. This creates competition between groups of organisms to reproduce quickly in order to keep their lineage alive.
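To make that group-reproduction dynamic concrete, here is a minimal toy model in Python. This is my own sketch, not Avida code, and all of the names and numbers are illustrative: groups accumulate solved tasks, and any group that reaches the threshold spawns an offspring group that displaces a randomly chosen resident, because living space is limited.

```python
import random

NUM_GROUPS = 5        # limited number of places a group can live
TASK_THRESHOLD = 10   # tasks a group must solve before reproducing

def run(generations=100, seed=42):
    rng = random.Random(seed)
    # in this toy model, a group is just a lineage id plus a task counter
    groups = [{"lineage": i, "solved": 0} for i in range(NUM_GROUPS)]
    for _ in range(generations):
        for g in groups:
            # groups solve tasks at different rates each generation
            g["solved"] += rng.randint(0, 3)
        for g in list(groups):
            if g["solved"] >= TASK_THRESHOLD:
                # the offspring inherits the lineage and displaces an
                # existing group, since space is limited
                victim = rng.randrange(NUM_GROUPS)
                groups[victim] = {"lineage": g["lineage"], "solved": 0}
                g["solved"] = 0
    return [g["lineage"] for g in groups]
```

Run this long enough and slower lineages get overwritten by faster ones, which is the between-group competition described above in miniature.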

Previously, Heather Goldsby and her colleagues discovered that under environmental conditions that favor division of labor, organisms within groups would communicate solutions to simple problems that could then be used as building blocks for more complex problems. Specifically, for this work, they used a suite of nine logic tasks of increasing complexity. It was faster for a couple of organisms to solve the simplest tasks and send those answers to another organism. When that other organism received the messages, it was able to combine the simple answers into a more complex solution. In this way, the group broke down the most complex task into the simpler tasks. In fact, Goldsby et al. found that when the organism that could solve the most complex task was taken out of the group, it could no longer perform that task, because it relied on the messages from the other organisms.
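As a toy illustration of that messaging pattern (my own Python sketch, not the Avidians’ evolved code): NOR(a, b) is the same as NOT(a) AND NOT(b), so two senders can each solve NOT on one input and message their answers, and a receiver produces the more complex NOR just by ANDing the messages together.

```python
def sender_not(x):
    # a "sender" organism solving the simplest task, 32-bit NOT
    return ~x & 0xFFFFFFFF

def receiver_nor(msg1, msg2):
    # the "receiver" only combines the two messaged answers;
    # remove the senders and it can no longer produce NOR
    return msg1 & msg2

a, b = 0b1100, 0b1010
nor = receiver_nor(sender_not(a), sender_not(b))
```

The receiver here never computes NOT itself, which mirrors why the most capable organism in Goldsby et al.’s groups failed when isolated from its message senders.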

This was a very exciting discovery because evolutionary computation approaches to problem decomposition have been challenging to engineer, both within Avida and in other systems. We wanted to know exactly which aspects of Goldsby et al.’s project were necessary for problem decomposition to evolve. To figure this out, I started changing different parts of the environment that Goldsby et al. used.

The organisms were all being given the same three inputs to their logic tasks, and we hypothesized that this was important for the evolution of problem decomposition. This is because if the organisms were given different inputs, they wouldn’t be solving the same problems. It’d be pretty hard to combine answers when you aren’t solving the same problem, right? The organisms were also being rewarded when they sent a message that solved a task. Organisms could basically kill two birds with one stone: get more tasks counted as solved for the group’s reproduction and pass on an answer to another organism. We hypothesized that if they stopped getting rewarded for solutions in messages, it was less likely that the organisms would evolve to use messaging and thus problem decomposition would have a harder time evolving.

We were also interested in seeing if the evolution of problem decomposition in this setup would be influenced by the type of tasks the organisms were doing. We started with the basic “logic nine” tasks, which are nine logic operations that build on each other. What if the organisms had to solve math problems instead? We would hope that this method of problem decomposition is robust to the type of problems the organisms are solving, since we want to use it for all sorts of problems. To test how robust this system is, we’ve started testing the Avidians on a set of nine math problems that build on each other (like A+B and 2A+3B). We’re also testing them on the first ten Fibonacci numbers. Those tasks just require the organisms to output one of the first ten Fibonacci numbers (starting at 0). The Fibonacci numbers also build on themselves because it would be easy for the organisms to send two of the beginning numbers to another organism that could add them together to get the next number.
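One way the Fibonacci tasks could decompose (again a hypothetical sketch in Python, not the actual evolved behavior) is exactly the buildup described above: an organism that receives two consecutive Fibonacci numbers in messages only needs a single addition to produce the answer to the next task.

```python
def next_fib(msg_a, msg_b):
    # the receiving organism's entire job: add the two messaged numbers
    return msg_a + msg_b

def first_fibs(n=10):
    # build the first n Fibonacci task answers, starting at 0,
    # by repeatedly "messaging" the last two numbers forward
    fibs = [0, 1]
    while len(fibs) < n:
        fibs.append(next_fib(fibs[-2], fibs[-1]))
    return fibs
```

Each task answer beyond the first two is reachable from earlier answers, which is what makes this task suite a natural candidate for decomposition.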

All of these experiments with groups of organisms are heading toward an ultimate goal: a problem decomposition system that can automatically decompose any problem given to it. Parallel programming is when a computer program is broken into pieces that can run on different machines (or cores within one computer) in parallel. This makes the program run much faster than if each of those pieces had to be run one after another. Parallel programming is sometimes difficult for programmers because it can be hard to figure out how to break the program down. However, parallel programming is basically a problem decomposition task, since the serial (non-parallelized) program just needs to be broken down into subparts and then recombined to solve the most complex task (i.e., the whole program). If we can express the program as a task for the organisms, hopefully we’ll have a system that evolves a group of organisms that can break that program into its simpler parts. Evolution has a history of finding solutions to problems that humans didn’t think of, and who knows what these organisms will discover about how to parallelize programs.
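For contrast, here is what a conventional, hand-decomposed parallel computation looks like in Python; the decompose-and-recombine structure is exactly what we hope evolution can discover automatically. The function names and chunking scheme are my own illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # one subpart of the whole task
    return sum(chunk)

def parallel_sum(data, workers=4):
    # decompose: split the serial task into roughly equal chunks
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # run the subparts in parallel (threads here for simplicity;
    # CPU-bound work would use processes to span multiple cores)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # recombine: merge the partial answers into the full solution
    return sum(partials)
```

A human had to decide where to cut the data and how to merge the results; an evolved decomposition system would make both of those choices itself.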

For more information about Anya’s work, you can contact her at anyaejo at gmail dot com.

About Danielle Whittaker

Danielle J. Whittaker, Ph.D. Managing Director of BEACON
