BEACON Researchers at Work: Making synthetic viruses to study evolution

This week’s BEACON Researchers at Work blog post is by University of Idaho postdoc Martina Ederer.

After working on many different projects in a number of different labs, I always seem to come back to the study of evolution. Hence, I was very excited about joining the Wichman lab at the University of Idaho (for the second time) about a year ago.

Our research in the lab revolves around genome evolution, that is, how the genetic code or DNA of an organism changes over time and what the consequences of these changes are with respect to the fitness of the organism. We measure fitness in terms of how fast the organism reproduces. We work with phiX174, a small virus that infects and kills the bacterium E. coli. This small organism is very handy for studying evolution because it has a short generation time and is easy to grow in the lab. For example, if we start out with 1,000 viruses and let them grow for 30 min., we end up with 1,000,000 viruses. PhiX174 does not encode all of the tools it needs to replicate, so it uses E. coli to make its DNA and proteins. Some of these new viruses have ‘mistakes’ (mutations) in their DNA because the copying of the DNA is not always 100% accurate. Most of these changes are ‘bad’ for the organism, reducing its fitness and maybe even causing death, but a few of these changes are ‘good’ for the organism and allow it to reproduce faster in the given environment than the ancestor, measured as an increase in fitness.

Now we can study multiple environments (i.e., increase or decrease temperature, provide a new host, etc.) and determine which mutants are best adapted and have a higher fitness than the ancestor. Such mutants rapidly dominate the population. Since we have grown this virus in our lab for many years and have exposed it to many different growth conditions, we have quite a collection of mutants in the lab that are adapted to new growth conditions. For example, we have a number of mutants that can infect a new host, a particular strain of the bacterium Salmonella. These mutants all have three or more changes in their genomes. Now, since these mutants arose ‘naturally,’ some of these changes may not be necessary for growth on the novel host, but just hitchhiked along because they happened not to negatively affect the virus’s fitness. We are interested in finding out which of the changes allow the virus to grow on the new host and which ones are only coincidental changes that have no impact on the fitness of the organism on this host.

Novel cassette-assembly system for phiX174.

To do this we constructed a ‘cassette system’ that allows us to assemble the genome of the virus from 14 different DNA pieces in vitro (in a test tube) and then use it to infect the E. coli host. This allows us to construct viruses with each single mutation as well as all possible combinations for a given pathway. For example, we have isolated a mutant virus from previous experiments that is able to grow on Salmonella as a new host in addition to the old host, E. coli. We identified three changes (A, B, and C) that we think are important for growth on the new host. Using this novel cassette system, we constructed viruses with the single mutations A, B, or C; the double changes AB, AC, or BC; and the triple ABC. Now we can test the phenotype of each mutant virus, for example how fast, if at all, each one will grow on the new host, and which changes allow the virus to bind to the new host cell. These studies, when interpreted in the ecological context of the initial experiments, may lead to a set of general rules for the evolution of host switching in pathogenic viruses important for human health. Just think of the emergence of diseases caused by HIV, Ebola, and SARS, all viruses that switched from their native hosts to humans.
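
To make the combinatorics concrete: with three candidate mutations, the cassette system has to produce every subset of {A, B, C}, eight genotypes in all. Here is a minimal Python sketch that simply enumerates that genotype space; the labels are the placeholders used above, not real nucleotide positions.

    from itertools import combinations

    # Placeholder labels for the three candidate mutations described above.
    mutations = ["A", "B", "C"]

    # Every subset: the wild type, each single, each double, and the triple.
    # For n candidate mutations there are 2**n genotypes to assemble.
    genotypes = [
        set(combo)
        for r in range(len(mutations) + 1)
        for combo in combinations(mutations, r)
    ]

    for g in genotypes:
        print("".join(sorted(g)) or "wild type")

With five candidate mutations this would already be 32 viruses, which is why assembling each genotype by hand, without a cassette system, quickly becomes impractical.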

Another, equally exciting application of this cassette virus assembly system is that we can use it to ‘create’ viruses with a designed genome. Our latest endeavor is the recreation of the ancestral phiX-like virus. When we isolate phiX-like viruses from the environment, we find that they differ from each other at many nucleotides (building blocks). This is what we would expect since, as I mentioned above, the copying of the genome introduces mistakes that can lead to a virus better adapted to a given environment. We looked at a number of different genomes, aligned them to determine the changes that they have accumulated, and used a computer program to reconstruct the sequence of the ancestral virus, that is, a virus that lived in the past and gave rise to the current diversity of phiX-like viruses but is not part of the current population. Now we are in the process of assembling this ‘dinosaur’ phage, and we will study its fitness and various phenotypes. Further, we can evolve it in the lab under different conditions and monitor how its genome changes. Will it evolve to be more like our present-day laboratory-adapted phiX174?
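
The reconstruction itself is done with phylogenetic software on full genome alignments, but the core logic can be shown with a toy version of Fitch parsimony for a single site; the tree and nucleotides below are invented for illustration, and real analyses use maximum-likelihood methods.

    # Toy per-site Fitch parsimony on a fixed tree. Leaves are observed
    # nucleotides; each internal node gets the intersection of its children's
    # state sets when possible, otherwise their union.

    def fitch(node):
        """Return the set of equally parsimonious states at this node."""
        if isinstance(node, str):            # leaf: one observed nucleotide
            return {node}
        left, right = (fitch(child) for child in node)
        common = left & right
        return common if common else left | right

    # ((A, A), (G, A)): two pairs of present-day viruses joined at the root.
    tree = (("A", "A"), ("G", "A"))
    states = fitch(tree)
    print(states, "->", min(states))         # {'A'} -> A: the inferred ancestor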

Take a look at some artistic renditions of this beautiful small organism, phiX174.

Photograph of necklace representing the genome of phiX174

1977 Genome (2007). By Holly Wichman, 7” x 7”, Strung Swarovski crystals, Bali silver beads, sterling clasp, sterling spacers, wire. This necklace represents the genome organization of phiX174 – the first genome to be fully sequenced, published in Science in 1977. Each of the eleven genes is shown in a different color; black crystals are spacers. Intergenic regions are in silver. The necklace shows the overlapping reading frames, observed for the first time in this virus.

Photograph of beaded art piece called "Counting Plates"

"Counting Plates," by H. Wichman / photo credit John Brunsfeld and Darin Rokyt

For more information about Martina’s work, you can email her at mederer at uidaho dot edu.


BEACON Researchers at Work: Experimental co-evolution in a virus and its host

A bonus Wednesday blog post! Today’s BEACON Researchers at Work post is by MSU graduate student Justin Meyer.

For as long as I can remember I have been fascinated by the natural world. Whether it is the chameleon with its capacity to change colors, ants’ collective behavior, or viruses’ ability to hijack the machinery of cells, the great variety of organisms alive has inspired my curiosity. As I became more interested in biology I learned that despite the complexity of life forms, there are physical processes, such as evolution by natural selection, responsible for producing them. With this awareness I realized that there was in fact reason behind the seeming chaos of a jungle or an eclectic coral reef. From then on my interest in the natural world extended beyond learning about organisms to understanding how evolutionary processes such as natural selection shaped them.

Today I study the evolution of a particular group of organisms, viruses, and how they evolve to exploit new hosts. To do this I study how one particular virus, called phage λ (“lambda”), evolves to exploit new genotypes of its host, the bacterium Escherichia coli. I perform my studies by co-culturing the virus and its host in the lab and observing how the virus adapts by genetic mutations to better exploit its host. Since viruses have a short generation time and high mutation rate, I am able to watch the evolutionary process occur over days and weeks rather than the millennia that would be required for long-lived organisms. I find this research particularly rewarding because I am able to watch the hidden evolutionary processes that shape the world’s biodiversity.

One of the most interesting results I have found came when I observed λ evolve to exploit a new receptor on the outer membrane of E. coli. Receptors are proteins or other cellular features that viruses use to recognize their host cells, as opposed to other cells that they cannot exploit. λ also uses the receptor to bind to the cell and inject its DNA into the host. Once the viral DNA enters the cell, it tricks the host into making more viruses rather than growing itself. Eventually the cell becomes overrun by viruses, explodes, and releases the new virions. Receptors can be viewed as viruses’ gateway into their hosts and play a major role in determining which species they infect. Therefore, evolving to exploit a new receptor is an important event in the evolution of viruses.

I am interested in studying this process for a number of reasons. First, explaining how organisms evolve novel functions daunted even Darwin and remains a relatively unexplained phenomenon today; I hope that my research will help fill this gap in evolutionary biology. Second, these transitions mark important events in the evolution of disease, and by studying them we may be able to design techniques to predict, monitor, or stifle the emergence of future diseases. Last, learning the ecological and evolutionary pressures that drive the evolution of novel functions could have bioengineering applications, such as engineering viruses to deliver genetic medicine or bacteria to remediate pollution.

The process by which λ evolved the new function was pretty fascinating. First, the event that triggered λ’s evolution was not an evolutionary change in λ itself, but a change in its host. E. coli evolved resistance to λ infection by turning off production of the receptor protein, named LamB. This was accomplished through mutations in malT, a gene that up-regulates LamB. By knocking out the receptor, E. coli gains complete immunity to the virus. Fortunately for λ, the resistance is not perfect, and occasionally an unlucky cell expresses LamB and becomes infected by λ. The virus is therefore able to reproduce and maintain a modest population; however, it experiences pressure to improve on the resistant E. coli and regain its abundance. At this stage the virus acquires mutations that improve its binding to the very rare LamB molecules. These mutations are thought to make the virus less picky, so that if it encounters a LamB molecule, it does not miss the opportunity to attach to it and infect a cell. Eventually a combination of four mutations evolves that together confer on λ the ability to exploit a new outer-membrane protein, OmpF, in addition to LamB. Interestingly, this sequence only happens in a quarter of the experimental trials. The other three quarters of the time, λ remains reliant on LamB because the host cells evolve a second round of resistance mutations that block the virus from infecting even if it acquires the ability to target OmpF.
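
To see why the outcome is contingent, it helps to picture the situation as a race: λ must stack four binding mutations before the host finds its second resistance mutation. The Monte Carlo below is a deliberately cartoonish sketch of that race; the per-generation probabilities are invented (tuned only so the toy lands near the observed one-in-four), and nothing else about the real system is modeled.

    import random

    def trial(p_phage=0.010, p_host=0.004, generations=1000):
        """One toy race: does lambda stack 4 mutations before host resistance?"""
        have, needed = 0, 4
        for _ in range(generations):
            if random.random() < p_host:
                return False          # second host resistance evolves first
            if random.random() < p_phage:
                have += 1
                if have == needed:
                    return True       # lambda gains the OmpF innovation
        return False                  # nothing happened in time

    wins = sum(trial() for _ in range(10_000))
    print(f"lambda innovates in {wins / 10_000:.0%} of toy runs")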

Diagram showing process of evolution in host and phage

There are two notable findings from this work. The first is that mutations evolved to improve an ancestral function can be co-opted for a new function. This observation shows that evolving new functions may be easier than often assumed if old parts can be put together to create new innovations. Second, whether λ evolves this key innovation is not dependent on the evolutionary path it takes, but instead is dependent on the host. One set of mutations in the host promotes the viral evolution while another set stifles it. This adds an interesting twist to Stephen Jay Gould’s thought experiment on the contingent nature of the tape of life and suggests that there are ways to intervene in the process of viral host jumps.

For more information on Justin’s work, contact him at meyerju3 at msu dot edu.


2011 BEACON Annual Report

The 2011 Annual Report is now available, describing all the great research, education & outreach, knowledge transfer, and diversity initiatives that BEACONites have been doing over the past year!

Click here to download the PDF.


BEACON Researchers at Work: Preventing accidents with evolutionary computation

This week’s BEACON Researchers at Work post is by MSU graduate student Andres Ramirez.

Recently, I found myself driving on the wrong side of the road. No, I did not fall asleep. I drove through some parts of New Zealand, where the custom is to drive on the left side of the road. While this experience was exciting, it was also awkward, as I have always driven on the right side of the road. Although I adapted to these new driving conditions, would it not have been great if the vehicle had driven itself instead? In the near future, this might actually become a reality. Specifically, an intelligent vehicle system (IVS) provides adaptive cruise control, lane keeping, and collision avoidance features. As such, an IVS is intended to provide autonomous navigation to facilitate the safe and efficient transportation of passengers across major roadways.

As it turns out, an IVS is just an instance of a more general type of application known as a dynamically adaptive system (DAS). In particular, a DAS uses its sensors to first measure properties about itself and its execution environment at run time. This monitoring information enables the DAS to identify when it should self-adapt in response to changes in its environment. If an adaptation is necessary, then the DAS determines when and how to change the structure and behavior of the application in order to continue satisfying its client’s objectives. Though this is a rather simplistic description of a DAS, designing and implementing a DAS is actually an extremely difficult task.
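
In code, the skeleton of that monitor-analyze-adapt cycle might look like the sketch below; the sensor names, the one-second-headway threshold, and the adaptation itself are invented stand-ins, not parts of any real IVS.

    import random

    def read_sensors():
        # Monitor: stand-in telemetry; a real DAS reads hardware sensors.
        return {"gap_m": random.uniform(5, 50),
                "speed_mps": random.uniform(20, 35)}

    def needs_adaptation(state):
        # Analyze: is time headway to the car ahead under one second?
        return state["gap_m"] / state["speed_mps"] < 1.0

    def adapt(state):
        # Plan/execute: switch behavior; here, just back off the target speed.
        return {"target_speed_mps": state["speed_mps"] - 5}

    config = {"target_speed_mps": 30}
    for _ in range(10):                     # the run-time loop
        state = read_sensors()
        if needs_adaptation(state):
            config = adapt(state)
    print(config)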

A key challenge in successfully engineering a DAS is being able to anticipate conditions that might warrant adaptation at run time. This challenge arises because of uncertainty in both the execution environment and the DAS itself. Specifically, it is often infeasible, sometimes even impossible, for a human designer to identify all possible combinations of environmental inputs that a DAS will encounter throughout its lifetime. For instance, humans can interact with a DAS in unpredictable and undesirable ways. Similarly, the monitoring information that a DAS analyzes to detect conditions that warrant adaptation is only “as good” as the sensors it uses to collect that data. These sensors, however, can be imprecise, inaccurate, and unreliable. This uncertainty about what the DAS perceives about its environment can severely limit the adaptation capabilities of the DAS.

To address these concerns, we designed and implemented Loki, an evolutionary computation technique that can automatically discover combinations of system and environmental conditions that prevent a DAS from satisfying its objectives and requirements. Loki’s primary objective is to alter how a DAS perceives its environment at run time such that it self-adapts in undesirable ways. In contrast to other approaches and techniques for evaluating how a DAS responds to different system and environmental stimuli, Loki leverages evolutionary computation techniques to discover interesting combinations of system and environmental conditions that produce undesirable behaviors in a DAS. In particular, Loki is capable of discovering both requirements violations and latent behaviors (unknown behaviors). While a requirements violation prevents the satisfaction of a given design-level objective, a latent behavior manages to satisfy requirements while introducing previously unknown and potentially undesirable behaviors. To achieve these objectives, Loki applies the concept of novelty search to generate environmental conditions that produce the most distinct behaviors in a DAS from those already examined. A key benefit of applying novelty search is that it enables Loki to generalize many behaviors into a more manageable set of representative behaviors that a requirements engineer can manually inspect.
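
Novelty search itself is simple to state: score candidates not by an objective, but by how far their observed behavior lies from behaviors already seen. The sketch below shows that loop; the candidate encoding and the two-number behavior descriptor are schematic placeholders, since the post does not describe Loki's internals at that level.

    import math
    import random

    def behavior(candidate):
        # Stand-in for "run the DAS under these conditions, record behavior."
        return [sum(candidate), max(candidate) - min(candidate)]

    def novelty(b, archive, k=5):
        # Mean distance to the k nearest behaviors seen so far.
        dists = sorted(math.dist(b, a) for a in archive)
        return sum(dists[:k]) / min(k, len(dists))

    archive = [behavior([random.random() for _ in range(4)]) for _ in range(10)]
    population = [[random.random() for _ in range(4)] for _ in range(20)]

    for _ in range(50):
        ranked = sorted(population, key=lambda c: novelty(behavior(c), archive),
                        reverse=True)                  # most novel first
        best = ranked[:10]
        archive.extend(behavior(c) for c in best[:2])  # grow the archive
        population = [[g + random.gauss(0, 0.1) for g in random.choice(best)]
                      for _ in range(20)]              # mutate survivors

    print(len(archive), "archived behaviors")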

Diagram of car, sensors, and environment implemented in Webots

We applied Loki to a simulated IVS prototype that we implemented in the Webots simulation platform. The results obtained thus far have been positive and encouraging. When compared with other testing techniques, such as randomized testing, Loki managed to discover a significantly larger number of distinct requirements violations and latent behaviors. For instance, Loki discovered a set of system and environmental conditions that prevented the IVS from accurately computing its current velocity, thereby producing a collision with another vehicle in front of the IVS. Similarly, Loki also discovered a slightly different set of system and environmental conditions that produced a more complex interaction between the adaptive cruise control and lane keeping requirements. Specifically, after the IVS collided with the vehicle in front of it, as in the previous example, it departed from its lane and then, in an attempt to re-satisfy its lane-keeping objectives, side-swiped the same vehicle several times as it steered towards its original lane.

Applying Loki enables a requirements engineer to analyze the set of system and environmental conditions that produce different kinds of undesirable behaviors in a DAS. This information can guide the revision of either the requirements or the design of the DAS in order to disallow these undesirable behaviors. For instance, in our IVS case study, we identified a set of system and environmental conditions that frequently affected distance sensors in the IVS.  Without an accurate estimate of the distance between the IVS and another vehicle in front, the adaptive cruise control module in the IVS might be unable to prevent a collision. Based on this information, we revised the process that the IVS uses to compute its distance to vehicles in front such that the computation is more robust to sensor noise and failures.

In the future, we will investigate how to combine Loki with a probabilistic framework for evaluating the partial satisfaction of requirements. In addition, we will also explore how to leverage the behaviors discovered by Loki to automatically refine a requirements model of the DAS. Perhaps driving through New Zealand will be easier next time?

For more information about Andres’ work, you can contact him at ramir105 at cse dot msu dot edu.


BEACON Researchers at Work: Video game control with evolved neural networks

This week’s BEACON Researchers at Work blog post is by UT Austin graduate student Jacob Schrum.

I often find myself running wildly through the darkened corridors of some decommissioned mining facility, rocket launcher in hand, leaping madly about the hostile arena while trying to dodge bolts of lightning and hot shards of shrapnel directed at me by my enemies. As these and other projectiles bite into my skin, I feel myself weakening, and decide to flee combat in search of medical aid or a shield to protect me. Once rejuvenated, I throw myself back into the fray, but must first seek out my opponents. I take advantage of the high ground to launch a surprise attack on several skirmishers, leaving only one with whom I have a protracted battle. We’re evenly matched, but luck is not on my side, so I eventually succumb. I find myself instantly resurrected and flung back into combat to repeat the process.

The First-Person Shooter video game Unreal Tournament 2004, in which agents require high-level strategic behavior combined with low-level responses in order to succeed.

This is what it feels like to play the First-Person Shooter action game Unreal Tournament: at any given moment, some high-level strategy needs to be chosen (fight, seek aid, search for opponents) before the low-level details of that strategy can be carried out (shoot, move, jump, look). My name is Jacob Schrum, and along with Risto Miikkulainen at the University of Texas at Austin, I research methods for evolving such strategic, multimodal behavior in video games like Unreal Tournament. Specifically, I evolve neural networks, which are simple models of the brain that serve as universal function approximators, and use them as control policies for agents in video games. The networks I evolve consist of artificial neurons that can be linked together in arbitrarily complex topologies by synaptic connections of varying strengths. Knowledge of how to behave is stored in the structure of the network and the strength of its connections. Such networks start out very simple, but gradually complexify throughout the course of evolution; hence this method is known as Constructive Neuroevolution. My research focuses on improving Constructive Neuroevolution methods so that they can automatically learn multiple modes of behavior in complex video games. Video games are ideal environments in which to design and test evolutionary methods, since they still contain much of the complexity of real-world environments, yet are controlled in a way that the real world is not. Such environments can serve as stepping stones to real-world applications in robotics. However, existing learning methods in both video games and robotics tend to require humans to specify some sort of task decomposition in order for them to have a chance at solving complex problems requiring multiple modes of behavior.
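
To make "starts simple, then complexifies" concrete, here is a toy mutation step in the spirit of constructive neuroevolution, close to NEAT-style add-connection and add-node operators; it is a generic sketch, not the exact scheme used in this research.

    import random

    random.seed(1)
    nodes = ["in0", "in1", "out0"]                         # minimal topology
    conns = [("in0", "out0", 0.5), ("in1", "out0", -0.3)]  # (src, dst, weight)

    def add_connection():
        # Wire a random node into the output (kept simple to stay acyclic).
        conns.append((random.choice(nodes), "out0", random.gauss(0, 1)))

    def add_node():
        # Split an existing connection: src -> new hidden neuron -> dst.
        src, dst, w = conns.pop(random.randrange(len(conns)))
        new = f"h{len(nodes)}"
        nodes.append(new)
        conns.append((src, new, 1.0))      # roughly preserve behavior
        conns.append((new, dst, w))

    for _ in range(3):                     # three complexifying mutations
        (add_node if random.random() < 0.5 else add_connection)()
    print(len(nodes), "nodes,", len(conns), "connections")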

Jacob Schrum presenting his work at the recent IEEE Computational Intelligence and Games conference in Seoul, South Korea.

An agent in Unreal Tournament would generally need separate modules for combat, path finding, and retreating. These modules would themselves be made up of smaller behaviors, such as an action for approaching items which could be used to approach health items while retreating, or to approach weapons and ammo while exploring the level with the path-finding module. Some modules like the combat module could even be broken down into more modules, such as a module to use when sniping opponents from a distance and a separate module for attacking opponents using rapid fire weapons. This tangle of modules becomes complicated very quickly, and therefore harder to construct manually. Learning how to break up a task into multiple subtasks automatically would spare humans the hassle of designing the hierarchy manually, and could also result in unexpected ways of breaking up the task which are actually more effective than what a human would do.

Two forms of Mode Mutation, by which traditional neural networks can gain additional output modes in order to tackle games with multiple tasks.

My approach to evolving multimodal behavior involves allowing neural networks to possess multiple output modes, ideally such that one network mode corresponds to each mode of behavior required in the target task. Such modes can be manually assigned to each task, but as mentioned above, such manual assignment requires lots of human engineering, and can often divide the domain in a manner that does not represent the most effective task assignment for learning. Therefore, networks are allowed to evolve new output modes as needed, and also have control over which mode to use at any given time, thus producing multimodal behavior without any knowledge about how to break up the domain into component tasks. Furthermore, because these “multimodal” networks share common sub-structures, information that is needed in multiple tasks can be shared across modes, which in turn accelerates learning. Such methods will lead to complex multimodal behavior in various domains: from classic games such as Ms. Pac-Man, all the way to complex modern games like Unreal Tournament.
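
In miniature, the idea looks like this: one block of outputs per behavioral mode, a preference value that arbitrates between modes each timestep, and a "mode mutation" that clones an existing mode as the seed of a new one. The single-layer networks and sizes below are simplifications for the sketch, not the actual architecture.

    import random

    N_IN, N_ACT = 4, 3                     # sensors; actions per mode

    def new_mode(base=None):
        # Each mode: N_ACT action rows plus one preference row of weights.
        if base is None:
            return [[random.gauss(0, 1) for _ in range(N_IN)]
                    for _ in range(N_ACT + 1)]
        # Mode mutation: duplicate an existing mode, slightly perturbed.
        return [[w + random.gauss(0, 0.1) for w in row] for row in base]

    def act(modes, inputs):
        outs = [[sum(w * x for w, x in zip(row, inputs)) for row in mode]
                for mode in modes]
        chosen = max(outs, key=lambda o: o[-1])   # highest preference wins
        return chosen[:N_ACT]                     # that mode's actions

    modes = [new_mode()]
    modes.append(new_mode(base=modes[0]))         # a mode-mutation event
    print(act(modes, [0.2, -0.1, 0.7, 0.0]))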

To learn more about Jacob’s work, you can contact him at schrum2 at cs dot utexas dot edu.


Announcing BEACON Distinguished Postdoctoral Fellows Program

Interested in joining BEACON as a postdoc?

BEACON Distinguished Postdoctoral Fellows Program

BEACON is an NSF Science and Technology Center headquartered at Michigan State University with partners at North Carolina A&T State University, University of Idaho, University of Texas at Austin, and University of Washington. BEACON brings together biologists, computer scientists, and engineers to study evolutionary dynamics using biological and computational techniques and to apply evolutionary principles to engineering problems. We seek outstanding post-doctoral scholars to pursue interdisciplinary research on evolution in action with BEACON faculty members, in the fields of biology, computer science, and/or engineering.

Applicants will propose a research project within the scope of BEACON’s mission and must have two BEACON faculty sponsors who will serve as research mentors should the fellowship be awarded. At least one sponsor must be from the MSU faculty; the other sponsor may be from any of the five BEACON institutions. Preference is given for interdisciplinary research. The postdoc fellow will be based at Michigan State University in East Lansing. Please see our website (http://www.beacon-center.org) for information about BEACON’s mission, participants and ongoing research projects.

Applicants must submit the following, in a single PDF, to BEACON Managing Director Danielle Whittaker via email (djwhitta@msu.edu):

  1. CV
  2. A two-page description of their research plan
  3. A one-page summary of their doctoral research
  4. Letters of support from two BEACON sponsors (at least one must be from MSU)
  5. Two additional letters of recommendation [the letters can be sent separately by the recommenders: send to djwhitta at msu dot edu]

Fellowships are for two years and include a salary of $50,000/year and modest funds to support research and travel. The successful applicant will help foster collaborations among faculty and disciplines and serve as a professional model for pre-doctoral trainees.

A Ph.D. in biology, computer science, engineering or related fields is required. Current MSU graduate students or postdocs are not eligible for this fellowship. Minority applicants are especially encouraged to apply. MSU is an Equal Opportunity/Affirmative Action Employer.

The deadline for applications is December 15 of each year. Finalists will be invited to give research seminars in January/February, and the award will be announced in late February.


BEACON Researchers at Work: Hyena Poop Patrol

This week’s BEACON Researchers at Work post is by MSU graduate student Andy Booms.

For the past few months I’ve been searching Kenyan protected areas for spotted hyenas and their poop, which I collect. Each time I arrive at a new site I take a drive around and listen for the hyena’s characteristic whoop. Once I get a rough idea of where they are hanging out, I can start searching for individual animals or the dens and other areas where they are likely to poop. When I find some poop worth collecting, I put on my gloves, prepare my storage tubes, snap a popsicle stick in half (the perfect tool for this job), scrape the surface of the sample with the popsicle stick to collect sloughed-off cells, and place the scrapings in a tube. Once this is done, I pack up and continue the search for samples from other individuals. The rest of the process – DNA extraction and genetic analysis – takes place back in the lab. This probably sounds like some sort of punishment, the work detail that nobody else wanted. I could take my dog for a walk at home, bring along some plastic baggies, and have a much easier time collecting samples. So why am I here?

As unglamorous as they may seem, fecal samples are actually a great, non-invasive way to collect DNA from wild animals. As the children’s book suggests, everybody (animals included) poops. I don’t have to tranquilize the hyenas. I don’t have to poke, prod, or handle them in any way. All I have to do is find them, follow them from a comfortable distance, and wait for nature to take its course. Using fecal samples as a source of DNA saves the hyenas from the potential stress of difficult dartings, especially in areas where the animals are unused to people and their vehicles. It also saves me from the stress of sedating animals and taking their well-being into my hands, which primarily means finding a safe spot to put them where they can sleep off the effects of the drugs without being harassed by lions.

Okay, so poop is an easy way to get DNA. But why do I want hyena DNA? First, let me address hyenas. Spotted hyenas are common throughout most of sub-Saharan Africa, both inside and outside of protected areas. They are also relatively resilient to various forms of human disturbance. I can even lie in bed at night in suburban Nairobi and hear hyenas whooping on occasion. Other large carnivore species, such as cheetahs and lions, don’t seem to fare as well under such human pressure. And the difficulty for a developing country like Kenya is that it’s these more sensitive species, not the spotted hyena, that draw the tourists (and their money). My goal in embarking on this great poop patrol in Kenya has been to collect hyena DNA from protected areas across the southern half of Kenya, which I can then use to assess the genetic health of the hyena populations within each protected area. I want to know whether hyenas from a given area are being isolated, both physically and genetically, from hyenas in other areas by things such as towns, fences, and active persecution by local livestock herders. If such barriers are isolating spotted hyenas, they are almost surely negatively impacting the more sensitive, and economically important, species like cheetahs and lions. For the sake of both wildlife conservation and a healthy tourism industry, management steps will then need to be taken to ensure the maintenance of healthy carnivore populations in Kenya’s parks and other protected areas, if not outside of them as well.
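
For the curious, the flavor of statistic behind such isolation questions is something like F_ST, which compares genetic diversity within populations to diversity overall; higher values mean less gene flow. The allele frequencies below are invented, and real studies use many loci and more careful estimators than this one-locus illustration.

    def fst(p1, p2):
        """Two-population, two-allele F_ST from allele-A frequencies p1, p2."""
        hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-pop heterozygosity
        pbar = (p1 + p2) / 2
        ht = 2 * pbar * (1 - pbar)                        # total heterozygosity
        return (ht - hs) / ht if ht else 0.0

    print(fst(0.5, 0.5))   # 0.0  -> populations freely mixing
    print(fst(0.9, 0.1))   # 0.64 -> strongly differentiated, i.e., isolated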

Now, why is it that I would travel thousands of miles to personally pick up hyena poop? First and foremost, I strongly believe in conservation efforts, especially in areas like Kenya where there is still much wildlife and the battle is not quite so uphill. Many Kenyans, whether directly tied to conservation or not, are both proud of and passionate about their country and its wildlife. Kenya is filled with national parks, national reserves, private wildlife conservancies, and group ranches, all dedicated to providing wildlife with places to roam. Kenyans are increasingly coming to the realization that improving the situation for wildlife can benefit the economy, from the local level all the way to the national level. Over the long-term, I hope this spells success for much of Kenya’s wildlife and its habitats.

On a less idealistic level, I also like the adventure of this sort of research. The DNA that I extract from poop is the valuable part of this sample collection process, but being in the field and actually collecting the samples is where the excitement is. I get to travel from park to park, see species that I’ve never seen before in my life, sleep in the middle of the wilderness with only the thin fabric of a tent wall separating me from any creature that comes my way in the night, and call it research. Sure, there are low points too: having car trouble, getting stuck in the middle of the bush at night and having to walk to get help (while imagining every shadow to be one of the lions or hyenas that I heard vocalizing in the distance), finding less-than-cooperative hyenas. It’s all part of the adventure, though. A person can’t help but gain a greater sense of independence after spending time over here. And I’ll leave, hopefully in one piece, with lots of stories to tell my kids someday.

For more information about Andy’s work, you can contact him at boomsan1 at msu dot edu.


BEACON Researchers at Work: Evolving digital organisms in physically realistic environments

This week’s BEACON Researchers at Work post is by MSU undergraduate student Jacob Walker.

In the fall of my freshman year of college I joined a laboratory that would define my undergraduate career. I became an assistant for the Evolving Intelligence Group under Dr. Robert Pennock. My first task was to take the digital organisms evolved in Avida, an artificial life platform, and place their digital DNA into physical robots to observe their behavior. Artificial life is an interdisciplinary field that investigates life in artificial environments, either through simulation in computers or through biochemistry. Avida is a software program used by many BEACON participants in which digital organisms similar to bacteria reproduce and evolve in a virtual Petri dish. My research experience in the lab and subsequently in BEACON has revealed to me how biology can inform artificial intelligence. Many of the solutions that these organisms evolved to solve tasks have captivated me with their ingenuity, leading me to pursue a research career in Computer Science. I am currently pursuing three B.S. degrees, in Computer Science, Mathematics, and Economics, at Michigan State University.

Organism DNA in Avida consists of a list of computer instructions. Avidians are in essence computer programs embedded in a flat grid, executing clean, distinct steps to carry out an action. This program has proved extremely valuable to the study of digital and biological evolution. However, I asked myself a question: how could artificial life extend to environments that are physically realistic? In the physical world, life must deal with sensory input that is constantly, smoothly changing. Complex vision, touch, and motor control use signals that simply cannot be reduced to a series of whole numbers. Additionally, animals’ nervous systems are linked to their bodies. Brains receive inputs, process information, and send output in terms of the bodies in which they are embedded. There is no machinery in the human brain to process the eight legs of a spider, nor could a dog brain manipulate human-like fingers. Sometimes the structure of the body itself can carry out implicit computation. For example, with robotic walking legs, the weight distribution of the robot’s body is essential for successful motion.
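
To give a flavor of what "a genome made of instructions" means, here is a toy virtual CPU running a circular genome; the three-instruction set is invented and vastly smaller than Avida's.

    # A circular genome of simple instructions, executed by a tiny virtual CPU
    # with two registers. Mutations would edit this list in place.
    genome = ["inc", "inc", "add", "nop", "inc", "swap"]

    def run(genome, steps=12):
        ax, bx = 0, 0
        for step in range(steps):
            op = genome[step % len(genome)]    # the genome is circular
            if op == "inc":
                ax += 1
            elif op == "add":
                ax += bx
            elif op == "swap":
                ax, bx = bx, ax
            # "nop" does nothing, but a mutation could make it useful.
        return ax, bx

    print(run(genome))                         # (3, 6)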

There have been many artificial life researchers before me who have asked this question. One prominent pioneer is Karl Sims, who in 1994 revolutionized the field with his block-creatures. Sims co-evolved organisms’ brains and bodies, giving these creatures many degrees of freedom. Organisms often evolved into creatures that bore a striking resemblance to those in the real world. Cubic fish, sea snakes, and even a turtle emerged from experiments selecting for swimming ability. He also bred creatures to chase lights in arbitrary directions, steal cubes, and jump in the air.

Karl Sims’s work spawned a multitude of other experiments involving artificial life in simulated physical environments. However, there has been no software platform in the field equivalent to Avida in terms of its versatility, maturity, and power. Thankfully, my research ambitions were saved by a graduate student of Dr. Chris Adami, Nicolas Chaumont. Nicolas was working on a new, powerful artificial life program called EVO that evolved block-creatures in a manner similar to the experiments of Karl Sims.

I now had the means to conduct some artificial life experiments in a three-dimensional environment. I first attempted to repeat some of the exact experiments of Karl Sims with positive results. My creatures evolved for swimming exhibited bodies similar to tadpoles and sea snakes. Although it took some additional time and effort, I was also able to breed organisms that were able to locate and swim towards light sources in any part of their virtual ocean. My work did not stop here, however.

Karl Sims and later researchers described the emergence of these structured, intelligent organisms, but few if any have tried to understand how these block-creatures work. What computations are performed to achieve these behaviors? How much does the “brain” of the organism depend on the body’s particular shape and size? I dared to venture into the byzantine neural networks of these evolved organisms and found that the solutions these creatures used were surprisingly, even ingeniously, simple. Some of these organisms had 30 to 40 neurons performing complex mathematical functions, but only 2 to 3 actually generated the behavior of the organism. The rest were simply nonfunctional material, not directly useful to the organism.

Some evolved light-chasing organisms.

It appears that evolutionary processes can yield solutions that defy human intuition. This was definitely the case with my evolved light-chasing organisms. Organisms are able to “see” a light based only on three numbers that encode the direction of the light relative to the organism. The first value indicates whether the light is in front or behind, the second right or left, and the last above or below. One would expect an organism to check all of these values while searching for the light. In reality, organisms evolved controllers that checked only one of these values. This is all that they need to find the light. How are they able to search for a light in any direction? They twist. They all feature a constant, random twisting movement in their behavior. Through this random twisting motion, they are eventually able to face the light and swim towards it. This way, the question for the organism becomes much simpler: “Is the light in front of me or not?” If not, keep twisting. If so, then move forward. In short, I believe that intelligent artificial life has the potential to teach us both how biological life works and how intelligence may be implemented in machines.
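
The evolved strategy is simple enough to write down directly. Below is a toy one-dimensional-heading version of it; the real creatures twist and swim in three dimensions under evolved neural controllers, so this captures only the logic, not the mechanics.

    import math
    import random

    heading = random.uniform(-math.pi, math.pi)   # angle between nose and light
    for step in range(1000):
        if math.cos(heading) > 0.9:               # the one value that matters
            print(f"facing the light after {step} steps; swim forward")
            break
        heading += random.gauss(0, 0.3)           # otherwise, keep twisting
        heading = (heading + math.pi) % (2 * math.pi) - math.pi
    else:
        print("still twisting after 1000 steps")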

For more information about Jacob’s work, you can contact him at walke434 at msu dot edu.


TED Talk: Christoph Adami – Finding life we can't imagine

Watch the new TED talk by BEACON’s Chris Adami!


BEACON Researchers at Work: Using evolutionary computation to enhance breast tumor recognition in microwave images

This week’s BEACON Researchers at Work blog post is by MSU graduate student Blair D. Fleet.

In 2010, I received my B.S. in Electrical Engineering from Morgan State University. During a college visit for graduate school, I was introduced to evolutionary computation (EC) through a presentation given by Dr. Erik Goodman. As a result of that presentation, I saw the endless possibilities of using EC to solve and optimize a variety of engineering problems. Evolutionary computation encompasses optimization techniques inspired by evolution and nature, such as genetic algorithms and particle swarm optimization. From then on, it became a goal of mine to concentrate my future studies on using evolutionary computation algorithms to solve electrical engineering problems. I am currently pursuing my M.S. in Electrical Engineering at Michigan State University with a concentration in Signal Processing and Evolutionary Computation.
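
For readers new to EC, here is a bare-bones genetic algorithm; maximizing the number of ones in a bit string is a classic stand-in for a real engineering objective, and every parameter below is just a demonstration value.

    import random

    def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=sum, reverse=True)        # fitness = number of ones
            parents = pop[: pop_size // 2]         # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_bits)  # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([bit ^ (random.random() < p_mut)  # mutation
                                 for bit in child])
            pop = children
        return max(pop, key=sum)

    best = evolve()
    print(sum(best), "ones out of", len(best))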

I’ve recently decided to pursue my Ph.D. in Electrical Engineering, a decision heavily influenced by an opportunity that Dr. Goodman, the director of BEACON, presented to me. He told me about a collaboration among himself; Prof. Meng Yao, the principal investigator (PI) from East China Normal University in Shanghai, China; and Dr. John Deller, a Michigan State University professor whose research concentration is signal and speech processing. The main goal of this collaboration is to use evolutionary computation to enhance and optimize the signal and image processing techniques used in BRATUMASS (Breast Tumor Microwave Sensor System), developed by Prof. Yao et al.

I immediately felt this was the perfect research topic for me because I saw, and still see, how powerful an impact this research can have on the community. Unfortunately, African American women have the highest mortality rate from breast cancer of any ethnic group in the United States, primarily because of differences in awareness of and access to screening tests. I can’t stress enough the importance of finding cancer in its earliest stages. A late diagnosis means a greater probability that the tumor is aggressive, which in turn means a greater chance of dying from the breast cancer.

Another reason why this research is important to me is a pivotal moment in my life and my family’s lives. Five years ago, my mother was diagnosed with lymphoma, a type of blood cancer that can arise in various places throughout the body. The cancer, which was located in her chest, was found after it had grown to approximately the size of a tennis ball. She underwent surgery and chemotherapy, and has been free of cancer ever since. I couldn’t help but think: what if her cancer had been caught in the earliest stages? Would one of her vocal cords be paralyzed as a result of her intensive surgery, necessitated by the advanced size of the tumor? Probably not. The key to surviving any type of cancer is to detect the disease at the earliest of stages, but that requires efficient screening and detection technologies.

Figure 1. (a) A structure of breast tissue and (b) A schematic of the BRATUMASS detecting position, where the red dot represents the location of the antenna, the green dot represents the location of the metal slice, A is the transmitting antenna, B is the receiving antenna, and C is the center clapboard.

The BRATUMASS system promises a new, innovative way to screen for breast tumors. The device uses ultrawideband microwave signals, with a power of approximately 6 mW, to detect breast tumors. Note that this is less than the power of the microwave signals emitted by the average cell phone, which is approximately 1 W. This means that the radiation emitted from the antenna has no detectable impact on the human body. The ultimate goal of this research is to be able to use this procedure in place of damaging breast examination procedures such as mammograms, which involve ionizing radiation.

During a typical mammogram, the breasts are tugged, pushed, and flattened, which leaves patients feeling uncomfortable. During the BRATUMASS screening, one simply lies relaxed on one’s back while the specialist takes the transceiver and metal frame around the circumference of each breast. The transceiver collects data from several different positions around the breast. At each position, a pulsed microwave signal is sent from the transmitter (A) in the direction determined by the metal frame. Information about the electric field is collected by the receiver (B) based on the reflection and scattering of the microwave pulses.

There are notable differences in the microwave returns between normal breast tissue and malignant breast tissue; however, extensive research still needs to be done in this area. For example, normal breast tissue and malignant breast tumors have different dielectric constants. The dielectric constant of normal breast tissue is approximately 10, while that of malignant breast tissue is usually greater than 50. The values of the constants differ from person to person, so the techniques developed have to work for a wide variety of people. Our research will involve using the collected data to reconstruct an enhanced image of the breast while detecting and classifying tumors based on characteristic microwave features, e.g. dielectric constants, in addition to many other features identified using EC techniques. This research has the potential to serve as one of the safest means of early detection and localization of breast cancer.
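
One standard textbook relation illustrates why that contrast matters: at a boundary between lossless, non-magnetic tissues with relative permittivities e1 and e2, a normally incident wave is reflected with coefficient (sqrt(e1) - sqrt(e2)) / (sqrt(e1) + sqrt(e2)). This is only an illustration of the dielectric contrast, not the BRATUMASS reconstruction method.

    import math

    def reflection(e1, e2):
        """Normal-incidence reflection coefficient between two dielectrics."""
        return (math.sqrt(e1) - math.sqrt(e2)) / (math.sqrt(e1) + math.sqrt(e2))

    print(f"normal tissue -> tumor:  {reflection(10, 50):+.2f}")  # about -0.38
    print(f"normal tissue -> normal: {reflection(10, 10):+.2f}")  # +0.00

A roughly 38% field reflection at a tumor boundary, versus none within uniform tissue, is the kind of signal the receiver can pick up.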

For more information about Blair’s work, you can contact her at fleetbla at msu dot edu.
