BEACON Researchers at Work: Playing games in evolution

This week’s BEACON Researchers at Work blog post is by MSU graduate student Jory Schossau.

Have you ever played the game Rock, Paper, Scissors? Did you know you were mimicking the same sort of interactions that happen in communities of microbes? In a standard game we both pick Rock, Paper, or Scissors and reveal what we’ve picked. If I picked Rock, then you hope you picked Paper, because Paper wins over Rock. Scissors beats Paper, and Rock beats Scissors. The game is interesting because no single play is best: there is always a way to win or lose no matter what you pick.

courtesy of Geoffery Kehrig


There are a couple of ways to make the game more interesting: playing more than once, and playing repeatedly against the same person. With a repeated game, it gets even more interesting. That’s because you start learning the other player’s strategy and changing your own. Perhaps your opponent usually picks Rock and you start catching on. But the interesting part is that they will notice the change in your strategy and begin changing theirs as well. So who wins?

The traditional game Rock, Paper, Scissors uses imagery to help players remember the rules. Photo courtesy of Jesse Kruger.


I use this game for research, except I don’t just play one on one: I have a thousand virtual creatures, crowded together, each playing Rock, Paper, Scissors with their nearest several neighbors many times. One of the conditions I can change is the reward for winning with a particular play. Instead of rewarding one point for winning with Rock, I could reward two points, which would make Rock a really valuable play, but possibly a more predictable one. Can you predict the outcome of this thousand-player community Rock, Paper, Scissors game?
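To make the setup concrete, here is a minimal sketch of this kind of spatially structured tournament. This is an illustration, not the actual research code, and all names and parameter values are my own: each agent on a wrap-around grid holds one fixed play and is scored against its four nearest neighbors, with the reward for winning with Rock doubled.

```python
import random

PLAYS = ["Rock", "Paper", "Scissors"]
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}
WIN_REWARD = {"Rock": 2.0, "Paper": 1.0, "Scissors": 1.0}  # Rock wins pay double

def play_round(grid, size):
    """Score every agent against its four nearest neighbors (grid wraps around)."""
    scores = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            me = grid[y][x]
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                other = grid[(y + dy) % size][(x + dx) % size]
                if BEATS[me] == other:           # I beat this neighbor's play
                    scores[y][x] += WIN_REWARD[me]
    return scores

size = 32                                        # roughly a thousand crowded players
grid = [[random.choice(PLAYS) for _ in range(size)] for _ in range(size)]
scores = play_round(grid, size)
```

Raising `WIN_REWARD["Rock"]` makes Rock players rack up points faster, but it also makes a Rock-heavy neighborhood an easy target for Paper, which is exactly the tension the thousand-player question is about.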

Bacterial colonies spatially compete for food. Photo courtesy of Xtinabot.


Researchers like me use this game to study communities of organisms and predict how those populations’ strategies will change over time under different conditions. Even behavior you might not think of as Rock, Paper, Scissors can be described and predicted using games like this. One organism we studied had the interesting behavior that it would sometimes explode, killing itself but releasing a poison that kills its enemies, and any brethren that aren’t immune. You might think this sort of behavior is self-defeating, and you are somewhat correct. To say it is really bad for the individual that self-destructs is an understatement. But the nearby surviving immune members of the original community get a huge benefit from this self-destructive behavior, because now they can eat the food left uneaten by the enemy. What a weird strategy! How does something like this come about?

In the biological scenario above, organisms don’t move around much, so they live and die in clusters within the larger community. Because genes are passed from parent to child, you can expect clusters of organisms to be closely related. Furthermore, those genes represent a strategy for life. This means a whole cluster of organisms can share the same genes, yet some will be self-destructive and others not. It is the cluster of organisms as a whole that represents a strategy, whereas each organism is a single ‘play’ from that strategy and cannot change during its lifetime.

If genetics determines this self-destructive behavior, then clusters with the right amount of self-destructiveness benefit greatly from the killing power of the self-destructors’ poison, which leaves extra food around so more offspring can be made. Those offspring, who may now outnumber their ancestors, will have similar genetics to their ancestors. In contrast, clusters of organisms with bad genetics, such as never self-destructing or always self-destructing, will decline in number over time because they compete poorly or rarely produce offspring. In this way a strategy to sometimes play ‘self-destruct’ can become the dominant strategy in a community.

An extension of the classic game, known as Rock, Paper, Scissors, Lizard, Spock. Photo courtesy of Jose Silva.


To see what would happen in different circumstances, I used the Rock, Paper, Scissors game to approximate and simulate these biological interactions, with evolution and a bit of genetics added to the normal game. While these games are simple, using a computer allows me to simulate thousands of games across thousands of generations and to track changes in the population genetics. The genetics is important because it determines which of the three plays a new organism will become: the strategy is how often a new organism will be Rock, Paper, or Scissors, and the organism will make that play for its entire lifetime. In my simulation the three plays correspond to Normal, Immune, and Immune with Self-Destruct. How often these plays occur in the community changes depending on a number of conditions, such as how costly it is to carry the genetic material for immunity, or the deadliness of the poison released by self-destructing. Models like these help researchers understand and predict how natural communities of organisms change over time, which is very useful if we depend on those organisms or want to change those communities.
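The evolutionary loop described above can be sketched in a few lines. This is a toy illustration rather than the simulation actually used in the research: here a ‘genotype’ is simply a set of probabilities over the three classic plays, each organism is one draw from those probabilities for life, offspring inherit a slightly mutated copy, and fitter plays leave more descendants.

```python
import random

PLAYS = ("Rock", "Paper", "Scissors")
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def born_play(genotype, rng):
    """A genotype is (p_rock, p_paper, p_scissors); each new organism is a
    single draw from it and keeps that play for its entire lifetime."""
    r, cum = rng.random(), 0.0
    for play, p in zip(PLAYS, genotype):
        cum += p
        if r <= cum:
            return play
    return PLAYS[-1]

def mutate(genotype, rng, rate=0.05):
    """Perturb the inherited play probabilities slightly and renormalize."""
    g = [max(1e-6, p + rng.uniform(-rate, rate)) for p in genotype]
    total = sum(g)
    return tuple(p / total for p in g)

def generation(genotypes, rng):
    """Everyone plays everyone once; fitter genotypes leave more offspring."""
    plays = [born_play(g, rng) for g in genotypes]
    fitness = [1.0 + sum(BEATS[p] == q for q in plays) for p in plays]
    parents = rng.choices(genotypes, weights=fitness, k=len(genotypes))
    return [mutate(g, rng) for g in parents]

rng = random.Random(42)
population = [(1 / 3, 1 / 3, 1 / 3)] * 100   # start with unbiased strategies
for _ in range(50):                           # fifty generations of evolution
    population = generation(population, rng)
```

Swapping the three classic plays for Normal, Immune, and Immune with Self-Destruct, with payoffs reflecting the cost of immunity and the deadliness of the poison, gives the kind of model the post describes.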

I haven’t always known I wanted to work with computers to help answer life’s persistent questions. At one point I was close to finishing school with a music degree. Serendipitously, my experience in a summer Research Experience for Undergraduates program let me explore my undiscovered interest in both computing and studying the complex and amazing relationships that make up life. Music is still a big part of my life, and it finds its way into my research from time to time, whether I’m reviewing a paper about musical evolution or finding new ways to measure the complexity of a composer’s style based on brain research. Who knows, maybe even musical game theory has a place in my academic future!

For more about Jory’s research, you can contact him at jory at msu dot edu.

Posted in BEACON Researchers at Work

E. O. Wilson's Consultation ≠ Collaboration

This post is by MSU graduate student Luis Zaman.

Many of you have heard about E. O. Wilson’s new article “Great Scientists ≠ Good at Math” in the Wall Street Journal. If you haven’t, you should definitely read it. Wilson uses his difficulties with math as a student, and later as a “32-year-old tenured professor at Harvard” struggling to learn calculus, as encouragement to future scientists. This seems like good advice to me, and I happen to know several successful scientists who appreciated and took comfort in such a prominent figure joining their crusade to be great biologists despite their math woes. However, I also disagree with some of Wilson’s sentiments.

Other blogs have examined this story piece by piece, but I want to focus on just one point: E. O. Wilson’s view of scientific collaboration. I think mathematical and computational biologists are portrayed as second-rate scientists in Wilson’s piece. For him, real science requires intuition, hard work, and focus. After all the imaginative breakthrough science has been done, a number cruncher can be found and added to the project trivially, according to “Wilson’s Principle No. 1.” I would call this an antiquated view of collaboration, but that would be an unfair generalization of the past; I’d also hate to taint the word collaboration, which has overwhelmingly positive connotations for me, with such a distasteful image. It seems strange that someone who struggled so much with math would suggest it doesn’t also require intuition, hard work, and focus.

I value interdisciplinary approaches to science immensely, especially when trying to understand fundamental questions about evolution. The BEACON Center for the Study of Evolution in Action is a testament to the success of interdisciplinary science. My formal training is in computer science, mostly from a theoretical perspective. My only biology class was in 9th grade, and I hated it. Now colleagues ask me whether I’m a biologist or a computer scientist. It’s a hard question for me to answer, and I like it that way. With the help of BEACON, I’m able to study fundamental questions about coevolution using digital and microbial life jointly with Charles Ofria and Richard Lenski. I get to use vastly different study systems, each with unique strengths and weaknesses, that require nearly independent sets of skills to master. The people I work with on a daily basis in these two labs range from oceanographers to software developers. How cool is that?

Maybe it’s because I’m just a fledgling scientist, but I believe this type of diverse and collaborative environment nurtures great science. It is exactly opposite to the kind of collaboration that E. O. Wilson is talking about. I have to believe that Wilson knows the value of colleagues who share a fundamental interest in the questions being addressed. That is true collaboration: where intuition and ingenuity are amplified, and hard work is required from start to finish. This level of cooperation requires meaningful crosstalk, and that means developing a level of proficiency in uncomfortable fields. That shouldn’t be something we’re afraid of. Brian McGill wrote a wonderful blog post inspired by the Wilson vs. Math debate (though not in response to it) suggesting that great science often occurs when mathematical and empirical work intersect. I agree wholeheartedly, but would generalize even further: great science happens when diverse creative minds work together, not when intuitive ideas are supplemented with mere technical consultation.

You can contact Luis at luis dot zaman at gmail dot com.

Posted in BEACONites

Sun, Sand Dollars, and the Huts: My Summer at Friday Harbor Labs

This piece is reposted from the Friday Harbor Laboratories newsletter.

by Ceri Weber
Expected B.S. in Biology at the University of Washington, June 2013
Undergraduate researcher in the Swalla lab
2012 FHL BEACON/BLINKS/NSF REU Intern

I had the wonderful opportunity to do research at Friday Harbor Labs this past summer through the NSF REU program. I had heard a lot about FHL from Dr. Billie Swalla, as well as the graduate students in her lab, but I had never visited the Labs myself. So I was thrilled when I was accepted into the program and I ended up spending the summer studying the effects of salinity on the mechanics of development in the sand dollar Dendraster excentricus with Dr. Michelangelo von Dassow (Mickey).

On my first day at work, Mickey told me to just watch the sand dollar embryos grow. I sat at the microscope for hours, just admiring and drawing what I saw—this was a defining moment of my summer. I had seen embryos in my textbooks and during lectures, but never growing and dividing right in front of me. For the rest of the summer, I spent many hours observing the embryos—taking pictures, measuring embryos, and analyzing my data in order to answer my questions about sand dollar development. If my eyes ever got tired, I would stare out the window of lab 6 and admire the view of the water. This is the magic of Friday Harbor Labs—an opportunity to study biology as it is happening, in a beautiful place.

 
Sitting with the microscope and the view from my desk in lab 6

I spent my summer with 20 undergraduates who were also part of the BEACON/BLINKS NSF REU Internship Program. At first just fellow REU interns, we quickly became a very close and supportive group of friends, affectionately referring to ourselves as the “hut people” (we lived out in the FHL huts). When not researching, we would explore the island and go on field trips, like whale watching at Lime Kiln, kayaking, or exploring the intertidal. Even when relaxing at night after a long day at work, we would head down to the docks to admire the bioluminescence or go night lighting. Every day on the island was an adventure, and it was made better by a group of students and friends who loved science as much as I did.

 
Kayaking fieldtrip near Lime Kiln. REUs looking for whales at Lime Kiln (Photo credit: Sophie George)

Our mentors and other research scientists at FHL talked to us regularly about what it’s like to be a professional scientist, how to do different kinds of analysis, and how to apply to graduate school and get funding—they were always happy to answer any of our questions. Their goal was to get us excited about and prepared for the world of research, and the program did just that! I presented my summer research at the Society for Integrative and Comparative Biology meetings this past January. SICB was also a “REUnion” of sorts, as many of the interns in my program presented their research in San Francisco as well. I was nervous in anticipation of the difficult questions and critical feedback I might receive, but in the end my presentation couldn’t have gone better! Everyone who talked to me was interested in having a conversation about my research and had excellent ideas and suggestions for what to do next. The REU experience and SICB have definitely inspired and encouraged me to keep doing research and stay involved in the science community.


Me with my poster at the SICB meetings in San Francisco

Posted in BEACON Researchers at Work

BEACON Researchers at Work: Phenotypic Plasticity and Evolution

This week’s BEACON Researchers at Work blog post is by MSU postdoc Shampa M. Ghosh.

It has been four decades since Theodosius Dobzhansky wrote “Nothing in biology makes sense except in the light of evolution.” It soon became a favorite quote for numerous biologists, while others argued against the apparent over-emphasis on evolution in understanding how biology works. Setting the controversy aside, one cannot deny that a holistic understanding of biology, explaining why things are the way they are, be it at the level of molecules or the whole organism, requires an understanding of the evolutionary forces that shape them. This is especially true for those of us who are enthralled by the natural world around us and the well-crafted biological complexity of the living world, and whose interests span from ‘how things work’ to ‘why things work this way.’ For us, there is no alternative to understanding the evolutionary perspective on biological phenomena.

I am an integrative biologist and my broad interest lies in understanding the development and evolution of form and function. My research focuses on complex traits and the model organism for my research is the fruit fly Drosophila melanogaster. However, being a part of BEACON confers the advantage of exploring evolution beyond one’s own study system, and getting exposed to a fascinating array of evolutionary research-themes and scientists from diverse disciplines. This has helped me expand my view of evolution, and biology in general.

The idea of studying ‘Evolution in Action’ has fascinated me ever since I worked on experimental evolution for my PhD at the Evolutionary & Organismal Biology Unit at JNCASR, India. I used a laboratory selection approach to study the consequences of selection for rapid development on life-history traits and trait plasticity in Drosophila. After selection for rapid pre-adult development for over 300 generations, these flies underwent a 25% reduction in development time and a 50% reduction in body size, among other changes. My research also revealed the evolution of partial reproductive isolation between the faster-developing populations and their slow-growing ancestors, caused by their divergent body sizes (Ghosh & Joshi 2012).

Based on the findings of my doctoral research, I became fascinated by the evolution of body size and after finishing my PhD, joined the group of Prof. Alexander Shingleton (BEACON/MSU) as a postdoc. The Shingleton laboratory focuses on the developmental regulation and evolution of size and morphological scaling in Drosophila. For me, entering the Shingleton lab was the starting-point of integrating proximate mechanisms with ultimate causes.

The broad theme of my current research is the developmental regulation of phenotypic plasticity and its evolutionary significance. Phenotypic plasticity is the ability of a genotype to produce different phenotypes in different environments, and is almost always adaptive in nature. Phenotypic plasticity can help organisms to cope with short-term environmental changes and survive in new or heterogeneous (over time and/or space) environments. My work focuses on thermal plasticity – that is, the plastic changes in body and organ size of flies in response to developmental temperature – and its adaptive significance. My research spans multiple levels of biological organization. I am using physiology, genetics and genomics to find out how genes, pathways and physiological mechanisms give rise to thermal plasticity of size in flies. In order to understand evolution, biologists often take a top-down approach, exploring past and present patterns of selection to identify the traits and genes that are targets for selection. In the Shingleton lab, we often take an alternate, bottom-up approach, first identifying the genes and molecular mechanisms that control growth and development before exploring how these processes evolve to generate morphological diversity.

My first approach was to study the physiological basis of thermal plasticity. About 85% of ectothermic animals, including Drosophila, show an inverse relationship between developmental temperature and body size called the ‘temperature-size rule’ (TSR), the proximate and ultimate causes of which are poorly understood. The TSR has been viewed by many as a biophysical constraint caused by the effect of temperature on the biochemical processes of growth, and not as an adaptive phenomenon. According to other views, however, the TSR is adaptive, as is evident from the observation that the evolutionary response of natural populations adapted to different thermal climates matches the plastic response to rearing temperature: in most ectothermic species, populations at lower (warm) latitudes evolve smaller body sizes than those from higher (cold) latitudes.

If the TSR reflects a phenomenon that is purely biophysical as opposed to an adaptive response, one would expect it to have a common mechanistic basis across taxa. I have recently demonstrated that the TSR in Drosophila results from developmental mechanisms completely different from those that regulate the TSR in another insect, the tobacco hornworm Manduca sexta (Ghosh et al., in press). This suggests that the TSR can result from a diversity of mechanisms across taxa and hence represents an adaptation rather than a biophysical constraint. We have yet to identify what the TSR is an adaptation to, but we believe that identifying the focal traits that give rise to the TSR can help us understand the selective causes that lead to it.

As a different approach to studying plasticity, I am also using a genome-wide association study (GWAS) to understand the genetic basis of thermal plasticity in flies. For this purpose I am using the Drosophila Genetic Reference Panel (DGRP), a population of flies consisting of 192 inbred lines. Each DGRP line is isogenic and has been fully sequenced, and both the flies and the genomic data are publicly available. I have measured the degree of thermal plasticity in three different organs (wing, thorax and femur) in 100 DGRP lines and am using GWAS to identify the genes associated with variation in thermal plasticity in the three organs. I am also screening the effects of mutations in candidate developmental genes on the degree of thermal plasticity. These approaches promise to give me a good understanding of the proximate mechanisms of thermal plasticity. In the future, I plan to extend my research to two further evolutionary aspects of plasticity: (a) the origin and evolution of plasticity, and (b) the role of plasticity in evolution.

Being a part of BEACON has given me the chance to interact with biologists from diverse backgrounds, from computational scientists to experimental biologists, and with people who are studying evolution in digital organisms, in laboratory-based biological systems, and in species in their natural habitats. This has helped me get a better understanding of evolutionary processes observed across systems. I have also been a member of the BEACON Speciation Consortium headed by Prof. Jenny Boughman from MSU and Prof. Luke Harmon from UI, where I have interacted and brainstormed with many other BEACONites on topics related to multidimensional adaptation and speciation. Although there is little overlap between speciation and my current work on plasticity, my exposure to the evolution of reproductive isolation during my PhD sparked an interest in speciation. I have absolutely loved this opportunity and flexibility to indulge intellectually outside my main research area, and to be involved in what excites me. I do not think such opportunities would have been possible had I not been a member of BEACON. Overall, being a part of BEACON has helped me grow as an evolutionary biologist and expand my intellectual horizons.

For more information about Shampa’s work, you can contact her at modak at msu dot edu.

Posted in BEACON Researchers at Work

Evolutionary excursion into the depth of the human psyche?

This post is by MSU postdoc Arend Hintze.

Ralph Hertwig, director of Max Planck Institute for Human Development


Let me tell you about my excursion to the Max-Planck-Institut für Bildungsforschung (Max Planck Institute for Human Development) in Berlin. I met the director, Ralph Hertwig, a while ago while interviewing for a job, and although we quickly figured out that our disciplines are very different, we still had the feeling that we would benefit from each other’s perspectives. My trip was meant to explore those perspectives. The group I stayed with calls itself ARC, a scrambled three-letter acronym for “Center for Adaptive Rationality” – I was told that many people mistakenly assume it must be abbreviated CAR. The group primarily consists of psychologists, but it also includes neuroscientists, philosophers, and biologists, which already shows their interest in interdisciplinary work; they are also very interested in evolutionary models and questions regarding the origin and evolution of adaptive rationality.

I think “Adaptive Rationality” already sounds cool, and it seems very similar to the goal we are after when we say we would like to evolve artificial intelligence or artificial consciousness. But as you will see, while these topics are closely related, the two approaches could not be more different. Let me elaborate:

Here at BEACON we study evolution in action, and our core collaboration is between computer science and biology. When we look at an organism and ponder its peculiar behaviors, it doesn’t take long until we ask how evolution was involved, which selection pressures were responsible, how we could use the behavior in an application, how it differs from what we see in our model organisms, or, in my case, how I could model and evolve it – we think that nothing in biology makes sense unless seen in the light of evolution. Psychology is – how should I put this – shy when it comes to evolution and human behavior. And there is a very good reason. Everything that makes us humans different and unique has in one way or another to do with intelligence. That means the selection pressures and evolutionary processes that made us, and that allowed our intelligence to emerge, had no precedent; they happened once, and can therefore not be understood as a general principle. It is the same problem we face when we want to explain the emergence of life itself: it happened once, and to make generalizations we need more than one example.

But there is more. One of the main arguments against any evolutionary explanation is the following: the part of your brain we now use to solve this or that problem could have evolved to do something completely different. I call this the “opportunistic repurposing argument” (abbreviated RAO, in case you paid attention earlier). I obviously have difficulties with this categorical denial. After all, Occam’s razor suggests that every time you use the opportunistic repurposing argument as an excuse for why a behavior you study in humans cannot be explained by evolution, you introduce a second, additional hypothesis. The first hypothesis says that the mechanism evolved for something; the second claims that the mechanism can now be used for something else. While you can never disprove this argument in any specific case, you clearly cannot use it every time. In most cases we probably use a cognitive mechanism in the way it evolved to function, and only in a few cases have we repurposed something – still, the counterargument stands.

This idea, however, has wide-ranging implications for our endeavor to evolve artificial (human-like) intelligence. If we have more or less concluded that we will never unveil the exact circumstances that led to the evolution of human intelligence, how should we be able to construct and model fitness landscapes that are conducive to the evolution of artificial intelligence?

None of the above is something I learned in Berlin; I was aware of these difficulties beforehand. You could either concede defeat or see this as a challenge – challenge accepted!

As it turns out, you can do a lot of things, and I promise to write about the projects we worked on once they are published. But let me give you an impression of the things I did that were (in my opinion) meaningful to both fields. Human behavior often seems irrational, and humans very often don’t make the choice that pure rationality (the Nash equilibrium) dictates; here two fields of science clash. One is economics, which tries to explain how the choice depends on economic (selfish) considerations, and the other is psychology, which tries to find cognitive causes, like risk aversion, curiosity, force of habit, or preferences, that keep us from making the “right” choice. This contrast gives us an ideal angle from which to add an evolutionary perspective to the debate. We find that evolution can show where the economic models fall short, and where psychological influences are justified and can explain how choice preferences emerge or are maintained.

In summary, of all the interdisciplinary work I have done so far, this was the biggest straddle. Evolutionary game theory and modeling oversimplify, and the abstractions used might work for animal behavior but not necessarily for more complex human behavior. My angle appears primitive at times in a field where nothing is simple. It seems that every facet of human behavior has been studied, and a plethora of possible explanations exists, all rivaling each other. Decisive experiments are hard to do, and context and framing are far more important than I am used to, but that also makes contributions much more valuable. I have the impression that psychologists are well aware that evolution matters and appreciate the input other fields provide, but this is a tricky endeavor and has to be done right. At the same time, my exploration of this field made me aware of how far artificial intelligence is from grasping the foundations of human cognitive processes, and how difficult it will be to evolve artificial intelligence – but then again, we are not doing what we are doing because it is simple.

Cheers, Arend.

Posted in Notes from the Field

BEACON Researchers at Work: Trying to Bottle Natural Evolution's Creativity

This week’s BEACON Researchers at Work post is by University of Texas at Austin postdoc Joel Lehman.

Something that both fascinates me and drives my research is the creativity of natural evolution. If you asked me to solve a particular problem, I could probably come up with a few possible solutions. And maybe, if inspiration struck, one or two of those solutions might be unconventional. But single-digit numbers don’t begin to compare with evolution’s creative drive. In fact, the number of species currently living on Earth is estimated to be around 9 million. Put another way, natural evolution currently has in its working inventory nine million working solutions to the problem of living. And evolution has contemplated many more solutions that it eventually discarded through extinction.

One way I like to think about natural evolution involves a kind of vast closet. If you stick an ordinary bicycle into this enormous closet, close the door, and come back in five billion years, it’s very unlikely that the pile of rust that greets you will surprise you in any way. The magic of evolution is that if it were put to work in the closet for five billion years, the closet might well be teeming with surprises upon its reopening.

The diversity of life on Earth is what leads us to expect to be surprised by the outcome of this thought experiment: life on our planet has been evolving for a similar amount of time, and many incredible species have evolved. The archerfish hunts its prey (generally insects) by shooting them down with a compressed stream of water; these fish are remarkably accurate, rarely missing, and their aim impressively factors in how water refracts light. Woodpeckers have evolved cushioning in their skulls to keep their brains safe while hammering away at trees. The bee orchid deceives male bees into attempting to mate with it by mimicking the female bee’s shape and scent. There are numerous other examples of surprising adaptations – but you get the point: natural evolution is a profoundly creative process.

Ordinarily we associate creativity with human innovators or artists, and not with processes far removed from human oversight. So what fascinates me in particular about natural evolution is that it gives such a divergent example of creativity. Evolution’s creativity does not result from human influence. In this way, what motivates my research is the hope that examining what enables natural evolution’s creativity might lead to creating more creative algorithms. In other words, my research centers on trying to bottle up the essence of natural evolution’s creativity in a computer program.

The idea of a creative algorithm might at first seem like a contradiction, because our experience with computers is generally that they do only what we tell them to do, which is the exact opposite of creativity. When someone acts mechanically without thinking, we might even call them a computer or a robot. However, a computer program is at heart just a kind of formal recipe, and there might exist a recipe for creativity. In fact, researchers in the field of evolutionary computation have created many such creative recipes, called evolutionary algorithms: computer programs inspired by natural evolution that are often applied to solve engineering problems.

My personal interest lies not so much in solving particular engineering problems as in having an algorithm continually generate an increasing diversity of cool things, some of which may surprise me. This goal is sometimes called open-ended evolution, and it contrasts with the more practical focus of many other researchers on making evolutionary algorithms better at solving problems. Of course, both perspectives are interesting in their own right for different reasons.

Novelty search, an algorithm I created with my Ph.D. advisor Ken Stanley while working toward my dissertation, is inspired by one particular facet of natural evolution: its tendency to accumulate novel forms over time. That is, natural evolution generally tends to spread through niches and become more diverse. Inspired by this tendency, the algorithm explicitly attempts to create things that are different from what it has made in the past. Instead of evolving things that are increasingly better at achieving some predefined goal, which is how evolutionary algorithms are usually applied, it evolves things that are novel when compared to previously evolved things. In other words, the algorithm is constantly struggling to innovate.
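A minimal sketch of the idea (my own toy example, not the original implementation): keep an archive of behaviors seen so far, score each individual by its distance to the behaviors in that archive, and breed the most novel. Here a “behavior” is just a 2D point; real applications map a genome through a task first (e.g. a robot’s final position in a maze).

```python
import math
import random

def novelty(behavior, archive, k=3):
    """Mean distance to the k nearest behaviors in the archive."""
    dists = sorted(math.dist(behavior, b) for b in archive)
    return sum(dists[:k]) / k

def novelty_search(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [(0.0, 0.0)] * pop_size   # everyone starts identical
    archive = []
    for _ in range(generations):
        # Rank individuals by how different they are from everything
        # seen so far -- there is no task-specific objective at all.
        ranked = sorted(population,
                        key=lambda b: novelty(b, archive),
                        reverse=True)
        archive.extend(population)
        # The most novel half reproduces with small random variation.
        parents = ranked[:pop_size // 2]
        population = [(p[0] + rng.gauss(0, 0.5), p[1] + rng.gauss(0, 0.5))
                      for p in parents for _ in range(2)]
    return archive

archive = novelty_search()
# The archive spreads outward from the origin rather than converging.
```
Because being near already-archived behaviors scores poorly, selection continually pushes the population into unvisited regions of behavior space.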

What’s interesting about the algorithm is that striving towards novelty often produces meaningful results. You might think that if you applied such an undirected algorithm to some problem domain it might just produce an unending parade of useless but perhaps trivially different things. However, producing novelty often requires learning things. For example, we may have more confidence that a pro skateboarder will be better able to invent a truly novel skateboard trick than someone who’s never skateboarded before, because the pro skateboarder has plenty of useful skateboard-related information in his or her brain.

Evolved robots

In a similar way, imagine applying novelty search to evolve robots that can navigate a maze, a common task in the field of evolutionary robotics. At first, robots might just crash into walls; and if particular crashes had not yet been encountered by novelty search, they would be rewarded – that’s all the algorithm does: it rewards doing something different, without any eye towards practicality. After a while, however, being novel may require learning to drive in a straight line and to react to the maze’s walls. In fact, it has been shown, surprisingly, that in some mazes searching for novel maze navigators yields better results than the more direct approach of searching for robots that get closer to the end of the maze. A similar result was achieved when evolving biped robots learning to walk: in one experiment, rewarding simulated biped robots that attempted to walk or fall in a novel way proved more effective than rewarding them directly for walking farther. And in my favorite experiment so far, a variant of novelty search evolved a wide diversity of lumbering virtual creatures in a single run, perhaps a step towards an algorithm whose outcomes more closely resemble natural evolution’s.

What I find most interesting about these results is that they show the potential of driving algorithms through creativity. Of course, the kind of creativity that novelty search produces is primitive when compared to human creativity or natural evolution. And as you might guess, there remain many open questions and much work to be done to create highly-creative computer programs. But these are the kinds of questions that I love to ponder, and that I am lucky enough to be able to explore as a BEACON post-doc.

For more information about Joel’s work, you can contact him at joel at cs dot utexas dot edu.

Posted in BEACON Researchers at Work | Tagged , , | Leave a comment

BEACON Researchers at Work: The Origin of a Species?

This week’s BEACON Researchers at Work blog post is by MSU postdoc Zachary Blount.

Zack in a tree

I love big questions. I tend to walk around, my head in the clouds, questions flitting through my head. I admit that I have walked into trees whilst in oblivious abstraction. One of the best parts of being a scientist is that I get to work on answering big questions. Biology features a lot of big questions, and how new species evolve is one of the biggest and most important. Speciation increases diversity and complexity, and, importantly, it allows organisms to explore new evolutionary paths. There is little in biology that isn’t touched by speciation, and it is little wonder that Darwin himself referred to it as “that mystery of mysteries.”

Despite all the fantastic work done since Darwin’s day, speciation is still mysterious. It is complex, multifaceted, tricky to study, and, most importantly, hard to “catch in the act.” It would help if we had a model system in which we could study speciation in fine detail as it occurs, examine and manipulate the processes involved, and do so over a humanly reasonable time scale.

My work as a BEACON postdoc deals with a possible laboratory speciation event that might be a good model. It arose in the course of the E. coli Long-term Evolution Experiment (LTEE), which recently celebrated its 25th anniversary. The experiment was begun in 1988, when Richard Lenski founded 12 identical E. coli populations. Every day 1% of each population is transferred to a fresh growth medium containing a small amount of glucose for food. Each population grows by about 6.64 generations per day, and so far each has evolved for more than 55,000 generations! Samples are frozen every 500 generations, and because bacteria survive freezing, we have a complete, viable fossil record of the evolution of each.
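The 6.64 figure follows directly from the dilution scheme: transferring 1% means each day’s survivors must regrow 100-fold before the next transfer, which takes log2(100) doublings.

```python
import math

# A 1% daily transfer means the population must grow 100-fold each day.
daily_dilution = 100
generations_per_day = math.log2(daily_dilution)   # about 6.64 doublings

# Sanity check against the post: 55,000+ generations fits comfortably
# within 25 years of daily transfers.
years_to_55k = 55_000 / generations_per_day / 365
print(round(generations_per_day, 2), round(years_to_55k, 1))
```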

cloudy vs clear beaker

In early 2003, 15 years and 33,000 generations into the experiment, something nifty happened. One of the 12 populations, called Ara-3, became much cloudier, meaning that the population had suddenly gotten much larger. What happened? The growth medium contains a potential second food source called citrate, which is added to help the bacteria take up iron. I say potential because E. coli cannot grow on citrate when oxygen is present, as it is during the experiment. This Cit– phenotype is one of the major traits used to define E. coli as a species, in large part because mutant E. coli that can grow aerobically on citrate (Cit+) are incredibly rare. (Only one was reported in the entire 20th century!) And yet, one of the populations was now full of Cit+ E. coli. (My doctoral research examined how Cit+ evolved. That story is told in full in my dissertation defense video.) Amazingly, Cit– bacteria continued to coexist in the Cit+-dominated population.

Ara-3 Phylogenetic Tree

Is Cit+ a new species? That is a trickier question than you might think. Speciation is a process and not a sudden, instantaneous event, and there is no single, universally accepted species definition. (This reflects how nature really doesn’t conform to the human need for sharply-drawn categories.) The most widely accepted, however, is Ernst Mayr’s Biological Species Concept (BSC), which equates speciation with the evolution of reproductive isolation. This means that a group of organisms is a new species when its members can mate, mingle genes, and produce fertile offspring with each other, but not with members of its parent species.

Ecotype Species Concept

Unfortunately, the BSC is hard to apply to asexual bacteria. The most compelling bacterial species concept, Frederick Cohan’s Ecotype Species Concept (ESC), emphasizes the evolution of ecological differences. It argues that a bacterial species is born when a mutation grants access to a new niche, resulting in a new lineage called an ecotype. Because the new ecotype and its parent occupy different niches, they are able to coexist and evolve independently. Under the ESC, a new species must have a mutation that gives access to a new niche, must have originated once, and must be pursuing its own evolutionary path.

Does Cit+ fit the characteristics of an ecotype species? The Cit+ trait opened a niche that is new to E. coli. If we look at a phylogenetic tree of Ara-3, we see that the Cit+ lineage evolved once and forms a new branch on the tree. We can also see that the Cit– clones that persist after Cit+ became dominant are on another branch, meaning that Cit+ and Cit– are evolving separately. Cit+ might be a new ecotype species…

Despite these findings, I am not quite comfortable calling Cit+ a new species just yet. If Cit+ really is adapting to the citrate niche, it should be getting better at growing on citrate, and it might be getting worse at growing on glucose. Such a pattern could mean that some of the same mutations that make Cit+ better at eating citrate also make it worse at eating glucose. I can test this. I can use genetic engineering techniques to move candidate beneficial mutations found in later Cit+ genomes into earlier Cit+ genomes, and then measure their fitness effects with competition experiments. I can then move the Cit+-beneficial mutations I find into Cit– genomes to see if they reduce fitness in the glucose niche. If they do, these citrate Niche-Specific Adaptive Mutations (NSAMs) would have a really nifty consequence. Even though these bacteria don’t reproduce sexually, NSAMs are the sort of genetic changes that can produce reproductive barriers between new sexual species. Finding citrate NSAMs would therefore mean that Cit+ is evolving exactly the sort of genetic changes expected in species that also fit the Biological Species Concept. Then I would feel comfortable calling Cit+ a new species!
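For the competition experiments mentioned above, relative fitness is commonly quantified in the experimental evolution literature as the ratio of the two competitors’ realized (Malthusian) growth rates over the competition. A minimal sketch, with made-up cell counts for illustration:

```python
import math

def relative_fitness(a0, a1, b0, b1):
    """Relative fitness of strain A vs strain B over one competition:
    the ratio of their realized (Malthusian) growth rates, computed
    from initial and final densities."""
    return math.log(a1 / a0) / math.log(b1 / b0)

# Hypothetical densities (cells/ml) at the start and end of a one-day
# competition between an evolved clone (A) and its ancestor (B).
w = relative_fitness(a0=5e5, a1=6e7, b0=5e5, b1=4e7)
print(round(w, 3))   # w > 1 means A outcompeted B in this environment
```
A deleterious mutation moved into a Cit– genome would show up here as w < 1 when competing against the unmodified strain on glucose.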

I have a lot of work to do to test these ideas. However, if Cit+ really can be considered a new species, it would be both a great example of evolution in action and a model with which to investigate the tricky mysteries and big questions of speciation. Ahh! It’s great to be a scientist!

For more information about Zack’s work, you can contact him at blountza at msu dot edu.


BEACON Researchers at Work: Testing Phylogenetic Inference with Experimental Evolution

This week’s BEACON Researchers at Work blog post is by MSU graduate student Cory Kohn.

Cory Kohn

Skepticism. This is generally an important characteristic of scientists. Why would an attitude best avoided in polite conversation be a useful, even essential, part of the scientific mindset? Good hypotheses will stand up to rigorous scrutiny, and those aspects of our explanations of the world that do not stand up to scrutiny need to be re-evaluated. Skepticism is thus at the heart of both validating good hypotheses and identifying the parts of science that need re-evaluation. Being skeptical about what we think we know – and more importantly why we think we know it – is integral to advancing our understanding of the world around us. Skepticism is not about vicious personal attacks; it is the key to assessing one another’s results and methodologies so that we have a better chance of building an accurate base of knowledge about the world. Skepticism is my specialty – I test just how well scientific methods work as tools for understanding the evolutionary history of organisms.

A phylogenetic hypothesis depicting the evolutionary relatedness within and among Bacteria, Archaea, and Eukaryota.

Specifically, my research involves testing the methodologies we use in phylogenetic inference. Phylogenies are the branching patterns that illustrate common descent. They depict the historical relationships between taxa by chronologically showing who is more closely related to whom. A phylogeny is by necessity a hypothesis – none of us actually witnessed the speciation events that gave rise to the diversity of life.

We construct our phylogenetic hypotheses using the data we currently have available. Historically these data were primarily morphological, or perhaps behavioral; hypotheses about evolutionary relationships were the purview of taxonomic specialists who, through considerable experience, developed such a high degree of familiarity with their specimens that they could identify the key characteristics that indicate the historical pattern of relatedness. The problem with this methodology is the ambiguity in determining which characters indicate relatedness rather than convergent or parallel evolution. For example, a character that is not evolutionarily informative is the ability to fly. If we put significant weight on this character, we might hypothesize that flying birds, bats, and flies are more closely related to each other than they are to flightless birds, mammals, and insects. This highly simplistic example illustrates how problematic it can be to rely on the characters a researcher deems important. Cost, effort, and this issue of objective criteria for character selection are the main reasons molecular sequence data, such as DNA, have increasingly become the data of choice over the past few decades. Such data can hopefully be used to construct an unbiased, more scientifically sound hypothesis of evolutionary relatedness.

With the molecular, computational, and statistical revolutions of the past few decades, there is a plethora of phylogenetic methodologies and tools available to scientists constructing hypotheses about evolutionary relationships.

At first you might be surprised to learn that none of these methods has been rigorously tested experimentally – but remember that phylogenetics involves inferring historical patterns from existing (extant) data. Perhaps we aren’t being quite as skeptical as we should be about the validity of these tools.

That’s not to say these phylogenetic inference tools don’t work. They have a sound theoretical basis, and many have been tested using simulated sequence data. Yet simulated data can only test the range of conditions the simulation was designed to cover. Experimentally testing these tools entails constructing a set of known ancestor–descendant relationships under biological conditions, using the resulting data to generate a hypothesized phylogeny, and comparing the result to the known pattern of evolutionary relationships. Such experimental tests have very rarely been done, for obvious reasons of labor and time. My research directly addresses this deficiency.
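The test described above can be sketched in miniature: evolve sequences down a known four-taxon tree, then check that a simple distance criterion recovers the true sister pairs. Everything here (sequence length, mutation counts, the tree itself) is invented for illustration; real tests use full inference methods rather than raw distances.

```python
import random

ALPHABET = "ACGT"

def mutate(seq, n_muts, rng):
    """Apply n_muts random substitutions to a sequence."""
    seq = list(seq)
    for _ in range(n_muts):
        i = rng.randrange(len(seq))
        seq[i] = rng.choice(ALPHABET.replace(seq[i], ""))
    return "".join(seq)

def hamming(a, b):
    """Number of differing sites between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(1)
root = "".join(rng.choice(ALPHABET) for _ in range(1000))

# Known tree: ((A,B),(C,D)). Evolve two internal ancestors from the
# root, then two tips from each ancestor.
left = mutate(root, 60, rng)
right = mutate(root, 60, rng)
A, B = mutate(left, 30, rng), mutate(left, 30, rng)
C, D = mutate(right, 30, rng), mutate(right, 30, rng)

# A valid method should find the true sister pairs closest together.
assert hamming(A, B) < hamming(A, C)
assert hamming(C, D) < hamming(C, B)
print("sister pairs recovered from the known history")
```
The key feature is that the true branching pattern is known in advance, so the inference can be scored against historical reality rather than against another inference.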

Experimentally validating phylogenetic reconstruction techniques is necessarily difficult when using biological organisms. One must observe evolution taking place, fully documenting the evolutionary branching pattern in order to compare the hypothesized phylogeny to historical reality. Feasibly, this can only be accomplished by using quickly evolving organisms such as viruses and perhaps bacteria.

I can generate even more information about the strengths and potential deficiencies of phylogenetic tools by using non-biological yet fully evolving populations. The artificial life platform Avida makes it relatively quick and easy to evolve a known evolutionary branching pattern. Further, I can know much more about my digitally evolving system than I could with biological organisms: I have a record of the genome of every digital organism in each of my populations. I can determine which mutations gave rise to phylogenetically informative or potentially misleading signal, and which evolutionary processes produced the observed patterns; these data allow me to understand directly how phylogenetic tools can generate inaccurate or misleading results.

Unlike simulations, an actually evolving system can be directly subjected to virtually all the complexities found in biological reality. For example, while selection is for the most part disregarded in phylogenetic inference methodologies, I can evolve my populations under varying degrees of selective pressure to understand how selection influences the results of these methods. I can further manipulate genetic recombination, as well as the complexity of the known relationships, by altering the number of taxa, the amount of evolution between branching events, and the degree of asymmetry in the branching structure.

An example of the true power of digital evolution involves selection. I can determine the exact fitness consequences of each mutation that eventually becomes fixed within a population. I can also determine how long each mutation takes to become fixed and where along the branching structure all of this occurs.

The known evolutionary relationships among eight Avida lineages evolved with recombination for 100 generations along each branch. Superimposed on this phylogeny is the identity of every mutation that became fixed in the population, i.e., eventually possessed by every member. In red are mutations that arose and became fixed along a single branch; in blue are those that became fixed on a subsequent branch, with a dashed line indicating a loss of the mutation in the sister population. For example, C257S (along the top left branch) denotes that the instruction C at genomic location 257 mutated to S around generation 20 and became fixed at generation 90.

In sum, digital evolution affords me the ability to precisely understand the range of conditions under which current phylogenetic methodologies are valid, and further, identify any potential shortcomings of these methods so that we can construct even better tools. Being skeptical has its benefits.

For more information about Cory’s work, you can contact him at kohncory at msu dot edu.


BEACON Researchers at Work: Digital Macroevolution

This week’s BEACON Researchers at Work post is by University of Idaho faculty member Luke Harmon.

A day gecko, Phelsuma ornata, represents a particularly charismatic subject for the study of macroevolution.

I am a researcher who typically studies evolution over very long time scales – tens to hundreds of millions of years. For example, we’ve published papers analyzing trait evolution, speciation, and extinction in groups including cichlids, day geckos, and jawed vertebrates. As such, I’ve thought a lot about my relationship to BEACON’s theme of “Evolution in Action.” Certainly we can see “evolution in action” over any time scale – but many BEACON projects focus on evolutionary change that we can observe from one generation to the next. What my work over long time scales has taught me is that it is sometimes worthwhile to “zoom out” and investigate the long-term consequences of processes acting over extraordinarily long spans. One fantastic example of the value of such an approach is the Lenski lab’s long-term experimental lines of E. coli – a simple experiment that, sustained over twenty-five years, has given profound insights into evolution.

What would the BEACON logo look like if we evolved it for over a billion generations?

I believe that this “long-term” perspective applies equally well to all applications of evolution, both biological and digital. In this essay, I will explore what I view as some of the main lessons of evolutionary studies over long time scales. As computers become faster, digital evolution will undoubtedly become more akin to “macroevolution” – evolution across the entire history of the Earth – than to what might happen in a few generations as a population adapts to its environment.

First, some background: evolutionary biologists sometimes use the term “microevolution” to describe evolution within a population over a few generations. This is sometimes contrasted with “macroevolution,” which can mean evolution over very long time scales, or sometimes evolution “above the level of species.” There has been some debate about the relationship between these two levels. For example, most evolutionary biologists believe that macroevolution is simply microevolution “writ large” – that is, macroevolution is what happens when you scale microevolutionary processes over extremely long time scales. By contrast, some scientists believe that there are distinct macroevolutionary processes that cannot be described in terms of the common set of microevolutionary models. The best example of this way of thinking is Stephen Jay Gould, whose theory of punctuated equilibrium supposed that all sustained trait change occurred at the moment of speciation. At other times, according to Gould, lineages underwent evolutionary stasis – we might see them change a little from one generation to the next, but these changes were almost always transient and did not help explain broad-scale patterns that we see in the fossil record and across the tree of life. To Gould, a key property of punctuated equilibrium is that higher-level processes – that is, selection among units that are more inclusive than individual organisms – dominate long-term patterns.

Gould’s theories – and related ideas of species selection – are, as you might expect, controversial. It seems clear to me that punctuated equilibrium is not a general pattern – although the model is still considered by quantitative paleontologists and others. And species selection, though revitalized by new phylogenetic approaches, is still controversial. But Gould illustrates that there are some questions that can really only be addressed by studying evolution at the broadest scales. Furthermore, Gould’s observation that “stasis is data” is a key idea in the field.

I think that modern macroevolutionary studies can provide three relatively simple take-home points that are relevant when thinking about what might happen when we scale up our experiments, observations, or digital organisms over very long time scales:

1. Short-term rates do not usually scale up to longer time scales. One peculiar pattern that seems to be shared across evolutionary studies over a huge range of temporal and spatial scales is that the ‘apparent’ rate of evolutionary change depends strongly on the time scale over which those rates are measured. As first observed by Philip Gingerich over 30 years ago, we see the fastest rates of evolution over the shortest time scales. However, this rapid change does not translate into large amounts of change over longer time scales; much of the change we see over short time scales seems to be ephemeral. This may be due to rapid reversals in selection, so that the change we see in one generation is undone in the next.

2. Life is characterized by high turnover. On average, over the history of the earth, speciation and extinction are nearly in balance. Clades have periods of growth and collapse, and taxa wax and wane; but on average, speciation and extinction are – to a good approximation – equal in rate. Even more intriguingly, there is some evidence that speciation and extinction tend to change in lockstep with one another – that is, things that increase the speciation rate of a clade also tend to increase the rate of extinction; and things that prevent extinction might also slow down speciation rates (see Stanley’s interesting book, Macroevolution, for more details on this).

3. Novelty takes time. We do not yet know as much as one might hope about the evolution of novel traits – although new developments in genomics and developmental biology will likely lead the way to a deeper understanding of novelty in the future. Nonetheless, the branches of the tree of life are marked by a few key, but extremely rare, events: the origins of flight, endothermy, photosynthesis, and others. These and other rare characteristics sometimes provide the only evidence we have for deep short branches in the tree of life.
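The first point above can be illustrated with a toy simulation (all parameters invented): a trait whose frequent changes tend to reverse shows a much larger apparent per-generation rate when measured over one generation than when measured over a thousand.

```python
import random

rng = random.Random(0)

# A trait fluctuating around an optimum: selection frequently reverses,
# so per-generation change rarely accumulates into long-term change.
z, trajectory = 0.0, [0.0]
for _ in range(10_000):
    z += rng.gauss(0, 1.0) - 0.1 * z   # random change plus pull toward 0
    trajectory.append(z)

def apparent_rate(traj, interval):
    """Mean |change| per generation when sampled every `interval` gens."""
    diffs = [abs(traj[i + interval] - traj[i])
             for i in range(0, len(traj) - interval, interval)]
    return sum(diffs) / len(diffs) / interval

short = apparent_rate(trajectory, 1)
long_ = apparent_rate(trajectory, 1000)
print(short, long_)   # the short-interval rate is far larger
```
The per-generation steps are large, but because they keep undoing each other, almost none of that change survives when the same trajectory is sampled at thousand-generation intervals – the Gingerich pattern in miniature.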

I’m not sure whether these ideas are helpful to those of you who study “evolution in action.” I don’t think it would be hard to argue that the temporal “scale” of digital evolution is constantly being compressed, and that such experiments really live in the world of “macro” – rather than “micro” – evolution.

For more information about Luke’s work, you can contact him at lukeh at uidaho dot edu.


BEACON Researchers at Work: Evolving Genome Libraries

This week’s BEACON Researchers at Work post is by University of Texas at Austin graduate student Peter Enyeart. 

Peter Enyeart, June 2011. Photo by Mario Gallucci, http://www.galluccidesign.com/

I love bacteria. That may seem like a strange thing to say, but I really do. When most people think of bacteria, they think of lurking dangers waiting to make us sick. But in reality only a very small fraction of bacteria cause disease, and more than a few do a great deal to help us. In fact, there are more cells of bacteria in your body than there are cells of you. I’ve always loved the idea of discovering more about this unseen world that permeates us.

Bacteria can also do all sorts of amazing things. They can produce electricity and clean up heavy-metal waste (including uranium), survive radiation thousands of times stronger than a human could withstand, build their own magnets to use as compasses, eat oil, and make fuels, to name just a few examples. We’re actually in the midst of something of a golden age in expanding our knowledge of just what is out there in the microbial world. Bacteria never cease to amaze me with the inventive things they can do and the difficult environments they can survive in.

Additionally, as a biologist trying to understand and control the molecular processes of living things, I like bacteria because they represent a sweet spot in complexity. You don’t go into a field like this if you’re not interested in figuring out complicated systems, but some systems are more complicated than others. For instance, E. coli, the most commonly studied bacterium, has a genome size of about 4.6 million base-pairs of DNA, containing about 4000 genes, all on one chromosome. That’s pretty complicated. But compare that to the human genome, which has two versions of 23 different chromosomes, the smallest of which is ten times bigger than the E. coli genome, for a total of 3 billion base pairs of DNA and about 20,000 genes (which can be spliced in many different ways) per set of 23. That’s starting to get crazy. The cellular structure of eukaryotes (which include us) is also much more complicated than bacteria; in fact, some of the structures in eukaryotic cells seem to have originally been bacteria. It’s like cells within cells in there.
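A quick back-of-envelope with the round numbers quoted above makes the scale gap concrete (the gene-density comparison is my own extrapolation from those figures):

```python
# Approximate figures from the paragraph above.
ecoli_bp, ecoli_genes = 4.6e6, 4_000
human_bp, human_genes = 3.0e9, 20_000

fold_more_dna = human_bp / ecoli_bp                     # ~650x more DNA
dna_per_gene_ratio = (human_bp / human_genes) / (ecoli_bp / ecoli_genes)

print(round(fold_more_dna), round(dna_per_gene_ratio))
```
The second number shows that the human genome carries over a hundred times more DNA per gene, a rough reflection of how much extra regulatory and non-coding complexity eukaryotes pack around their genes.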

So I like bacteria because they’re complicated enough to be interesting, but simple enough that we can understand them much better and reprogram them much more easily than we can our own cells. For example, my research focuses on bacterial genome engineering. I want to rewrite the bacterial code to get them to do new things, and hopefully gain a better understanding of how they work in the process. One of my projects involves making libraries of bacteria with different genomic rearrangements, and then competing them against each other to see which rearrangements are most beneficial. Evolution is a powerful engineering tool. (See here for a video of me explaining another of my projects to middle school students.)

The motivations for this come from both basic and applied science. One interesting thing about bacterial genomes is that, while they vary a great deal in gene content, their overall structure tends to be very similar. This is in contrast to eukaryotes like fungi, plants, and animals, which have all kinds of different genome arrangements but relatively little variation in the types of genes found in them. So the question arises: has the structure of the bacterial genome evolved to the “best” possible structure? Or is it just one of many possibilities that might work as well or better, but are simply difficult to reach from the current state? Most efforts to address this question have looked at only a small number of rearrangements, but there are many more possibilities to examine before we can call the case closed. And if in the process of answering this question we learn how to adapt genomes for specific purposes, all the better.

Figure 1. Making genomic libraries. Lox sites are represented by arrows. IRs (inverted repeats) are DNA sequences that mark off a transposon (which requires a “transposase” to move around). The marker is an antibiotic resistance gene that allows us to kill any cells that don’t insert the transposon into their genomes. When the Cre protein comes in, it deletes the marker between the lox sites and causes a rearrangement with a lox site elsewhere in the genome (represented by grey color-coded DNA being replaced by black color-coded DNA on one side).

So how do we look at a huge number of rearrangements at once? We need two components: a DNA element that allows rearrangements to be made, and a mechanism for randomly placing those elements throughout the genome. For the former, we use a 34-base sequence of DNA called a lox site. A protein called Cre brings lox sites together and recombines them, which results in the DNA between the lox sites being either deleted or inverted. A chunk of DNA deleted in this way can also reinsert at the same or another lox site, allowing for cut-and-paste operations. To place the lox sites, we put them in transposons, sometimes called “jumping genes,” which are pieces of DNA that randomly insert themselves into other pieces of DNA (like genomes). See Figure 1 for a visual depiction of how this works.

Using this method we will be able to build a library that represents all the possible rearrangements between all the genes in the E. coli genome. We can then compete them, sequence their genomes, and program computers to tell us what came out the other end.  (This points up one of the advantages of working in bacteria; obtaining and analyzing a similarly complete set of rearrangements for the human genome would be extremely difficult.) Figure 2 shows some actual data from an initial experiment on a library of about 40,000 different genome rearrangements. So far it looks like the cells do like the original structure, but I’m excited to see if we find anything the next time we do this on a library of at least one million genomes.  Stay tuned!

Figure 2

Figure 2. Visualized data for genomic rearrangements. The data on the left were obtained several hours after introducing Cre into cells containing three lox sites per genome, and the data on the right were obtained 100 generations later. The outer blue ring indicates the frequency of unrecombined transposon insertions, the red ring represents how frequently we see that piece of the genome deleted, the brown ring represents the different structural domains of the genome (with “oriC” and “dif” being the sites of initiation and termination of replication), and the grey lines in the center represent the inversions we saw, with brighter lines representing more frequent inversions.

For more information about Peter’s work, you can contact him at peter dot enyeart at utexas dot edu.
