BEACON Researchers at Work: Hemichordate Global Biodiversity and Evolution

This week’s BEACON Researchers at Work blog post is by University of Washington postdoc Charlotte Konikoff.

Research in the Swalla lab broadly focuses on elucidating chordate origins and evolution. If you are reading this, you are a chordate. More specifically, you are a vertebrate, and a member of the species Homo sapiens. Like other chordates, when you were an embryo you had a dorsal (back) notochord, a supportive rod that is replaced by cartilage or bone in most adult vertebrates. You also had a dorsal nerve cord, which was later modified into your brain and spinal cord. As an embryo, you also had pharyngeal slits and a post-anal tail, the latter of which is now present as the coccyx. These hallmark characteristics (notochord, dorsal nerve cord, gill slits and post-anal tail) are present in all chordates at some point during their life history.

As a chordate, you also belong to a broader category of organisms – deuterostomes (from Greek, meaning “second mouth”). During early stages of your embryonic development, an opening called the blastopore formed – this is the first opening to the primitive digestive tract. In protostomes this opening becomes the mouth, while in deuterostomes it becomes the anus. Another opening develops later to complete the primitive digestive tract (becoming either the anus or mouth). Tunicates, sea urchins and lancelets are also deuterostomes, and we all shared a common ancestor millions of years ago.

My research focuses on hemichordates. This phylum occupies a very special place on the tree of life, and has been rediscovered in the last 15 years due to its immense potential to inform us about the origins and evolution of chordates and deuterostomes.

Hemichordates are marine invertebrates that span a broad range of depths and habitats. Hemichordata (Bateson, 1885) comes from the Greek prefix hemi (“half”) and the Latin chorda (“cord”). The oldest available description of a hemichordate dates back to 1825, when Eschscholtz first described the acorn worm Ptychodera flava. Hemichordates were originally classified within other groups, including the chordates (because their gill slits look like chordate gill slits), but were later placed in their own phylum. As the name suggests, hemichordates share some but not all of the hallmark characteristics of chordates. They don’t have a notochord, but they do have a structure called the stomochord – a flexible tube that somewhat resembles the notochord. They also have gill slits. They are likely the closest extant relatives to the deuterostome ancestor.

Two extant classes of hemichordates exist – enteropneusts and pterobranchs. Enteropneusts are free-living acorn worms, while pterobranchs are colonial organisms. From anterior (head) to posterior (tail), their adult body plan is composed of the proboscis, collar and trunk. Their mouth is located in between their proboscis and collar regions. They also have a heart-kidney complex that is located in the proboscis region. Pterobranchs also have a stalk region, which connects the individuals in the colony. Below is a diagram showing the hemichordate body plan, as well as pictures of hemichordate species found throughout the world.

Figure 2 image source: Rychel, A.L. and Swalla, B.J. (2009) Stem cells and regeneration in hemichordates. In: Marine Stem Cells, Valeria Matranga and Baruch Rinkevich, ed. Springer Publishing. pp. 245-265.

Phylum Hemichordata has been reported to contain only about 100 species, but recent studies of taxonomy and phylogeny suggest that the species number has been hugely underestimated. One problem is that species must be described by experts, and historically few taxonomists have studied this group of marine invertebrates. In our recent work, we provided an overview of our current knowledge of hemichordates, with a special focus on global biodiversity, geographic distribution and taxonomy. Of the 120 living species currently documented, we found the majority (80%) are enteropneusts, with more species descriptions forthcoming. Hemichordates are found throughout the world’s oceans, but the reported number is highest in the temperate north Pacific and Atlantic. Interestingly, these areas also have many marine labs. Many species have also been found in the western Indian Ocean. As new marine habitats are characterized and explored, we anticipate new hemichordate species will continue to be discovered and characterized.

Further, gene expression studies in hemichordates can provide clues about the developmental origins of the gill slits and the chordate nervous system. Some hemichordates (such as Ptychodera flava) also have the remarkable ability to regenerate: if cut in half, they can reproducibly regenerate both their anterior and posterior regions.

I’m particularly interested in genes involved in signal transduction, and I want to determine which genes and signaling pathways are crucial for regeneration in Ptychodera flava. Where are they expressed? What are their functions? During development, signal transduction pathways allow cells to communicate with each other and orchestrate proper patterning in the embryo. These developmental signaling pathways also appear to play roles in regeneration in other organisms.

For more information about Charlotte’s work, you can contact her at ckonikof at u dot washington dot edu.


BEACON Researchers at Work: Bidding Strategy in Learning Classifier Systems Using Loan and Niching GA

This week’s BEACON Researchers at Work post is by North Carolina A&T State University graduate student Abrham Workineh.

Nature has given some degree of inherent intelligence to living things. One definition of intelligence is the ability to learn from experience, adapt to new situations and make proper decisions. Intuitively, a machine is said to be learning whenever it responds to its external environment or an input by adjusting its state to improve its future performance. Humans and other living things learn heuristics through experience and use them to solve various problems. But machines have no intuition and hence fail to understand commonsense knowledge. For instance, animals are much better than robots or computers when it comes to recognition and vision. It may be easy for humans to remember a person they saw only once many years ago, but computers and robots, though extremely fast at computation, need thousands of hours of intensive training before they can recognize the face of a single person. In fact, several manifestations of this natural intelligence have contributed to the advancement of technology, especially in machine learning and robotics. To mention a few: task coordination and collaboration in honey bee colonies, flocking in birds, and food foraging in ant colonies are some of the sources of intelligence that have been deeply investigated and adapted to machines. The essence of machine learning is to introduce natural intelligence to machines like robots by designing computer algorithms that evolve through experience to adapt to a new environment. It enables machines to plan, learn from experience and integrate these two aspects with the rest of their behavior. Beyond its engineering relevance, however, the achievement of learning in machines also has biological significance, as it helps us understand how animals and humans learn.

When I think of my inclination toward this area, I always go back in memory to those old times. Having grown up in a remote area, I did not have even an iota of information about modern technology in my early childhood. I had only nature to ponder and appreciate. My day-to-day encounters included seeing a colony of ants crawling along a straight path, birds coming back to their nest after traveling away for miles, birds forming a “V-shape” while flocking… these were some of my fascinations with nature, and somehow they shaped my research inclination towards machine learning. My undergrad senior design project was to develop a fingerprint recognition module using a back-propagation neural network. I started exploring Learning Classifier Systems (LCS) and neural networks during my MS study. Currently, I am researching how to improve the convergence rate of LCS using a modified bidding strategy and niching genetic algorithms.

An LCS is a machine learning system based on reinforcement learning and genetic algorithms (GAs). It is one type of evolutionary algorithm that utilizes a knowledge base of syntactically simple production rules which can be manipulated by a GA. The LCS uses the GA as a search engine to discover new rules among a population of candidate rules based on the experience of existing rules. However, the GA used in an LCS is different from a standalone GA. Consider, for instance, the problem of function optimization. In a standalone GA, the intention is to find parameter settings that correspond to the extremum of the fitness function. There is no notion of environmental state, and hence the GA structure lacks the condition part of a classifier; instead, it only manipulates a set of parameters corresponding to the action part. Thus, in standalone form, a GA is a function optimizer that seeks points of maximum fitness in the search space, whereas in an LCS it can be seen as a function approximator. The reinforcement learning component determines rule fitness and enables the system to learn from its environment based on a reward signal that reflects the quality of its actions.

In a strength-based LCS, auctioning among the classifiers that match an environmental message is used to identify winning classifiers. All classifiers participating in an auction issue a bid proportional to their strength, and the winning classifier is allowed to fire and receive a reward or punishment from its environment as a consequence of its action. Under this bidding strategy, good classifiers with low strength and little experience have to wait until the strength of poorly performing classifiers comes down through continuous taxation. This slows the rate of convergence to optimal solution sets. In addition, offspring classifiers that come from weak parents, as a result of randomness in the selection process, may inherit little strength compared with experienced classifiers in the population. A point mutation may nevertheless make them match more environmental inputs, but because of their low initial strength they have to wait some time before they mature and can try their action. To mitigate these shortcomings of the bidding strategy in traditional LCS, loaning and bid history are introduced. In direct analogy with real auctions, all classifiers matching the current input compare the average bid history with the potential bid they could issue based on their current strength. The average bid history gives general information about the bid market (the potential of competent classifiers) and determines the amount of loan a classifier should request. The following figure gives a bigger picture of the implemented centralized loaning system.
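For concreteness, here is a minimal Python sketch of a strength-based classifier and the strength-proportional auction described above. The ternary condition alphabet, the bid coefficient and the names are illustrative assumptions, not the exact implementation used in this work.

```python
class Classifier:
    """A strength-based LCS rule: a ternary condition, an action, and a strength."""
    def __init__(self, condition, action, strength=10.0):
        self.condition = condition  # e.g. '1#0', where '#' matches either bit
        self.action = action
        self.strength = strength

    def matches(self, message):
        """True if the condition matches a binary environmental message such as '110'."""
        return all(c == '#' or c == m for c, m in zip(self.condition, message))

    def bid(self, bid_coefficient=0.1):
        """Classifiers bid a fixed proportion of their current strength."""
        return bid_coefficient * self.strength

def auction(population, message):
    """Form the match set and return the highest-bidding classifier (the winner)."""
    match_set = [cl for cl in population if cl.matches(message)]
    return max(match_set, key=lambda cl: cl.bid()) if match_set else None
```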

Learning in an LCS is an ongoing adaptation to a partially known environment, not an optimization problem as in most reinforcement learning systems. The learning system consists of six components: the auction, the clearing house (CH), the reinforcement program (RP), the genetic algorithm (GA), the reservoir and the bank.

The auction is the part of the learning system where all classifiers in the match set bid a fixed proportion of their strength to win. Here, we have introduced a modified bidding strategy into the existing LCS implementation in which classifiers in the match set can request a loan from a bank, subject to loan grant criteria. Once granted, the loan is added to their strength value and used during auctions to issue a bid. This improves a classifier’s chance of winning an auction and trying its action, which may in turn result in receiving a reward earlier. In other words, a classifier gets a chance to act without waiting until bad classifiers weaken under continuous taxation. The CH is where all classifiers clear their taxes. The GA discovers new rules among a population of candidate rules based on the experience of existing rules; each GA operation adds two new classifiers to the existing population and diversifies it by adding some degree of randomness to the choice of parents for reproduction. The RP determines a rule’s fitness by generating a signal in the form of a reward or punishment, and it ultimately guides the search toward optimal solutions.
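Building on the sketch above, one cycle of the auction, clearing house and reinforcement program could look roughly like the following. The tax rate and reward handling are simplified placeholders rather than the values used in our system.

```python
def learning_step(population, message, reinforcement_program,
                  bid_coefficient=0.1, tax_rate=0.01):
    """One simplified cycle: auction, clearing-house payment, reward, and taxation."""
    winner = auction(population, message)
    if winner is None:
        return
    winner.strength -= winner.bid(bid_coefficient)       # clearing house: the winner pays its bid
    reward = reinforcement_program(message, winner.action)
    winner.strength += reward                             # reinforcement program: reward or punishment
    for cl in population:                                 # continuous taxation slowly erodes the
        if cl is not winner:                              # strength of idle or poor classifiers
            cl.strength -= tax_rate * cl.strength
```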

The bank is the loaning agent in the system, analogous to a lender in real-life auctions. We have introduced loan and bid history concepts into the traditional LCS implementation. Classifiers in the match set decide whether to take a loan or proceed on their own. The bid history is a global variable that keeps track of the average bid in previous auctions; it serves as a benchmark for classifiers to decide how large a loan to take, or whether to bid without one. The requested loan amount is granted when the loan criteria are satisfied. Finally, the reservoir plays a dual role: it prevents bankruptcy when a classifier’s debt exceeds its strength, and it subsidizes new classifiers coming from the GA.
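Below is a rough sketch of how a bank and a global bid history might interact with a low-strength classifier. The running-average update and the loan criterion (grant just enough for the bid to reach the market average, if the reserve allows) are illustrative assumptions, not the actual grant criteria.

```python
class Bank:
    """Loaning agent: lets promising low-strength classifiers bid competitively."""
    def __init__(self, reserve=1000.0):
        self.reserve = reserve
        self.average_bid_history = 0.0   # benchmark: average bid in previous auctions

    def record_bid(self, winning_bid, momentum=0.9):
        """Track a running average of winning bids as the going 'market rate'."""
        self.average_bid_history = (momentum * self.average_bid_history
                                    + (1 - momentum) * winning_bid)

    def request_loan(self, classifier, bid_coefficient=0.1):
        """Grant just enough strength for the classifier's bid to reach the average bid."""
        target_strength = self.average_bid_history / bid_coefficient
        loan = target_strength - classifier.strength
        if loan > 0 and self.reserve >= loan:
            self.reserve -= loan
            classifier.strength += loan    # the loan is added to strength before bidding
            classifier.debt = getattr(classifier, 'debt', 0.0) + loan
            return loan
        return 0.0
```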

To further expedite convergence to the optimal solution set, a more compact and robust system representation using decentralized loaning has also been implemented. In this scenario, loans are exchanged among classifiers in the population, removing the reservoir and bank components of the centralized approach. Preliminary results demonstrate the effectiveness of the decentralized loaning technique. Investigating niching GAs and the feasibility of integrating them with decentralized loaning in an LCS is also part of our ongoing research under BEACON’s support.

For more information about Abrham’s work, you can contact him at atworkin at ncat dot edu.


Final 2011 Congress Schedule

See the final version of the agenda for the 2011 BEACON congress:

PDF File


BEACON Researchers at Work: Microbial communities, huh, yeah! What are they good for?

This week’s BEACON Researchers at Work blog post is by University of Idaho postdoc Mitch Day.

Many labs in BEACON and beyond study microbial communities. There are many ways to approach the problem, but the first is always deciding what fundamental questions you will focus on. Without trying to sound too glib, the main question I focus on is “What is a community?” You might see this as a simplistic question because the obvious dictionary.com answer is “a collection of species found living together in the same locale.”

When we zoom in to the microbial scale, this definition falls apart because the everyday concept of a species doesn’t apply down there. We commonly think of a species as organisms that can reproduce together sexually. Bacteria don’t have sex, but they are promiscuous. They often exchange small parts of their genomes with each other, sometimes between very unrelated organisms. Microbiologists use the small differences in a single gene needed by all living things (the 16S ribosomal RNA gene, or “16S”) as an imperfect proxy for species. It is easy these days to extract DNA from a microbial community, such as a soil sample, and sequence all the different versions of the 16S gene. We can then sort the different versions of 16S we find into bins based on their similarity to each other. Daniel Beck, my colleague in the lab headed by James Foster, has just published a Bioconductor package called OTUbase that makes this kind of data analysis much easier.
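For readers who have not worked with OTUs, here is a toy Python sketch of the greedy, similarity-threshold binning idea. The 97% cutoff is the conventional choice; real pipelines use proper sequence alignment, and OTUbase itself is an R/Bioconductor package, so this is only an illustration of the concept, not how the package works.

```python
def identity(seq_a, seq_b):
    """Fraction of matching positions; real pipelines align sequences properly first."""
    length = min(len(seq_a), len(seq_b))
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / length if length else 0.0

def bin_into_otus(sequences, threshold=0.97):
    """Greedy clustering: a sequence joins the first OTU whose seed it matches
    at or above the threshold; otherwise it seeds a new OTU."""
    otus = []  # each OTU holds a seed sequence and its member sequences
    for seq in sequences:
        for otu in otus:
            if identity(seq, otu['seed']) >= threshold:
                otu['members'].append(seq)
                break
        else:
            otus.append({'seed': seq, 'members': [seq]})
    return otus
```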

This approach has been very useful but it is misleading to think of these operational taxonomic units (OTU) as being the same as species we describe in vertebrate animals. Why? The main reason is the recent discovery of the pangenome. In every bacterial “species” studied so far, environmental isolates that have the exact same 16S sequence can actually have very different genomes. There is a core of genes that is common among all isolates, but each isolate also has a set of genes that are unique to that isolate. Since only a few genes are needed to turn a harmless or helpful bacterium into a deadly pathogen, it shouldn’t be hard to convince you that there are serious limitations to studying microbial communities using only the 16S approach.

So how would we answer the question we asked at the beginning? For us, a microbial community is a single discrete unit that produces an effect on the environment. You can even call us “species agnostics” because we are exploring ways to study communities that don’t depend on the concept of species at all. How do you do this? One approach is to treat a 16S data set as if it were a collection of genes in a single organism. In genetics, epistasis is the term used to describe how the effect of one gene depends on its interactions with other genes.

Daniel Beck is exploring different ways to detect epistasis among different OTU. He’s even throwing evolution at the problem. He is using a large data set gathered from the microbial communities found in patients suffering from bacterial vaginosis (BV) and from those without BV. This data includes the OTU composition of each individual community and some information about the health and behavior of the host human. Daniel will test classical analyses and new evolutionary algorithms to see which is better at predicting whether a given patient has BV from just the 16S composition.
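As a sketch of what this prediction task looks like in code, here is one “classical” baseline on entirely made-up data: logistic regression on OTU relative abundances, scored by cross-validated accuracy. This is not Daniel’s analysis or data; it only shows the shape of the problem that an evolutionary algorithm would be benchmarked against.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: rows are patient communities, columns are relative
# abundances of 50 OTUs; labels mark BV-positive (1) or BV-negative (0).
rng = np.random.default_rng(0)
otu_table = rng.dirichlet(np.ones(50), size=200)
labels = rng.integers(0, 2, size=200)

# Classical baseline: cross-validated logistic regression on OTU composition.
scores = cross_val_score(LogisticRegression(max_iter=1000), otu_table, labels, cv=5)
print(f'Cross-validated accuracy: {scores.mean():.2f}')
```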

Our “agnosticism” can be taken even further. My own research treats microbial communities as experimental units. What does that mean? My main experimental approach right now is determining the impact of artificial ecosystem selection on microbial communities. Artificial ecosystem selection is simply an extension of the concept of artificial group selection to include groups composed of a vast number of different populations. Even though biologists still debate the role of group selection in nature, it most definitely works in the lab.

I am performing experiments on a number of different communities to answer a few different questions that follow from the idea of a community as a discrete unit. The first is obviously “Do microbial communities respond to artificial selection in the lab?” Some early results show that they can. According to the essential definition of evolution, they certainly should be able to. Of course, we are scientists so we will do the experiments and find out for sure.

We have finished an artificial ecosystem selection experiment using a natural consortium of bacteria and yeasts as our community. This community is called the Ginger-beer “plant” because it has been used to make fizzy, low-alcohol beverages historically. It takes the form of durable gelatinous particles that float around in the sugary medium. These granules vary in size up to about the size of a small grape. The traditional way of creating the beverage uses a kitchen sieve to catch the particles and separate them from the foamy drink. In essence, the Ginger-beer “plant” has already been artificially selected by humans for a long time. It has been selected for particles that are big enough to catch in the sieve. The losers go down the drain to their separate fates.

We extended this long history of selection by using a set of sieves of increasingly larger mesh-size instead of a homely kitchen strainer. Bryanna Larraea performed most of the hard work in this experiment. She grew 15 jars of Ginger-beer “plant” for 9 weeks. Every week she would sieve each jar separately, weigh out the particles of each size-class and then select larger or smaller particles to found the next “generation.” Our experimental design has introduced some tricky statistical questions, but fortunately, we are collaborating with a statistician, Wade Copeland, who actually enjoys hanging out around biologists. Wade has already made a huge contribution to the study.
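To show just the logic of size-based selection (and nothing about our actual data), here is a toy simulation. The growth rule, particle counts and the number of survivors per week are all invented for illustration.

```python
import random

def grow(particle_size):
    """Hypothetical weekly growth of one particle, with noise."""
    return max(0.1, particle_size * random.uniform(0.8, 1.4))

def select_generation(particles, select_large=True, survivors=30):
    """Sieve analogy: keep the largest (or smallest) particles to found next week's jar."""
    return sorted(particles, reverse=select_large)[:survivors]

population = [random.uniform(1.0, 5.0) for _ in range(90)]  # particle sizes in mm
for week in range(9):
    population = [grow(p) for p in population]
    population = select_generation(population, select_large=True)
    population = population * 3   # survivors repopulate the jar
    print(f'Week {week + 1}: mean particle size {sum(population) / len(population):.2f} mm')
```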

The rationale behind this experiment is the same as any artificial selection experiment. Artificial selection for animal husbandry formed the basis of modern genetics that came long before we knew anything much about DNA. The patterns of inheritance gave strong clues as to the genetic basis of physical traits. If we can show a response to selection on a community trait, we can analyze our archived samples to find patterns in either the 16S gene variations or in the metagenomic sequences. We think our ‘top-down’ study of natural, captive microbial communities is a necessary complement to other lab approaches that build artificial communities in the lab from the bottom up.

For more information on Mitch’s work, you can contact him at mday at uidaho dot edu.


BEACON Researchers at Work: Engineering solutions inspired by fish schooling

This week’s BEACON Researchers at Work blog post is by NC A&T graduate student Patrick Wanko.

Consider today’s car, with its extensive sensors, diagnostics, processing and data storage, and communication capabilities. A far cry from the largely mechanical vehicles of 25 years ago, such vehicles have functionality and maintenance regimes heavily inspired by biological organisms. This functionality is being extended beyond the vehicle level to subassemblies and parts – particularly high-value parts for users such as the Department of Defense. These users anticipate using AIT (Automated Identification Technology) with emerging capabilities in data storage, processing, sensor integration, communication, and even energy scavenging. As a result, life cycle management of such parts might be inspired by the life cycle processes of biological and sociological systems.

Generally, life cycle management for a durable product or part consists of selecting design and operational parameters so as to maximize multiple performance-related objectives (functionality, availability and cost). Design parameters are set once at the creation of the product and its subsequent versions; operating parameters may be changed at any point in the life of the part. Both minimizing parameter variability and optimizing life cycle performance in given environments are desired. Life cycle performance is driven by design, operating, and environmental parameters, with control over only the first two. As a result, the solution domain is highly complex and nonlinear, suggesting metaheuristic search techniques. Optimizing these parameters often involves forming groups based primarily on operating environment, and life cycle improvement can be made in a decentralized manner by coordinating within and across such groups.

As a result, we are looking at ways to modify evolutionary metaheuristic techniques to help improve life cycle management. This research might inform parameter selection approaches for physical systems or assist with virtual life cycle optimization when a performance simulator is available. Two initial concepts that have been developed are modifications to traditional evolutionary metasearch inspired by fish schooling and social networking. In both cases, the intent is not to mimic precisely such behavior. Rather, the intent is to incorporate relevant dynamics into the evolutionary algorithm to provide faster, better solutions for the life cycle management problem – in terms of real-time adjustment of operating parameters and intergenerational adjustment of design parameters.  

My research is about what I call the Schooling Genetic Algorithm, or SGA. It is an enhanced evolutionary algorithm in which the population, divided into tightly formed schools, dynamically searches for and eats food within the sea (the solution domain) while escaping predators; individual births and deaths give the appearance of school movement.

Photo from http://photography.nationalgeographic.com/photography/photos/schools-fish/

The SGA can be started with individual fish with a bias towards forming schools or pre-formed schools. Once schools are formed, algorithmic parameters (crossover/mutation operators and numbers, selection processes) and to some extent algorithmic processes are influenced by the state of the school. For example, when an attractive location is reached (feeding zone), the emphasis on coordinated movement is lessened and on individual search for good locations is increased. When the food in that area is depleted, the fish need to move to a new location to feed – we can think of the depleted area as having a “penalty function” after a period of feeding that will drive the fish away. When an unattractive area is reached (one with low food availability or high risk of predation), the school shifts to predator avoidance mode which involves faster overall group velocity. Both states might result in bifurcation or convergence of the schools. I would be glad to provide those who are interested with the details about how we implement these concepts.
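As a flavor of how the state of a school might modulate the underlying GA, here is a minimal sketch. The thresholds and operator rates are purely illustrative, not the values used in the SGA.

```python
def school_state(mean_fitness, feed_threshold=0.7, flee_threshold=0.3):
    """Classify the school's local region of the solution domain (illustrative thresholds)."""
    if mean_fitness >= feed_threshold:
        return 'feeding'
    if mean_fitness <= flee_threshold:
        return 'predator_avoidance'
    return 'cruising'

def tune_operators(state):
    """Adapt GA parameters to the school's state: individual search while feeding,
    fast coordinated movement (large mutation steps) while fleeing."""
    if state == 'feeding':
        return {'mutation_rate': 0.15, 'crossover_rate': 0.4}   # relax coordination, search locally
    if state == 'predator_avoidance':
        return {'mutation_rate': 0.40, 'crossover_rate': 0.9}   # move the whole school quickly
    return {'mutation_rate': 0.05, 'crossover_rate': 0.7}

# Example: a school averaging 0.8 fitness sits in a feeding zone, so coordination
# is relaxed in favor of individual exploration.
print(tune_operators(school_state(0.8)))
```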

Initial coding of the system is complete, with the dynamic behaviors validated. Now we are looking at solving sample product life cycle problems and comparing the results with more traditional approaches. As the work on the SGA progresses, many questions come to mind, such as: what is the ideal size of a school with respect to foraging? How can predator avoidance be performed better? Hmm… all fascinating questions – but these are topics for another BEACON post.

For more information about Patrick’s work, please contact him at patrick dot wanko at gmail dot com.

 


BEACON's Titus Brown on "Coding your way out of a problem"

In the current issue of Nature Methods, “Coding your way out of a problem” by Jeffrey M. Perkel features advice for biologists from BEACON MSU assistant professor C. Titus Brown under “Advice from the Pros.” Some highlights:

  • Do not be afraid. A computer is just a tool, so do not be afraid of it, says C. Titus Brown, a Michigan State University researcher who teaches programming for biologists.
  • Treat computation like an experiment. “Don’t trust what the computer spits out just because the computer spits it out,” says Brown. Instead, run a search that should yield a negative answer or vary parameters to see how the answer changes when it should not. “Do you get the same answer? If you do, that means it’s more robust.”
  • Do not reinvent the wheel. Quite a bit of programming, says Brown, is “rote, cookbook-style stuff.” There is plenty of free code available, just copy and paste; the trick is tweaking it. “One of the most important skills to learn is how to just take the plethora of open-source solutions out there and apply them to your own problem.”

Read the whole article!


NPR Science Friday: Emily Jane McTavish talks about the evolutionary history of Texas longhorns

BEACON University of Texas at Austin graduate student Emily Jane McTavish was interviewed on NPR’s Science Friday. Listen here!


BEACON Researchers at Work: Developing artificial intelligence

This week’s BEACON Researchers at Work blog post is by MSU postdoc Arend Hintze.

When I am asked what I do, I normally smile apologetically and say something like “Theoretical Biology” or “Computational Biology,” and with a wink of my eye “like biological evolution … only in the computer” followed by a hand waving gesture that looks like me typing. At least that is my way of coping with the problem of explaining what it is that gets me excited, and most people slowly nod their head and respond with an encouraging voice: “In the computer, I see!” and in most cases we change the topic.

But the question remains: What do I do? And even though this question is absolutely clear to me, and I never ask this myself, it remains hard to communicate. So let’s give it a shot.

I am fascinated by artificial intelligence and consciousness, and by the very idea that we could build a computer that thinks! So much so that I imagine myself having a dialog with the computer, chatting about the meaning of life, and making a lame joke about how the very idea of consciousness is apparently only an illusion created by a very complex machine.

When I was a kid I had one of the first home computers, and immediately tried to code an AI. Over many years I made many different attempts at that, only to learn at last that programming such a computer brain is impossible, and will most likely remain impossible to do for any human. And at the same time I see intelligent beings around me, and I know that they came about without help, and without a master plan, only by evolution. Kind of disappointed about computer science, I started to study Biology, apparently the only science dealing with evolution, the one driving force that ever achieved what I was looking for. I got my PhD in genetics and developmental biology studying the complex system by which a nematode egg becomes a tiny worm… only to decide afterwards that evolution can, in my opinion, not be sufficiently studied in real life, but only in computer simulations (sorry, Rich, what you are doing is great!). So I turned to artificial life and artificial evolution, where I developed the “artificial cell model”: a computer simulation that is capable of evolving the metabolism of cells, building complex molecules from simple precursors using a metabolic network of interacting catalytic enzymes. One of the main concepts that I developed was how to translate a genome into a complex network, and how to analyze these networks.

At the same time I was intrigued by how brains make decisions and what might influence their outcomes. So I also worked on evolutionary game theory, where I let virtual agents play simple games such as the well-known Prisoner’s Dilemma. The key is to use a set of probabilities that determine a player’s strategy and let these probabilities evolve. I learned a lot about evolutionary principles, population dynamics, and cooperation. But the key concept here is that if agents have information about each other, for example through communication, they can evolve strategies that are better in the long run than those that strive for short-term benefits. I have to point out that even though you might have read something about evolutionary game theory, most people actually screwed up the evolutionary part of it. The take-home message here is: if you read “evolution,” make sure it means inheritance, variation, and natural selection. Everything else is just not it! I had a very disappointing month reading my way through 30 years of “evolutionary” game theory literature.
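To make the evolutionary part concrete, here is a minimal sketch of inheritance, variation and natural selection acting on a single cooperation probability in the Prisoner’s Dilemma. The payoff values are the standard ones; everything else is an illustrative assumption. Note that in this memoryless baseline, where agents have no information about each other, defection takes over, which is exactly why memory and communication matter.

```python
import random

# Standard Prisoner's Dilemma payoffs for the row player: T > R > P > S.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(p_coop_a, p_coop_b, rounds=50):
    """Total payoff to player A when both play fixed mixed strategies."""
    total = 0
    for _ in range(rounds):
        a = 'C' if random.random() < p_coop_a else 'D'
        b = 'C' if random.random() < p_coop_b else 'D'
        total += PAYOFF[(a, b)]
    return total

def evolve(pop_size=100, generations=200, mutation_sd=0.05):
    """Inheritance, variation and selection on the probability of cooperating."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [play(p, random.choice(population)) for p in population]  # payoff vs. random partner
        parents = random.choices(population, weights=fitness, k=pop_size)   # fitness-proportional selection
        population = [min(1.0, max(0.0, p + random.gauss(0, mutation_sd)))  # mutation, clipped to [0, 1]
                      for p in parents]
    return sum(population) / pop_size

print(f'Mean cooperation probability after evolution: {evolve():.2f}')
```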

Anyway, here is an interesting question: “Why are there no robotic doctors, self-driving cars, or computers that write books yet?” After all, we are more than 40 years into micro-electronics, and the founding fathers of computer science, like Alan Turing, were already interested in intelligence. Apparently we are doing something wrong. The two major approaches in AI either build an insanely large database on top of a complex algorithm that “knows” how to deal with it (IBM’s Watson comes to mind), or use some form of neural network that is great at discriminating between different situations but has to be taught from the outside to solve one and only one task.

So we are currently developing NEUERA, an evolvable system that uses lots and lots of small probabilistic logic gates – like a computer, except that it uses probabilities instead of ones and zeros. These brains are designed to be evolvable and, at the same time, are known to be able to solve any computational task. This system fundamentally shifted my work from designing an AI to building worlds that are conducive to evolving intelligent behavior. We have been quite successful with this project – we have robots that navigate something like a maze, robots that learn the difference between large and small objects in order to catch the small ones and avoid the large ones, and robots that cooperate in pushing blocks around and help each other find the right places for them. We are embedding these brains in mobile rovers that should seek moving objects and flee from larger “predators.”
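To give a feeling for what a probabilistic logic gate is, here is a minimal sketch; it illustrates the idea only and is not the actual NEUERA implementation.

```python
import random

class ProbabilisticGate:
    """A two-input, one-output gate: for each input pattern it fires its output
    with an evolvable probability rather than a fixed 0 or 1."""
    def __init__(self):
        # One probability per input pattern (00, 01, 10, 11), initially random.
        self.table = {pattern: random.random()
                      for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]}

    def fire(self, in_a, in_b):
        return 1 if random.random() < self.table[(in_a, in_b)] else 0

    def mutate(self, rate=0.01):
        """Point mutations nudge individual probabilities; evolution does the designing."""
        for pattern in self.table:
            if random.random() < rate:
                new_p = self.table[pattern] + random.gauss(0, 0.1)
                self.table[pattern] = min(1.0, max(0.0, new_p))

# A deterministic AND gate is just the special case where the table holds only 0s and 1s.
gate = ProbabilisticGate()
print([gate.fire(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```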

All of this is just a beginning, and we are very thankful to have ICER as our partner for doing all these very computationally intensive things!

I also work with Prof. Titus Brown on developing a web portal that should allow biologists to rent cloud computing resources to solve their bioinformatics problems, mainly in next-generation sequencing and genomics. In brief: I am learning how to use more and more computers at the same time.

I doubt that the above sufficiently explains what I am doing, but at least you should understand why I use “computational biology” as a weak abbreviation for “learning everything about evolution and computer science to evolve conscious machines, while at the same time understanding evolutionary game theory, network theory, complex systems, and systems biology, because all of that is necessary to achieve my goals.”

For more information about Arend’s work, you can contact him at ahintze at mac dot com.


Kay Holekamp blogs about hyenas at the New York Times

BEACON PI Kay Holekamp is writing for the New York Times’ Scientist at Work blog this summer about her fieldwork in Kenya.

Read her first post here, and click here to keep up with all of her fascinating entries!


BEACON Researchers at Work: The "Mating" Game

This week’s BEACON Researchers at Work blog post is by MSU graduate student Emily Weigel.

Working with radioactive fish eggs as an undergrad wasn’t as cool as sticklebacks are now.

What would a fish say if it could talk? How about, “Hey, baby. What’s your sign?” Male threespine sticklebacks court females in a constant game of flashy zig-zag dances and showing off with the hope that a female will respond favorably. Most of the time, to use the baseball analogy, the male strikes out; that is, the female swims hurriedly away, and the male must go sadly back to the nest alone tonight. On occasion, however, the male is able to get to first base, and perhaps even hit a home run. So, what makes a male attractive, or “sexy”?  Does a male’s sex appeal depend on where he is or what time of year it is? Do all females agree on what makes a sexy male?

I study the mating behavior of threespine sticklebacks. Threespine sticklebacks (Gasterosteus aculeatus aculeatus) are small fish found throughout the Northern hemisphere.  About 10,000 years ago, at the end of the last Ice Age, marine sticklebacks invaded recently formed freshwater lakes and streams. Since then, these fish have changed through time in similar and often predictable ways—that is, they’ve evolved in parallel across several lakes, making them an ideal species to study for evolution in action.

In just a few lakes in Canada, two closely related stickleback species are found: benthics and limnetics. Benthic and limnetic sticklebacks differ extensively in many ways, including size, shape, color, and feeding habits, yet their genetics reveal that the species pair evolved quite recently from a common ancestor. Although these species live in the same lakes and come into regular contact with one another, they manage not to interbreed. It turns out females are really good at finding males of their own species. Females use species-specific traits to identify males of the right species, and to choose the best of those males they rely on sexual signals to judge a male’s “sexiness.” Often, the brighter, bigger, or stronger a male is, the sexier he is considered.

Now here’s the problem: in one lake (Enos Lake, British Columbia), a well-established stickleback species pair recently collapsed into a hybrid swarm in less than 20 generations. What gives? Why did the females stop choosing to mate only with males of their own species? What could have caused this? It seems scientists don’t know yet how quickly this type of loss can occur, under what circumstances (environmental change, low mate availability, etc.), and whether trait or preference loss (or a relaxation of associated requirements) occurs first. Let’s try to find out.

The Mating Game: Female acceptance behaviors in response to a line-up of ever-sexier males.

One of the big determinants of behavior is often experience, so that seems a practical starting point. Experience can be broadly defined to include social experience (e.g., mate availability, population density, mating history), age, or seasonality; variation in these factors can lead to local adaptation, population divergence, and even the evolution of new species. Working with Dr. Robin Tinghitella and the BEACON Sexual Signaling Group, we designed an experiment to test a female’s mate choices throughout the breeding season. Wild-caught fish were uniquely marked and assigned to one of two “experience” treatments that differ in sex ratio (mimicking early and late breeding season conditions; male-biased and female-biased, respectively). We’ve since measured female preferences and male mating behavior every time the females reached reproductive condition. To return to the baseball analogy, we can assign a score (a base) to every act the female commits to show she’s interested. So first base might be approaching the male, second base following him, third base examining his nest, and a home run entering the male’s nest to deposit eggs. We then test a female with first a dull, then a medium, then a bright male, and see which males along this “sexiness continuum” she chooses to accept. That way we can tell whether a female is relatively choosy or relatively relaxed and whether those preferences change over time.
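In code, that scoring might be sketched like this; the behavior names and the acceptance cutoff are a hypothetical encoding of the baseball scale, not our actual scoring protocol.

```python
# Each response behavior maps to a base; a female's score for one trial is the
# furthest base she reaches with that male.
BASES = {'approach': 1, 'follow': 2, 'examine_nest': 3, 'enter_nest': 4}

def trial_score(observed_behaviors):
    """Furthest base reached in one trial (0 if she swims away immediately)."""
    return max((BASES[b] for b in observed_behaviors if b in BASES), default=0)

def choosiness(scores_by_male, accept_at=3):
    """Least attractive male the female accepted, testing dull, then medium, then bright."""
    for male in ['dull', 'medium', 'bright']:
        if scores_by_male.get(male, 0) >= accept_at:
            return male
    return None  # she rejected all three

# Example: a relatively relaxed female accepts even the dull male.
print(choosiness({'dull': trial_score(['approach', 'follow', 'examine_nest']),
                  'medium': 4, 'bright': 4}))
```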

We expect treatment females who have experience with few males to be less choosy (essentially, to let more males run the bases and score), and this makes logical sense: better to mate with someone than no one at all. On the other hand, treatment females who regularly see many males will be choosier (thus males will strike out or be called out more). Because the tests with each male occur at several time points throughout the season, we also expect females to relax their mating requirements as the season goes on, such that the females maximize the number of offspring they can have. The theory is, if females with relaxed mating requirements sometimes accept males who lack the preferred sexual signal, loss of male signaling traits could occur. Once male signaling traits or the associated preferences are lost, the species pair could interbreed, leading to the collapse and loss of species.

The question of why females accept certain males is something in which I’ve always been interested. Since starting graduate school at MSU in the Fall of 2010, I’ve had the pleasure of working and collaborating on empirical and theoretical experiments using sticklebacks, mathematical modeling, and even the digital organisms in Avida. Through these avenues I’ve been better able to understand how mates are chosen in systems and populations with sexual signals, and the evolutionary consequences of what happens when signals are lost.

So, guys, the females are ready. Batter up!

For more information on Emily’s work, you can contact her at weigelem at msu dot edu.

This post has been submitted to the National Evolutionary Synthesis Center (NESCent) travel award contest to attend ScienceOnline2012.
