Evolutionary excursion into the depths of the human psyche?

This post is by MSU postdoc Arend Hintze.

Ralph Hertwig, director of Max Planck Institute for Human Development


Let me tell you about my excursion to the Max-Planck-Institut für Bildungsforschung (Max Planck Institute for Human Development) in Berlin. I met the director, Ralph Hertwig, a while ago while interviewing for a job, and although we quickly figured out that our disciplines are very different, we still had the feeling that we would benefit from each other's perspective. This trip was meant to explore those perspectives. The group I stayed with calls itself ARC, the scrambled three-letter acronym for the "Center for Adaptive Rationality" – I was told that many people mistakenly assume it has to be abbreviated CAR. The group primarily consists of psychologists, but it also includes neuroscientists, philosophers, and biologists, which already shows its interest in interdisciplinary work; its members are also very interested in evolutionary models and in questions about the origin and evolution of adaptive rationality.

I think that "Adaptive Rationality" already sounds cool, and it seems very similar to the goal we pursue when we say we would like to evolve artificial intelligence or artificial consciousness. But as you will see, while these topics are closely related, the two approaches could not be more different. Let me elaborate:

Here at BEACON we study evolution in action, and our core collaboration is between computer science and biology. When we look at an organism and ponder its peculiar behaviors, it doesn't take long until we ask how evolution was involved, which selection pressures were responsible, how we could use that in an application, how this behavior differs from the one we see in our model organism, or, in my case, how I could model and evolve these behaviors – we think that nothing in biology makes sense except in the light of evolution. Psychology is – how should I put this – shy when it comes to evolution and human behavior. And there is a very good reason. Everything that makes us humans different and unique has, in one way or another, to do with intelligence. That means the selection pressures and evolutionary processes that made us, and that allowed our intelligence to emerge, had no precedent, happened once, and can therefore not be understood as a general principle. It is the same problem we have when we want to explain the emergence of life itself: it happened once, and in order to generalize we need more than one example.

But there is more. One of the main arguments against any evolutionary explanation is the following: the part of your brain we now use to solve this or that problem could have evolved to do something completely different. I call this the "opportunistic repurposing argument" (abbreviated RAO, in case you paid attention earlier). I obviously have difficulties with this categorical denial. After all, Occam's razor suggests that every time you use the "opportunistic repurposing argument" as an excuse for why a behavior you study in humans cannot be explained by evolution, you introduce a second, additional hypothesis. The first hypothesis says that the mechanism evolved for something, and the second claims that this mechanism can now be used for something else. While you can never disprove the argument in any specific case, you clearly cannot invoke it every time. In most cases we probably use a cognitive mechanism the way it evolved to function, and only in a few cases have we repurposed something – still, the counterargument stands.

This idea, however, has wide-ranging implications for our endeavor to evolve artificial (human-like) intelligence. If we have more or less concluded that we will not unveil the exact circumstances that led to the evolution of human intelligence, how should we be able to construct and model fitness landscapes that are conducive to the evolution of artificial intelligence?

All of the above is not something I learned in Berlin; I was aware of these difficulties beforehand. You could either concede defeat or see this as a challenge – challenge accepted!

As it turns out, you can do a lot of things, and I promise to write about the projects we worked on once they are published. But let me give you an impression of the things I did that were (in my opinion) meaningful to both fields. Human behavior often seems to be irrational, and humans very often don't make the choice that pure rationality (the Nash equilibrium) dictates; here two fields of science clash. One is economics, which tries to explain how the choice depends on economic (selfish) considerations, and the other is psychology, which tries to find cognitive causes – like risk aversion, curiosity, force of habit, or preferences – that keep us from making the "right" choice. This contrast gives us an ideal angle from which to add an evolutionary opinion to the debate. We find that evolution can provide an additional perspective, showing where the economic models fall short, or where psychological influences are justified and can explain how choice preferences emerge or are maintained.
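To make the kind of model involved a bit more concrete, here is a minimal sketch – my own toy illustration, not one of the ARC projects. In the classic Hawk-Dove game, a resource of value V is contested at a fighting cost C > V, and the "rational" mixed Nash equilibrium (which is also the evolutionarily stable strategy) plays Hawk with probability V/C. Simple replicator dynamics drives an evolving population to exactly that frequency; it is where real human choices deviate from such baseline predictions that the psychological explanations come into play.

```python
# Replicator dynamics for the Hawk-Dove game: does the evolved
# population match the "rational" Nash/ESS prediction of V/C Hawks?
V, C = 2.0, 4.0  # value of the resource, cost of fighting (V < C)

# Payoff matrix: rows = focal strategy (Hawk, Dove), columns = opponent.
payoff = [
    [(V - C) / 2, V],      # Hawk vs Hawk, Hawk vs Dove
    [0.0,         V / 2],  # Dove vs Hawk, Dove vs Dove
]

x = 0.1  # initial fraction of Hawks in the population
for _ in range(2000):
    f_hawk = x * payoff[0][0] + (1 - x) * payoff[0][1]
    f_dove = x * payoff[1][0] + (1 - x) * payoff[1][1]
    f_mean = x * f_hawk + (1 - x) * f_dove
    x += 0.01 * x * (f_hawk - f_mean)  # discrete replicator step

print(f"evolved Hawk frequency: {x:.3f}")  # settles near V/C = 0.5
```

With V = 2 and C = 4, the population settles at 50% Hawks, matching the game-theoretic prediction. The interesting work begins once finite populations, noisy perception, or cognitive constraints are added – that is where evolved preferences start to diverge from the textbook equilibrium.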

In summary, I think that of all the interdisciplinary work I have done so far, this was the biggest straddle. Evolutionary game theory and modeling oversimplify, and the abstractions used might work for animal behavior but not necessarily for more complex human behavior. My angle appears primitive at times in a field where nothing is simple. It seems that every facet of human behavior has been studied, and a plethora of possible explanations exists, all of which rival each other. Decisive experiments are hard to do, and context and framing are far more important than I am used to – but that also makes contributions much more valuable. I have the impression that psychologists are well aware that evolution matters and that they appreciate the input other fields provide, but this is a tricky endeavor and has to be done right. At the same time, my exploration of this field made me aware how far artificial intelligence is from grasping the foundations of human cognitive processes, and how difficult it will be to evolve artificial intelligence – but then again, we are not doing what we are doing because it is simple.

Cheers, Arend.


Danielle J. Whittaker, Ph.D. Managing Director of BEACON
This entry was posted in Notes from the Field.
