Altruism in a gene

Refer to my previous post for my uninfluenced opinion of morality, written a long time before I read this book, Neil Levy’s “What Makes Us Moral?”.

Ok, now after having read Neil Levy’s book, I realise I may have been on the right track previously, but I never really understood the mechanism until now. Enter Neil Levy. Initially, his credentials as an ethicist put me off, but by the time I was done with his foreword, I realised he wasn’t going to be one of those stem-cell-research-is-wrong-and-unthinkable-because-I-say-so kinda guys. Perfect.

One of the most significant clarifications his book made [for me] was giving a proper term for what I commonly refer to as the caveman scenario. He calls it the EEA [environment of evolutionary adaptation]. Roughly the first quarter of the book discusses the possibility of altruism being a strictly biological phenotype, and logically concludes that it may not be the case.

Case in point: assume altruism is expressed through a gene [I’d like to think of it as recessive]. He asks us to imagine two towns, A & B.
A has 100 selfish people. Since they only take care of their own skins, there is no communal benefit to living together, so each of them has only 2 offspring per generation.
B, on the other hand, starts out with 70 altruistic people and 30 selfish ones [there’s no perfect society]. The selfish people in the altruistic society benefit a lot [they freeload/free-ride] and get to have up to 3 offspring per generation. The altruists have 2.5 offspring each: more than the selfish people in A, but fewer than the selfish people in B, because they are the ones taking the risks.
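
Here’s a minimal sketch of that toy model, just to make the numbers concrete [the per-capita offspring figures are the ones above; ignoring intermarriage, resource limits and everything else is a simplifying assumption of mine]:

```python
# Toy version of the two-town thought experiment [my sketch, not Levy's]:
# fixed per-capita offspring rates, no resource limits, no intermarriage.

town_a = 100.0                        # all selfish, 2 offspring each per generation
b_altruists, b_selfish = 70.0, 30.0   # town B's mix

for gen in range(11):
    b_total = b_altruists + b_selfish
    share = b_altruists / b_total
    print(f"gen {gen:2d}  A: {town_a:14.0f}  B: {b_total:16.0f}  altruist share: {share:.3f}")
    town_a *= 2.0         # selfish in A: 2 offspring per person
    b_altruists *= 2.5    # altruists in B: 2.5 offspring per person
    b_selfish *= 3.0      # free-riding selfish in B: 3 offspring per person
```

Two of the things argued over below fall straight out of the printout: B dwarfs A almost immediately, and B’s altruist share shrinks every generation [by a factor of 2.5/3] without ever actually reaching zero.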

Over a couple of generations, the difference in populations is easy to see, with B taking a clear lead. Neil Levy notes that while B’s population increases, the proportion of altruistic people in it is going down, and he instantly extrapolates that to zero. I thought that part was wrong. The proportion will go down, but it never actually hits zero in any simulation like this.

Firstly, it won’t, because if the birth rates stay the same, the proportion goes down but the absolute number of altruists keeps growing every generation, so the altruist gene is definitely always around.

Secondly, since he started the simulation off with birth rates that differ because of the proportion of altruists, isn’t it reasonable to assume that the birth rates vary in every consecutive generation too, which calls for a recurrence [or a differential equation] defining the birth rates as a function of that proportion?
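
Something like the following [the linear forms here are entirely my own guess, calibrated only so that a 70% altruist town reproduces the 2.5 and 3 figures above, and an all-selfish town reproduces town A’s 2]:

```python
# Frequency-dependent version of the toy model: birth rates depend on the
# current altruist fraction p. The functional forms are my own assumption.

def altruist_rate(p):
    return 2.0 + p / 1.4    # 2.5 at p = 0.7, falling back to 2.0 as altruists vanish

def selfish_rate(p):
    return 2.0 + p / 0.7    # 3.0 at p = 0.7 (the free-riders gain the most)

altruists, selfish = 70.0, 30.0
for gen in range(1, 11):
    p = altruists / (altruists + selfish)
    altruists *= altruist_rate(p)
    selfish *= selfish_rate(p)
    print(f"gen {gen:2d}: altruist share = {altruists / (altruists + selfish):.3f}")
```

As the altruist share falls, both rates sink back towards town A’s 2, so the free-riders’ edge shrinks along with the very people they exploit; the share still declines, just more and more slowly.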

Moreover, in this iterative simulation, the limits MUST amount to resource limits. That is to say, there’s no reason why every family in town A MUST have 2 offspring per generation unless there are limited resources and/or dangers that they cannot fight off without altruism to “unite” them. If so, then the same resource limits or abundance of dangers will also dictate the maximum size of B with respect to its proportion of altruists.

On top of that, what’s to say altruists and selfish people will not intermarry? Then the outcome would also depend on whether altruism is a recessive allele or a dominant one.
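
To spell out what the recessive case would look like [a completely hypothetical single-locus model, where only ‘aa’ individuals actually behave altruistically]:

```python
# Hypothetical single-locus model: 'a' = altruism allele, 'A' = selfishness.
# If altruism is recessive, only 'aa' individuals actually behave altruistically.
from itertools import product

def offspring_distribution(parent1, parent2):
    """Probability of each offspring genotype from a cross, Mendelian style."""
    counts = {}
    for g1, g2 in product(parent1, parent2):
        genotype = "".join(sorted(g1 + g2))
        counts[genotype] = counts.get(genotype, 0) + 1
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

# Two selfish-behaving carriers (Aa x Aa) still have a 1-in-4 altruistic child:
print(offspring_distribution("Aa", "Aa"))   # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}
```

In that case, two selfish-behaving carriers still produce altruistic children a quarter of the time, so intermarriage would keep reshuffling the trait back into the population rather than simply breeding it out.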

I think this is a major point, especially because Neil Levy uses this as one of the reasons why altruism is probably not a gene, but more of an evolutionary psychological adaptation. I don’t think altruism is a gene either, but this is not a good enough example to support that.

He brings up the examples of Kant and Hume, and how their theories defined morality as something objective, a natural function of rationality, unconditionally binding and intrinsically motivating.

For a minute there, he raised the question of how something so methodically designed could be a product of natural causes, and I was shocked. I didn’t know what to think. What was this guy trying to say? That he believes evolution has a chance [pun intended] at credibility, but some things MUST be a product of intelligent design? A bit later, I understood he was actually not agreeing with the concept of an objective moral code at all. Joy. My sentiments exactly.

Morality as a psychological adaptation. Neil Levy himself admits to being speculative in coming to this conclusion, and yet I think it is one of the most logical hypotheses about morality that is consistent with evolution.

He introduces, in the manner of game theory [the classic prisoner’s dilemma, with rewards and punishments more suited to everyday life], the basic game plans possible.
I’m assuming this particular scenario and its outcomes are familiar to you, if not from before, then at least by now.
So overall, for rational agents, the best course of action is to defect and hope for the best. Either way, it will be a lesser sentence than the maximum sentence, and that’s a win, in a manner of speaking. However, life isn’t like a one-shot game, so there must be iterations. The problem with iterations is that if the number of iterations is defined, it becomes an arcade game with a fixed number of lives, where you get to keep your points despite losing lives. In essence it remains a one-shot game: reasoning backwards from the known final round, each iteration functions more and more like a one-shot game.

So what if you do not define the number of iterations, but stop the game from time to time in a random manner to analyse the gameplay? It is found that most agents in these games play with a tit-for-tat [he calls it TFT] mentality, effectively mirroring what the other did in the previous round. Eventually both tend to co-operate for the greatest benefit.
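
To make that concrete, here’s a small sketch of that kind of tournament [the payoff numbers, the 10% stopping chance, and the 60/25/15 population mix are all my own illustrative choices, not Levy’s]:

```python
import random

# Iterated prisoner's dilemma sketch with three kinds of players.
PAYOFFS = {  # (my_move, their_move) -> my payoff; 'C' = cooperate, 'D' = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def altruist(opponent_history):      # always cooperates
    return "C"

def selfish(opponent_history):       # always defects
    return "D"

def tit_for_tat(opponent_history):   # cooperate first, then mirror the last move
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, stop_prob=0.1):
    """One match of unknown length: after each round there's a chance it ends."""
    seen_by_a, seen_by_b = [], []    # what the opponent has done so far
    score_a = score_b = 0
    while True:
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
        if random.random() < stop_prob:
            return score_a, score_b

# A "normal" society with all three traits, TFT being the most common
# [the 60/25/15 split is purely my assumption].
population = [tit_for_tat] * 60 + [altruist] * 25 + [selfish] * 15
totals = {altruist: 0, selfish: 0, tit_for_tat: 0}
counts = dict(totals)

for _ in range(20000):               # many random pairings
    a, b = random.sample(population, 2)
    sa, sb = play(a, b)
    totals[a] += sa
    counts[a] += 1
    totals[b] += sb
    counts[b] += 1

for strat in (tit_for_tat, altruist, selfish):
    print(f"{strat.__name__}: average score {totals[strat] / counts[strat]:.2f}")
```

With that mix, the average scores come out roughly in the order TFT, then the altruists, then the selfish, which is what the next paragraphs build on; tilt the mix towards pure altruists and the free-riders start winning again.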

So in a predominantly altruistic society or a predominantly selfish society, you couldn’t tell the TFTs apart from the altruists or the selfish people respectively. However, the overall dynamics of social interaction are vastly different in a normal society with incidences of all three traits.

Here’s where this all gets interesting. The selfish people in a normal society will really suffer [suffer in terms of not being in a situation good enough to reproduce and pass on their genes], especially at the hands of the [I think] dominant TFTs. So the selfish people in a functional society must evolve to free-ride better.
That is, they must be able to convince others of their contribution while not really contributing. Furthermore, if caught, they need to convince others of their remorse so as to get a less harsh sentence.

What better way to lie than if you’re genuinely convinced you’re telling the truth? That is key. Self-deception. Guilt is self-deception. You feel bad, so you can tell others you feel bad and mean it, and be excused or punished less. Well, it started out like that.
What happened along the way is the formation of the seemingly concrete tenets of the moral code, as if it were objective. Why? To feel bad would imply you should have done something good, meaning you believe there are some actions that are unconditionally good [whether that is true or not], and THAT is the moral code we swear by today. How very elegant!

And very, very scary! We’re not the moral creatures we think we are. What’s more, in the EEA we picked up many different sensibilities, which by now come pre-installed in our brains in the form of subconscious preferences. Most men across many cultures prefer a waist-to-hip ratio of 0.7 [because ancestors with that preference tended to have more successful offspring?], for example. Does that mean that even in a multi-gender society, most of our interactions are strictly sexually fuelled, subconsciously or otherwise? Are we just looking for potential mates to pass on the seeds of the new generation? That’s gross and indecent. Are most of us hypocrites?

Of course, even after all this speculation in the field of evolutionary psychology, Neil Levy frames all this study as an explanation of morality’s origins, one that should not affect the reality and implications of morality as we know it today. Here, I disagree. I think it changes everything!

We’re not dutifully bound to do anything, then. If we think we are, is that millions/thousands/hundreds of years of evolutionary psychological conditioning speaking, or a similar cultural mechanism? So is desire the default behaviour [sugar tastes sweet because high-energy foods were necessary for our survival; extrapolate], and if we deny our impulses, does that make us more human, or is it a show of hypocrisy?

Questions, questions.
