Thursday, September 13, 2007

Some thoughts on morality

Introduction

Much has been written on the subject of morality and ethics. From Aristotle in his Nicomachean Ethics, through Kant and Mill, to an abundance of contemporary philosophers (e.g. Singer and Rawls) – many have considered this thorny issue. And, let’s not forget the theists, who often seem to believe that they have a monopoly on the subject. It is not my intention in this post to discuss these writings in any detail, nor to endeavour to build a complete system of morality from the ground up. Rather, I would just like to give a few of my own thoughts on the topic, from a naturalistic perspective.

Roots of Morality

I believe that the fundamentals of our morality evolved by Darwinian Natural Selection. They were not decreed by some divine lawgiver, nor are they somehow fundamental properties of the universe in the way that physical laws seem to be. Such concepts as good, evil, compassion, justice, love, hate and so on are purely human concepts (although other animals likely feel some sorts of emotions). They are not somehow built into the fabric of the universe, and do not exist independently of us in some absolute or Platonic sense. On the contrary, I think that all the evidence suggests that the universe is entirely indifferent to the existence, actions, and fate of human beings and our petty concerns.


The Religious Explanation

Of course, Christians, Jews, and Muslims would say that morality is handed down by their god. I have discussed religious morality elsewhere, so I will not dwell on that subject here, but there are some obvious objections to this suggestion:

  1. This presupposes the existence of their god. However, I consider the existence of God to be extremely unlikely. I can’t say conclusively that God doesn’t exist, and certainly couldn’t prove such a claim. Nevertheless, based upon evidence and reason, inference to the best explanation leads me inexorably to the conclusion that the Judeo-Christian God almost certainly doesn’t exist. Hence, I cannot accept moral rules for no reason other than that they are supposed to emanate from such a divine entity.
  2. Even if we were to grant the existence of God, we are led to ask questions along the lines of Plato’s famous Euthyphro dilemma. Is what is moral commanded by God because it is moral, or is it moral because it is commanded by God? If the former, then we can help ourselves to this morality without reference to God at all. If the latter, then God could have commanded murder to be morally good, for example, and we should have to obey that rule - since it comes from God. To say that this problem is illusory, as God would only ever command good, is simply begging the question.
  3. The God of the Bible is shown to be a jealous, bloodthirsty, cruel, and vindictive megalomaniac. If such an entity exists, I see no reason why one should wish to obey its moral strictures – other than out of self-preservation due to a fear of eternal damnation.
  4. The moral and ethical guidelines in the Bible (and Koran) are often contradictory, and are open to multiple interpretations. So, how are we to determine exactly what morality is espoused therein? Moreover, why were such supposedly important messages for humanity not communicated in a clear and unambiguous fashion - one that is not open to multiple and often conflicting interpretations?
  5. Of the moral guidelines that are more clearly expressed in the Bible, many deal with matters of seeming unimportance – not eating shellfish, not wearing clothes of mixed fibres etc. Others are cruel and disproportionate in that they specify a penalty of death for such supposed transgressions as blasphemy, picking up sticks on the Sabbath, being a witch, talking back to one’s parents. It is clear from the Bible that God’s most important message to mankind is that we should worship Him (in very specific ways), and no other gods. Of course, the penalty for not doing this is death (and, while we’re on this subject, why is it that God needs to be worshipped constantly? Is He really so insecure or such an egoist?). Let’s not forget those twin concepts of Heaven and Hell, as introduced by Jesus. Now one can be punished eternally for not following some arbitrary religious guideline or other, rather than just being put to death for it.
  6. It is worth considering who is the more moral: the Christian who does good in order to go to Heaven rather than Hell, or the atheist who does good out of altruistic motivations alone? Kantian ethics suffers from a similar failing, in my view. The more moral person is not the one who blindly obeys rules out of a sense of duty, but the one who freely chooses to be good out of compassion towards their fellow human beings (and other animals), and a desire to bring more happiness and less suffering to the world.

What morality is: the evolutionary explanation

I think that we have an innate sense of morality, and that this is derived from our evolutionary heritage. Our cultures and intellects have given us additional moral drivers, and modified existing ones, but I think that the roots of our morality are evolutionary. My position on this is broadly in line with that of sociobiology. When our primitive ancestors started to form groups, their chances of survival were increased by acting in certain cooperative ways, and decreased by acting in others. Clearly, those who acted in those ways conducive to survival were more likely to have offspring and to pass on their genes. Hence, tendencies towards these actions were selected for by evolution.

For example, the so-called Golden Rule clearly confers a survival advantage on each member of a group if all group members adhere to it. In fact, game theory has demonstrated that one of the most successful strategies for individual group members is for a member initially to cooperate with another member, and thereafter to copy the other member’s last action - either cooperating or defecting - in a tit-for-tat fashion. The game-theoretic model behind this is the iterated Prisoner’s Dilemma. Lo and behold, examples of this type of behaviour are indeed witnessed in nature, and can quite plausibly be seen as precursors to our own morality.
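The tit-for-tat dynamic can be sketched in a few lines of code. This is a minimal illustration rather than a rigorous simulation: the payoff values are the conventional ones used in textbook treatments of the Prisoner’s Dilemma, and the strategy and function names are my own labels.

```python
# Standard payoff matrix: (my_points, their_points) for a pair of moves,
# where 'C' = cooperate and 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),  # mutual cooperation
    ('C', 'D'): (0, 5),  # I am exploited
    ('D', 'C'): (5, 0),  # I exploit
    ('D', 'D'): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Return the total scores of two strategies over repeated rounds."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the opponent's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

# Mutual tit-for-tat sustains cooperation: 3 points per round each.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# Against a pure defector, tit-for-tat is exploited only once,
# then retaliates for the remaining rounds.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The point of the sketch is that tit-for-tat never loses badly: it pays the cost of being exploited at most once per opponent, which is why reciprocal cooperation can persist in a population.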

I am not a Social Darwinist, so I don’t advocate blind adherence to the morality that we evolved naturally. This is sometimes known as the is-ought problem. That there are evolutionary explanations for the fundamentals of our morality doesn’t mean that we shouldn’t attempt to improve upon this natural morality by applying reason to the matter. However, I think that considering the roots of morality is instructive, as it shows that morality does not exist as some abstract ideal, as part of the fabric of the universe, with rules that are intrinsically good and that we have an absolute duty to follow (in the way that the religious might suppose, and as Kant reasoned). Rather, the roots of morality have a much more mundane biological explanation.
At its core, most of our morality was originally fundamentally selfish. We evolved such traits as altruism and compassion because, in a group situation, these increased the chances of us passing on our own genes (or of our close relatives passing on theirs). We know what it is like to feel compassion towards others, and now rationalise it in terms of preventing suffering in others, but that is not the origin of the compassionate urge. In a group, helping others when they are in need is likely to be repaid by them helping you when you are in need. Hence, you benefit more than you would if you did not help others in the first place. This also explains why we feel greater compassion and altruism towards those in our local group than towards those who are more removed from us - as, historically, these people were more liable to help us in return.

Arguably, this is a moral standard that we can certainly improve upon, by increasing the consideration that we give to those who cannot help us in return - such as those in third world countries, and those not yet born (e.g. by looking after the environment).

Now, at this point I should address a few potential criticisms of what I have just said. I am not maintaining that altruism is illusory, or that people in general make conscious choices to be altruistic in order to gain some benefit (e.g. happiness, or relief from feelings of guilt). Rather, I am saying that altruism does exist, and that people have an urge towards it, but that this urge is subconscious. They are not generally engaging in some conscious cost/benefit exercise to determine what’s in it for them (although, of course, people now also make intellectual decisions to be altruistic, reasoning that it improves the lot of humanity). However, the root of the innate altruistic urge is intrinsically selfish, in that it maximises the chances of us passing on our own genes. So, this urge is programmed into us in the same way that love is, for example.

Some (particularly religious) people would maintain that love exists in some ideal Platonic sense, but I would contend that it evolved as a biological mechanism to encourage long-term mating behaviour (which increased the chances of our offspring surviving). Of course, as we are intelligent social beings, we have explored and expressed these biological urges by creating great works of art, poetry, music and so on but, at their root, these feelings are much more primal. Nevertheless, love does exist, and it is not something that we have conscious control over.
Another objection might be that genes cannot be responsible for specific behavioural examples - helping somebody across a road, being particularly attracted to film stars etc. However, this is to misunderstand Darwinian theory. Evolutionary advantage is gained by altruism in general, and by attraction to successful, talented mates in general; the general principles explain the specific examples. Nor am I making the simplistic claim that our behaviour is solely controlled by our genes. Clearly, both genes and environment have an impact on how we behave. Nevertheless, I believe that our genes have had a far more significant impact on the development of our morality than many people would credit.

Another point worth mentioning is that our intellects and the cultural and ideological aspects of our societies have clearly had an impact upon our personal and societal morality, and there are now clear differences in apparent morality between societies and groups within societies. However, this doesn’t change the fact that the evolutionary roots of morality predate any changes due to intellectual, cultural, or ideological drivers.

Some people are very uncomfortable with the sociobiological explanation of our behaviour - pointing to the dangers of eugenics and social Darwinism. However, these quite different viewpoints should not be confused. As I said earlier, I am not stating that our morality now should be based solely upon that which developed naturally. On the contrary, I believe that there is plenty of room for improvement. What I am instead doing is giving a partial explanation for what is observed to be the case. Furthermore, the fact that what I am saying may offend the sensibilities of some people (due to them mistakenly conflating my views with those of the eugenicist) has no bearing whatsoever on its truth or otherwise. In other words, our evolutionary heritage helps to explain what our moral tendency is, but it doesn’t deal with what it ought to be.

The problem with relying on innate morality

We might try to justify moral rules based upon our moral intuition, as Louis Pojman seems to do regarding not torturing others for fun (in his essay Ethical Relativism versus Ethical Objectivism). However, this seems to be a dead end to me. Firstly, as I have said, I think that our basic moral intuitions are a result of evolutionary drivers and, as such, they were those that gave us the greatest chance of reproducing in a group environment. So, all that Pojman seems really to be saying is that torturing others for fun goes against the kind of moral intuitions that evolved to give humans the greatest chances of survival and reproduction in a group. This might well tell us something useful about morality, but I think it comes nowhere near to standing as a moral rule in need of no further justification.

Secondly, some of our moral intuitions are things that we would no longer consider to be desirable. For example, we have a very strong innate tendency towards xenophobia, as this clearly gave us some survival advantages in the past. The evolutionary root of xenophobia, I think, is the sharp distinction between us and them: this was the price any community or group had to pay for the internal trust and harmony required for its continued survival in competition with other groups, and thus for the reproductive success of its members. However, I would argue that xenophobia is no longer a good moral intuition in today's world, as it tends to cause more overall misery. So, just because we have some moral intuition or other, I don't think it follows that this intuition is necessarily 'good'.

So, whilst I think we have little choice but to use our innate moral sense as a guide to behaviour, I believe that we can still improve upon this by applying reason to the problem. Furthermore, I believe that to state that some moral intuition or other that evolved in humans exists as some absolute moral rule independent of us and needs no further justification is fallacious.

What we ought to do: a rational approach

It seems clear to me that there exist no absolute moral principles - things that are not derived from more basic moral principles, and which stand in need of no justification. To put it another way, I don't believe that morality is built into the fabric of the universe. Whilst we can have empirical facts about physical aspects of the universe, I think that moral rules are necessarily judgements. Understanding how morality evolved through natural selection lends further support to this. Unlike the deontological systems of ethics (e.g. religious, or Kantian ethics), I would contend that there is no ultimate moral yardstick. There are no moral rules that can be shown to be absolutely good - no Categorical Imperatives, as Kant would have it. We might attempt to define absolute moral rules, but by what method of reason can we justify them as such? For example, there is no fundamental property of the universe that makes killing wrong, and no universal rulebook that states it to be so (I ignore divine commands here, for reasons that I have stated in another blog post). I might attempt to justify making killing wrong by appealing to some other rule but, at each stage, I can ask what makes this other rule an absolute. We end up with an infinite regress of justifications - never arriving at the one basic rule that is absolutely right without justification.

Now, I am not making a case here for Moral Relativism, or for Nihilism. Moral relativism is a stance that is popular in some circles, but a little thought demonstrates how problematic a position it is to defend. According to moral relativists, there is no objective moral truth, only truths relative to social, cultural, historical, or personal circumstances. However, the moral relativist now has to swallow some pretty unpalatable conclusions, or else explain why they cannot legitimately be deduced from the premises of moral relativism. For example: slavery was morally right in the American Deep South; killing Jews was morally right in Nazi Germany; the Stalinist purges were morally right in Communist Russia; the inclinations of certain US leaders to impose their views upon other countries by force are morally right for them; and the oppression of women in Iran, Saudi Arabia, and Afghanistan is morally right for them.

In fact, if we dig a little deeper here, we can expose some more problems with moral relativism. Firstly, how exactly do we determine the society, culture, or group that the morality is true for? For example, we might suppose that oppression of women is moral in Iran, but what about the women being oppressed - don't they count in our calculations? In fact, a culture or state is a transient and mutable thing - a set of traditions, religious and political ideologies, and individual, tribal, and group power struggles. If a culture or state is oppressive, patriarchal, or tyrannical, there is no reason why its citizens should be forced to endure it. We would be defining the correct morality for that society based upon what the powerful wish to impose upon the less powerful. It should further be borne in mind that nobody chose to be born into a particular culture and state. It was purely chance that governed where they were born, and they shouldn't be condemned to a miserable life under some totalitarian regime because of this.

Following on from this, defining the correct morality in terms of the society or culture has the effect of rendering morally wrong anybody fighting for moral change within that society. According to this view, the abolitionists were wrong, since they were fighting against the accepted societal morality of the slave-owning South, and the suffragettes were wrong, for the same reason. In fact, it would seem that only a non-relativist can be truly tolerant, since only they can hold tolerance to be an objective property of any good morality. By contrast, the relativist has little choice but to accept intolerance as moral for any society in which it is normative. A point usually lost on relativists is that if our morality is such that we wish to impose universal freedom and equality upon other societies, then that morality is true for us.

I think that these examples expose moral relativism for the absurdity that it is. As a system of ethics, I don't think that it gets off the ground. The moral relativist and I would concur that there is no absolute moral truth. However, I would go on to reject the idea that moral truth exists in a relativistic sense either. Nevertheless, in the absence of an absolute moral truth, I believe that, under almost any rational definition of morality, it is still possible to say that some moral systems are better than others. Hence, our task is to discriminate between the possible candidates, and try to determine what characteristics a moral system should possess in order to be a good one. Oppression, intrinsic inequality, and support for wholesale slaughter of the innocent are unlikely to be part of any such good system.

I am advocating more of a moral pragmatism. Once we accept that there is no absolutely correct morality, we can move on in our quest to build a good one, which could then be applied universally. In practice, I think that the only way to make progress on such a quest is to choose the most fundamental and universally agreeable moral axioms that we can, and then attempt to derive our morality rationally from them. In line with what I said earlier, I don't think that it is possible to come up with axioms that are all absolute and in need of no further justification. However, I think that we have no choice but to start with some 'brute facts', as nothing else is possible. Here are mine:

Axiom 1: all people desire to lead happy and flourishing lives (with a very few exceptions)
Axiom 2: all people count for one, and nobody counts for more than one
Goal: our morality should aim to give the most people the best chance of achieving the desire from axiom 1, but whilst not violating axiom 2

I cannot fully justify these, but this is to be expected, since I don't think that there are any objective moral facts in the sense that they exist independently of human beings (as part of the fabric of the universe). Nevertheless, I think that axiom 1 is probably an empirical fact of basic human biology and psychology. Pleasure, happiness, and pain are basic survival mechanisms: the former two are associated with courses of action (eating, sex, thinking, cooperating with others etc.) that are conducive to continued survival (and passing on one's genes), while pain is associated with those that are not. Hence, human beings evolved strong instincts and desires to seek happiness (in its many forms), and to avoid pain (in its many forms).

The second axiom just encapsulates a basic concept of equality amongst human beings i.e. everybody's interests should be considered equally when making decisions. Our goal is to give as many people as possible the best chance of realising this basic desire to lead a happy and flourishing life.

I don't think that this morality fits neatly into any of the standard categories, i.e. objectivism, subjectivism, or intersubjectivism. It is not objective in the usual sense, as there are no objective moral truths that exist in the universe independently of human beings. However, it is objective in the sense that it is objectively true that certain ways of acting are more or less likely to lead to our goal. Moreover, these types of objective moral facts exist independently of human opinion on the matter - whether the opinions are those of individuals (subjectivism) or of societies (intersubjectivism).

For example, I don't think that there exists an objective moral fact in the universe, independent of human beings, that indiscriminate torture and murder is bad. However, I do think it is objectively true that for any society to allow indiscriminate torture and murder would lead it away from the overall moral goal. This I believe as I think that our shared evolutionary history is such that there are certain core biological and psychological facts about human beings that we all share (with a very few possible exceptions).

Therefore, such moral facts as we can establish from my axioms are universally true - regardless of any apparent variation in individual or societal opinion on the matter. If some individual within a society, or the society in general thinks that indiscriminate torture and murder is a good moral rule to have, then they are simply wrong by the morality that I propose. This I assert, as having this moral rule would in fact lead inevitably to less overall human happiness and flourishing within that society. Hence, I reject subjectivism and intersubjectivism.

I think that we should be able to derive some of Kant's Categorical Imperatives from my axioms. For example, why should people not generally treat others purely as means to an end? Because to do so would result in other people withdrawing their goodwill and help, resulting in a lack of reciprocal altruism within the society. I would contend that this withdrawal of goodwill would result in a decline in the overall happiness and flourishing within the society - thus moving away from the overall moral goal. So, not treating others purely as means to an end is not a fundamental axiom, but I think that it can be derived as a good rule of thumb.

One might object to my first axiom by stating that there is a paradox in that to achieve the best results in terms of happiness you should not have it as your conscious goal. However, the paradox only applies if you pursue happiness directly. I am suggesting instead that we discover what tends to lead to more happiness, and do that. Happiness will then tend to follow. And this can be applied universally to maximise overall human happiness and flourishing.

Another objection might be that truth is a greater aim than happiness. However, whilst I do think that truth is generally to be considered a good, this is because its promotion tends to lead to more happiness. As such, I think that it is a derived value, and therefore less fundamental than the promotion of happiness.

Now that we have defined our starting point, we can return to our earlier conundrum, and derive the fact that killing is morally wrong as, in general, it does not maximise happiness and flourishing, and minimise suffering. The process of being killed will likely involve suffering, and the fear of being killed will likely cause great mental anguish. Furthermore, in the case of self-aware, sentient beings, we are thwarting their future life plans, desires, and preference to continue living, depriving them of potential future pleasure, and causing mental pain and suffering to their friends and family. Much of this can be extended to other sentient animals too, so killing them is also morally wrong within my scheme. Under such an ethic, the unnecessary killing of sentient animals for food, sport, or experimentation becomes analogous to doing the same to mentally disabled humans who have a similar level of mental development - something that most of us would rightly find abhorrent.

In the case of non-sentient animals though, the rule against killing becomes harder to defend, other than on the basis of not causing them suffering. In this sense, I think that there is a sliding scale of wrongness when it comes to killing animals - based upon their level of sentience.

And why, exactly, should we live life in a way that attempts to maximise overall happiness? Because leading life in this way will likely increase our own chances of leading a happy and flourishing life. So, there is no need to have some universal lawgiver whose rules we must obey if we want eternal bliss rather than eternal suffering. Rather, we should live our lives in a way that encourages happiness in general, as it will promote our own happiness and flourishing.

Is Happiness Everything? A couple of thought experiments

One might object that maximising happiness might not always be the most moral thing to do. Firstly, as I stated earlier, I do not presuppose that maximising happiness is an absolute and transcendental rule, but rather a provisional axiom that I am using in order to derive a moral system. However, if we were to argue against using this axiom, we might take a number of routes.

We might, for example, argue that if we could make everyone permanently happy by injecting them with a happy-drug, would it be morally right to do so? The answer being sought is, of course, no.

However, this thought experiment only succeeds in refuting the idea of maximising a simplistic and caricatured version of happiness. We need to go back to the biological roots of happiness to see exactly why this is so. The thought experiment presents happiness only in terms of being a blissed-out zombie. However, in biological terms, many different things produce feelings of happiness, and these feelings come in many different flavours. For example, security, intellectual stimulation, appreciation of beauty and art, love, sex, eating and drinking, exercising, freedom from pain, exploring, achievement, and altruism can all produce different categories of happiness. The reason that there are many causes and varieties of happiness is due to the fact that, for a complex and social animal such as ourselves, many activities and courses of action increase our chances of survival, and hence of passing on our genes.

We’ve already seen the benefits of altruism to our survival. Getting happiness from eating, drinking, and sex is pretty obviously beneficial to our chances of survival, as is being secure (i.e. out of the way of predators). That we get pleasure from such things as exploration and intellectual stimulation results from the fact that these things gave us a survival advantage, and were thus selected for by evolution. Other things are just as obviously by-products of some other genetic predisposition. For example, appreciation of art does not, in itself, give us a survival advantage. However, it might be a by-product of our finding certain natural things more visually appealing than others (e.g. landscapes that offer us better survival prospects, potential mates who are healthy etc).

So, to maximise happiness is to maximise all of these varieties of happiness, and all others – not just to maximise some narrow form of physical pleasure. So, the thought experiment is fundamentally flawed. Imagine that we re-phrase the thought experiment to give somebody a drug that would maximise all of these forms of happiness. Some unhappiness would still be required, as this would push us to avoid things such as pain, starvation, and dehydration; and would also help to drive us to seek out a good place to live, a suitable mate etc. However, the drug would minimise these unhappy feelings as much as would be consistent with allowing the person to function otherwise unaltered in their daily life. In such a scenario, it is not at all clear why we would be morally wrong to administer this drug. In fact, I would predict that many people would clamour for such a drug – witness the situation with Prozac.

The only valid objection would be that of doing the drugging secretly. However, I think that, even then, a good case could be made for doing it. The whole idea of positive versus negative freedom is much debated. Nevertheless, in this case one could argue that, instead of having sinister totalitarian undertones, secretly giving somebody this drug would be analogous to secretly rigging a lottery to allow the person in question to win. Although we have deceived them, the outcome is the one that they would have chosen anyway – assuming that they were able to experience both situations and make a rational choice.

Of course, in totalitarian regimes, the rulers believe that they know what is best for the people, and impose it upon them. The big difference in my scenario is that the thing being imposed is so elemental and in accordance with our biological functions that it is rather like decreeing that people continue breathing – it is what any rational person (who does not wish to die) would choose to do anyway.

We might consider a reversal of the thought experiment, and consider a situation in which somebody has been given such a drug since birth, and has thus enjoyed a life of maximal happiness in the form that I have described. To them, that is the only life that they have known. Would we be morally right to permanently withdraw this drug, and commit them to a life with all of the unhappinesses that we know?

A second thought experiment that seeks to refute the axiom of maximising happiness is the case of a transplant patient. To quote Stephen Law:

“Suppose you’re the doctor in charge of six patients. The first has a minor medical condition easily cured. The others have failing organs and will soon die without transplants. No replacement organs are available. But then you discover that the first patient can provide perfect donor organs. So you can murder the first patient to save the rest. Or you can cure the first and watch five die. What is the right thing to do?

A simple utilitarian calculation suggests you should kill one patient to save the rest. After all, that will result in five happy patients and only one set of grieving relatives rather than one happy patient and five sets of grieving relatives. Yet the killing of one patient to save the rest strikes most of us as very wrong indeed. What this case brings out, it’s suggested, is that the right course of action is not always to maximize happiness. Indeed, it’s said that such cases demonstrate that human beings have certain fundamental rights, including a right to life, and that these rights ought not to be trampled, whatever the consequences for happiness.”

On the face of it, this does seem like a valid objection. However, a little thought leads to the conclusion that to implement such a system would lead to a huge increase in general unhappiness, as healthy people would live in fear of having their organs forcibly removed - a fate that would then often befall them in practice. So, it seems that respecting the so-called basic human right to life is essential in a society that attempts to maximise happiness overall, and the approach that I outlined is not falsified by this thought experiment.

Furthermore, as I mentioned earlier, more robust formulations of utilitarianism allow derived principles such as equality, the right to life, and so on to be taken as basic rules, not to be broken unless there is a very good reason. For example, it would be considered justifiable to kill a person who is about to explode a bomb that will kill many innocent people. However, in the case outlined in the thought experiment, no such overriding reason exists, so the healthy patient would not be killed.

Another apparent counter-example to utilitarianism is again courtesy of Stephen Law, who describes Nozick’s Experience Machine as follows:

“Here’s one last apparent counter-example to utilitarianism from the contemporary philosopher Robert Nozick. Suppose a machine is built that can replicate any experience. Plug yourself in and it will stimulate your brain in just the way it would be stimulated if you were, say, climbing mount Everest or walking on the Moon. The experiences this machine generates are indistinguishable from those you would get if you were experiencing the real thing.
For those of us that want to experience exotic and intense pleasures this machine offers a fantastic opportunity. Notice it can even induce higher pleasures - the pleasure gained from engaging in a philosophical debate or listening to a Beethoven symphony need be no less intense for being experienced within a virtual world. Many of us would be keen to try out this machine. But what of the offer permanently to immerse yourself in such a pleasure-inducing world? Most of us would refuse. Someone who has climbed Everest in virtual reality has not really climbed Everest. And someone who has enjoyed a month-long affair with the computer-generated Lara Croft has not really made any sort of meaningful connection with another human being.
The truth is we don’t just want to “feel happy”. Most of us also want to lead lives that are authentic. Someone who (like Truman in The Truman Show) had unwittingly lived out their whole life within a carefully controlled environment might subjectively feel content and fulfilled. But were they to be told on their deathbed that it had all been a carefully staged illusion - that there had been no real relationships, that their “achievements” had all been carefully managed - then they might well feel that theirs was, after all, a life sadly wasted.

Again, it seems that what Layard calls “feeling good” is not, ultimately, what’s most important to most of us. Nor, it seems, is arranging things to maximize the feeling of happiness always morally the right thing to do. Secretly plugging everyone into a deceptive, Matrix-like pleasure-inducing virtual world would surely be very wrong indeed.”

This is an interesting thought experiment, but the objection to it seems immediately apparent to me. The issue that people have with living permanently in the experience machine stems from a failure to properly understand what the experiment entails, and from the feelings of apprehension, fear, anxiety and so on that this failure produces. In reality, everything we experience, we experience through our brain. Everything that forms part of our experience of life – sight, sound, smell, touch, pain, pleasure, thought and so on – takes place within our brain. The rest of our body allows us to move about and supplies sensory input to our brain but, ultimately, everything that defines us and our life is happening in our brain.

Given this, the objection to the experience machine is not at all obvious any more. All we are doing is changing the source of the external input to our brain. Instead of coming from our body, it is coming from the experience machine. Furthermore, the experiences that we would have would be far more pleasurable than our ‘real life’.

One could further object that, whatever the source of the input to our brain, we would be swapping life in the real universe for life in an artificial one. However, given that we can never be sure that the universe ‘out there’ actually exists (we might already be a brain in a vat, or already be living in a simulation), perhaps it would be worth choosing to live a virtual but supremely pleasurable life?

I think that people imagine living permanently without their friends and loved ones, and without the other aspects of their life that they enjoy now, and decide that they would not want to do this. However, this is a failure to understand what the thought experiment entails. For the experience machine to provide complete happiness, it would have to either duplicate our existing loved ones or create new ones. Furthermore, any memory of our current lives would be wiped, so that we would feel no longing for them. So, once in the machine, we could not be aware that we were living a virtual life, or it would cease to provide maximal happiness. Nor would it ever be revealed to us that we had lived our life in the machine, so we would have no sense of having lived a fake life. This is a fallacy similar to the one committed when people fear their own death (rather than the process of dying, which might indeed be unpleasant) because they would never be with their loved ones again. Of course, in death we exist no more, so we do not live on, missing our loved ones.

In such a scenario, it is not at all obvious to me that it would be rational to reject the chance of living in the experience machine. The thought experiment is just far too simplistic.

In summary, I think that the case against some robust form of utilitarianism has not been made. Hence, I still believe that I am justified in making the maximising of happiness and the minimising of suffering my provisional axiom from which all morality is derived.

Conclusion


To summarise, I believe that there is no absolutely correct moral system per se. In particular, the moral systems advocated by the Abrahamic religions suffer from a number of fatal flaws that make them do more harm than good. They have merely appropriated our innate evolutionary morality, combined it with ideas from earlier thinkers, and repackaged it as their own. In the process, strict moral rules and punishments for imaginary crimes such as blasphemy were added, and belief in certain things without evidence (i.e. faith) was elevated to a virtue.

The roots of our morality lie in our evolutionary history, and came about as a way of maximising the chances of our genes being passed on. There is no absolute moral yardstick, but we can still strive to create a good morality by deriving it from some basic, rational axioms. I think that some form of Utilitarianism will likely be at the heart of any such moral system.
