In this rather technical post I will discuss secular morality with reference to the book Value and Virtue in a Godless Universe, by Erik J. Wielenberg. I will explain why I believe this book ultimately fails in its goal, and then discuss what I think morality actually is and ought to be.
Does Wielenberg answer Moore's Question?
G.E. Moore advanced his Open Question Argument in order to demonstrate the indefinability of the term ‘good’ as it is used within ethical theories (although the argument can also be applied outside of ethical theory). Many ethical philosophers have tried to prove some of their ethical claims by analysing the meaning of the word ‘good’. Moore, however, held that ‘good’ is an example of an ontologically simple thing that is incapable of definition in terms of any simpler concepts. Instead, he believed that any proposed definition of goodness will fail to fully capture its meaning. At the same time, he still believed that we intuitively recognise examples of ‘goodness’ when we see them, even though the concept itself is incapable of definition. He gave ‘pleasure’ and ‘yellow’ as other examples of such things.
In order to identify such cases, and in particular to demonstrate that goodness is one of these, Moore proposed that we ask an identity question of the type: “is it true that X is Y?” If this can be questioned by a conceptually competent person, then it is deemed an open question, or else it is a closed question. If the identity between X and Y forms an open question, then Moore supposed that the definition of X as Y fails to fully capture the meaning of X. In particular, if X is ‘good’ then Moore held that any Y (where Y is some set of natural properties) will fail to capture the full meaning of X, and hence any subsequent analysis will err. The type of argument that attempts to define a simple, non-natural, and indefinable property in terms of natural properties was supposed by Moore to be a formal fallacy – one that he termed the Naturalistic Fallacy.
In order to apply the Open Question Argument, we take any definition of ‘good’ – ‘good(ness) is X’ – and see whether it makes sense to ask whether goodness really is X, and whether X really is good. For example, if we say ‘goodness is pleasure’, does it make sense to ask, ‘is goodness really pleasure?’, and ‘is pleasure truly good?’ If it does indeed make sense to ask such questions of the proposed definition of good, then it is an open question in the sense that Moore intended. Moore held that any attempt to define ‘good’ in terms of natural properties will be an open question, as the definition in question will always fail to capture the full meaning. As such, according to Moore, any ethical theory that attempts to define what is good will commit the Naturalistic Fallacy. By contrast, take the statement: ‘a bachelor is an unmarried man’. In this case, it makes no sense to ask: ‘yes, but is a bachelor really an unmarried man?’ or ‘but is every unmarried man really a bachelor?’ The reason it doesn’t is that the full meaning of ‘bachelor’ is captured by ‘unmarried man.’ Therefore, in Moore’s terminology, this is a closed question.
In the book ‘Value and Virtue in a Godless Universe’ (hereafter referred to as VaV), Wielenberg doesn’t explicitly refer to, much less answer, Moore’s question. Furthermore, I can find no implicit reference to Moore’s question. Although Wielenberg does mention Moore a couple of times, this is in relation to a discussion about intrinsic versus extrinsic values, rather than to his Open Question Argument.
Now, from what Wielenberg says in VaV – specifically in chapter 3, when he discusses the reasons to be moral – I believe that he subscribes to Kant’s metaphysical system of ethics. As Moore’s Open Question Argument is most commonly used as an attempted refutation of naturalistic ethical theories (such as Utilitarianism), Wielenberg may say (if asked) that his system of ethical beliefs is not vulnerable to the Open Question Argument, as Kantian ethics does not define moral facts in terms of natural properties. However, Moore’s discussion of the Naturalistic Fallacy does also cover metaphysical theories of ethics, such as Kant’s. According to Moore, if such ethical systems attempt to define the good, as Kantian ethics does (in terms of duty), then they are committing the Naturalistic Fallacy too. The Naturalistic Fallacy should perhaps more correctly be called the definist fallacy, as it is really about mistaking the non-synonymous for the synonymous, and has nothing to do with the distinction between the natural and the non-natural per se (as this is normally understood).
According to Kant, in the first formulation of his Categorical Imperative, we should “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.” However, as mentioned, Moore specifically addresses this in his Principia Ethica, and decides that ‘this is good’ is not identical to ‘this is willed’, and hence Kant’s first Categorical Imperative is an open question. Now, whether Moore is right in his analysis here is a moot point (that I will address below). However, by failing to address this possible objection at all, Wielenberg has left himself open to the criticism that his thesis is fundamentally flawed.
We may profitably ask at this point if Moore’s Open Question Argument is a valid argument at all. In order to justify his argument, Moore’s line of reasoning might take the following syllogistic form:
[A]
A1: If a proposed definition for a word can be questioned by a conceptually competent person [i.e. one who understands the conceptual terms of the definition], then that definition fails to fully capture the word’s meaning [i.e. it will be an ‘open question’ in Moore’s terminology]
A2: All proposed definitions for a word that are not ontologically simple a priori ones [i.e. they are what Moore terms ‘complex’] can be questioned by a conceptually competent person
A3: Therefore, if a proposed definition for a word is not an ontologically simple a priori one, then that definition fails to fully capture the word’s meaning
From this, Moore might then argue:
[B]
B1: If a proposed definition for a word is not an ontologically simple a priori one, then that definition fails to fully capture the word’s meaning [i.e. A3 from above]
B2: Any definition of a word in terms of natural properties is not an ontologically simple a priori one
B3: Therefore, any definition of a word in terms of natural properties fails to fully capture the word’s meaning
And then with goodness in particular:
[C]
C1: If a putative ethical theory defines goodness in a way that fails to fully capture its meaning, then any subsequent analysis of goodness in that theory will err
C2: Any ethical theory that defines goodness in terms of natural properties fails to fully capture its meaning (from B3 above)
C3: Therefore, with any ethical theory that defines goodness in terms of natural properties, any subsequent analysis of goodness in that theory will err
From this conclusion C3, Moore would then conclude that Ethical Naturalism must be false.
Now, I believe that the arguments above fall at the very first hurdle, as I think that A1 is not self-evidently true, and is in fact demonstrably false. In order to falsify A1, all I need to do is to produce one counterexample where a proposed definition of a word can be questioned by a conceptually competent person, but still fully captures the word’s meaning. In that case, it would be an open question by Moore’s definition of it, but the word’s meaning would still be fully captured. I believe that ‘goodness’ falls into this category, in which case Moore’s Open Question Argument would fail, as a conceptually competent person could question the definition, but it would still fully capture the word’s meaning. I will come to that later, but will look at another example first. If A1 is false then argument A is unsound. If argument A is unsound, then B1 is not shown to be true, and argument B is therefore unsound. From that, it follows directly that argument C is also unsound, as C2 is not shown to be true.
In order to falsify A1 above, I will look at a particular example, i.e. water. Now, scientific investigation of water has found it to be a chemical substance that consists of molecules composed of two hydrogen atoms and one oxygen atom that are bonded together. This we denote symbolically as H2O. So, once we understand this concept, we can say that water is [the chemical substance that we denote symbolically as] H2O. However, we only know that this identity is true because we have acquired all of the relevant facts of the matter, and have analysed these facts in a cognitively accurate way. By such means we have determined a posteriori that water and H2O are synonymous. Nevertheless, this identity cannot be deduced from the concepts alone, as it requires us to know additional facts about the world, so it could be questioned by a conceptually competent person, i.e. the question: ‘this is water, but is it true that it is H2O?’ is an open one according to Moore’s definition. So, water = H2O is a definition that can be questioned by a conceptually competent person (i.e. a person who is familiar with water, and who understands the symbolic concept of H2O), but that definition still fully captures the word’s meaning. Hence, A1 is falsified by this counterexample.
Now I will consider Moore’s argument in relation to morality. Moore argued that moral truths are intuitive, in the sense that they are supposed to be recognisable as being self-evidently true. In this regard, Moore suggested that ‘goodness’ is analogous to the quality of ‘yellowness’, which we can recognise and agree upon when we see it, but which he believed to be indefinable in terms of ‘natural’ properties. This raises some difficult questions as to exactly what this moral intuition or ‘sense’ is, how it works, and how we might adjudicate in any case of disagreements over the results that it produces. After all, we use our vision and brain in order to recognise the property of something being yellow, but by what analogous means could we recognise ‘goodness’ when we come across it? I will say a little bit more about this later.
By using this analogy, Moore is supposing that ‘yellow’ is something that we can intuitively recognise when we see it, but which we could not define in a way that would not be an open question – as he supposes to be the case with goodness too. Moreover, he rules out a complex definition of yellow (and of goodness) because he supposes that any such definition would be an open question, and would therefore fail to fully capture the word’s meaning. However, I have already shown this line of reasoning to be fallacious, as a definition being an open question does not entail that said definition fails to capture the full meaning of the word, i.e. argument A above is unsound. In the case of yellow, if we were able to investigate a posteriori what it normatively means to us, then we would probably find that it is identical with the subjective experience of some set of human brain states (or some common property in the intersection of the properties of these states) that obtain when light within a range of wavelengths hits the eye of a person with normal visual function (as per the theories of optics, and the Identity Theory of Mind – which is a plausible, parsimonious and evidentially well-supported theory). In any case, there probably exists some true definition of yellow in terms of natural properties of the universe (including our minds) that fully captures the meaning of the word.
So, if we knew all the relevant facts of the matter, and were cognitively accurate in the analysis of these facts, then we could probably provide the definition for yellow that Moore supposes that we can’t. It would still be an open question, as it could not be derived from the concepts alone, but I can see no good reason why a complex a posteriori one in terms of natural properties would not be valid in this case. Conceptually competent people might still disagree with this definition, but this would either be due to them having a mistaken or incomplete knowledge of the facts of the matter, or of having made logical errors in their reasoning based upon these facts (or both).
I believe that ‘good’ is analogous to these previous examples (and, ironically, Moore thought so too in the case of yellow), in that there exist definitions of it that, whilst open questions in Moore’s terminology, are true nonetheless. Moreover, I believe that we are probably in a position to establish at least one a posteriori complex definition, and others are possible too. This definition goes back to Moore’s original idea of some inscrutable moral intuition, so I will consider goodness only in terms of this intuition. My argument will take the following form:
P1: X = Y
P2: Y = Z
C: Therefore, X = Z [an example of a transitive relationship]
Specifically:
P1: Goodness is the property of some human act that we intuitively recognise as being self-evidently good [from Moore]
P2: The property of some human act that we intuitively recognise as being self-evidently good is the altruistic property of a particular set of evolved adaptive behaviours towards others
C: Therefore, goodness is the altruistic property of a particular set of evolved adaptive behaviours towards others
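As an aside, the transitivity of identity that this argument form relies upon is uncontroversial, and can even be checked mechanically – for example, in the Lean proof assistant (a sketch; X, Y and Z here are arbitrary placeholders, not anything specific to the ethical argument):

```lean
-- Identity is transitive: from X = Y and Y = Z we may conclude X = Z.
example {α : Type} (X Y Z : α) (h₁ : X = Y) (h₂ : Y = Z) : X = Z :=
  h₁.trans h₂
```

The philosophical work, of course, lies in establishing the two premises, not in the transitive step itself.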
Evolutionary psychology and sociobiology have provided theories of how human morality evolved that are plausible, parsimonious, and have good explanatory power and scope. Furthermore, research has produced much evidential support for their explanations that our ‘moral intuition’ (altruism, compassion, empathy, etc.; as well as emotions to enforce this group morality, such as revenge, shame, and guilt) is a biological evolutionary adaptation that came about because acting in these ways was in the best evolutionary interests of our ancestors in their small social group environments. Although these behaviours may have had short-term disadvantages to the humans acting in these ways (e.g. giving up resources and time, and possibly putting themselves in danger for others), the advantage of others reciprocating gave a greater overall evolutionary advantage – which is why these traits were selected for. Moreover, precursors of this type of moral behaviour have been observed in apes – for example, in the research of Frans de Waal – adding further weight to the idea that our intuitive morality is an evolutionary adaptation.
So, from the foregoing, if we are using morally ‘good’ in the sense of that which we intuitively recognise as being self-evidently so, in the way that Moore supposed (e.g. including such things as compassion, integrity, altruism etc.), then we can probably define it in a way that makes it a closed question i.e. something like: goodness is the altruistic property of a particular set of evolved behaviours towards others. Whilst the exact definition is moot, that there is some definition along these lines that forms a closed question is probably not. For, if the current theories of evolutionary psychology and sociobiology relating to the evolution of morality are largely true (as they probably are), then the property of an act that we intuitively recognise as good just is this altruistic property present in this set of evolved behaviours. Furthermore, the emotional urge to act in these ways, and the consequent emotional payoffs, are associated adaptations too – as they reinforce the behaviours. So, our moral intuition becomes fundamentally egoistic (but not usually in a conscious and calculating way), as we have an emotional urge to act in ways that would probably give us an evolutionary advantage if we were living in the small and primitive human groups of our ancestors.
There could be disagreement about a definition of goodness of this type, as it is not a simple a priori one. However, as with the previous examples, if we were in possession of all of the relevant facts of the matter, and were cognitively accurate in the analysis of these facts, then there would be no disagreement.
So, in conclusion, I think that Moore was correct in his assertion that any putative moral theory that analyses goodness should be able to define what it means by good in a way that makes it a closed question. However, I believe that Moore was wrong in his belief that only an ontologically simple a priori definition would be admissible in such a case. Furthermore, at least one such definition of goodness is possible if we make use of Moore’s idea that we possess an intuitive moral sense that allows us to recognise goodness when we see it. Note that other equally valid complex but closed-question definitions may also be possible if we approach morality from the perspective that it is normatively true that the fundamental human desire is for happiness (which, not coincidentally, historically largely coincided with being in a situation, or performing an act, that conferred some evolutionary advantage). In that case we can derive a definition of good as being, for example, ultimate human happiness.
Later I will derive a different but related definition of goodness that I believe is rationally justified, and not just based upon our moral intuition.
Does Wielenberg answer Hume's Question?
The so-called is-ought problem, as articulated by the philosopher David Hume in his ‘A Treatise of Human Nature’ (1739), highlights the logical error that people make if they attempt to deduce some moral ‘ought’ conclusion from factual ‘is’ propositions alone. For example, we could construct arguments such as the following:
P: Gay sex can never result in pregnancy
C: Therefore, one ought not to engage in gay sex [i.e. it is morally wrong]
Or,
P: Fox hunting causes physical and mental suffering to foxes
C: Therefore, one ought not hunt foxes [i.e. fox hunting is morally wrong]
Or,
P: Giving money to charity leads to an increase in overall human happiness
C: Therefore, one ought to give money to charity [i.e. giving money to charity is morally good]
In each of the above cases we are moving from a statement of something that ‘is’ the case to a conclusion about what ‘ought’ to be done or not done. However, in these and other similar cases the arguments are not logically valid - regardless of whether or not one accepts the truth of the propositions - as no evaluative conclusion can be deduced from purely factual premises. In each of the above cases, and in general, one would need to include a suitable evaluative premise. For example, in the first case we would need to revise the argument as follows:
P1: One ought not to engage in sex that can never result in pregnancy
P2: Gay sex can never result in pregnancy
C: Therefore, one ought not to engage in gay sex [i.e. it is morally wrong]
Now, in this revised version of the first argument, even if P1 (or P2) is false, the argument itself is at least formally valid and there is no longer an ‘is-ought gap’.
Now, in VaV, Wielenberg doesn’t address Hume’s question directly. However, if he were asked, I think that Wielenberg might respond by saying that he endorses Kantian ethics and that these are not vulnerable to Hume’s argument, as moral ‘oughts’ are not deduced from factual ‘is’ propositions alone in this ethical system. Instead, they are supposedly deduced by pure reason, by means of Kant’s Categorical Imperative. More formally, based upon Kant’s Categorical Imperative we could say:
P1: If some maxim X cannot be universalized without resulting in a logical contradiction, then one ought not to act by maxim X
P2: Maxim X cannot be universalized without resulting in a logical contradiction
C: Therefore, one ought not to act by maxim X [i.e. it is morally wrong, and in Kant’s words we have a ‘perfect duty’ not to act by it]
A particular example could be:
P1: If theft cannot be universalized without resulting in a logical contradiction, then one ought not to steal
P2: Theft cannot be universalized without resulting in a logical contradiction
C: Therefore, one ought not to steal [i.e. it is morally wrong, and in Kant’s words we have a ‘perfect duty’ not to steal]
I will not analyse here whether P1 and P2 in the above cases are actually true, but the argument itself is formally valid and doesn’t attempt to deduce some evaluative conclusion from factual premises alone, and so is indeed not vulnerable to Hume’s question. Wielenberg might have been wise to spell this out explicitly, but assuming that this would be his answer then I don’t think that it could justifiably be said to be a failing of his entire book.
Does Wielenberg provide any reason to be moral?
In chapter three of VaV, Wielenberg analyses and rejects three answers to the ‘why be moral?’ question that each attempt to show that morality and self-interest always or generally coincide: William Lane Craig’s conception of divine justice; Aristotle’s theory of virtue ethics, as described in his Nicomachean Ethics; and Hume’s alternative concept of virtue ethics (a traditional axiology, as opposed to Aristotle’s revisionist axiology), as developed in his An Enquiry Concerning the Principles of Morals.
Wielenberg then goes on to propose an alternative reason to be moral – the idea that an action’s being morally obligatory is itself a reason for performing that action, regardless of whether doing so is in one’s interest. This is Kant’s position, and is the one that Wielenberg himself endorses. However, Wielenberg fails to properly justify why certain actions are morally obligatory, or how we might decide which actions these are, so his argument as presented is little more than the unsubstantiated assertion that we should be moral because we are obligated to be so. His assertion immediately raises the question of exactly why we are so obligated. It is not self-evidently true that certain actions are morally obligatory (or how we might determine which actions these are), and Wielenberg provides no real justification for his claim. Therefore, I think that Wielenberg has failed in VaV to provide sufficient warrant to be moral, which I think is a fairly significant failing in a book that attempts to make a case for the existence of ethical truths in a Godless universe.
Kantian Ethics and its Flaws
Having said that, as Wielenberg states that he is endorsing a Kantian view of morality, he might reply that the justifications for the theory of morally obligatory actions can be found in Kant’s own work, and that any reader requiring such justifications should refer to that material. I think that this would be a weak argument, as such an important part of Wielenberg’s case for why we should be moral in a Godless universe should really have been demonstrated explicitly in his own book. However, in order to determine if Wielenberg’s case can in fact be justified by reference to Kant’s theories or not, I will now analyse what Kant had to say on the matter of moral obligations.
At the core of Kant’s moral theory are the three formulations of his so-called Categorical Imperative, from the Groundwork of the Metaphysic of Morals. These are as follows:
First formulation: “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction”
Second formulation: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means to an end”
Third formulation: “Therefore, every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends.”
The first formulation is the most fundamental one, with the second and third formulations supposedly having been derived from the first (although there is some debate as to whether they can be so derived, or whether they are instead independent of the first formulation). Kant stressed that the three formulations of his imperative are not hypothetical (e.g. if you want some result X, then you ought to act in some way Y), but are an absolute, unconditional requirement - whatever the consequences of the action for ourselves or others (e.g. you ought to act in some way Y, regardless of the result). That is, they are categorical.
Kant believed that moral rules should be absolute and should apply to everyone equally, including to ourselves. In order to make moral rules inescapable in this way, he sought to make a case that rationality demands that we act morally; that acting immorally would equate to acting irrationally, and that morality would thereby become as obligatory for us as is rationality. Kant believed that human beings are (in general) rational agents with moral autonomy. He further believed that rationality entails that when we, as rational agents, act in a certain way then we are implicitly saying that any other rational agent may act in the same way in similar circumstances, i.e. we are legislating universally. He reasoned thus because he believed that we could not hold ourselves up as some kind of moral exception: we are all similarly rational agents with moral autonomy, so rational consistency demands that any moral rule that applies to others should equally apply to us too in the same circumstances. Moreover, he held that any maxim that would result in a contradiction if universalized would be an immoral maxim. As such, Kant believed it would actually be irrational for us to act in a way that we would not want others to be able to do too, or that would become self-defeating if everyone so acted. So, for Kant, universalizability is actually an essential part of rationality. From this idea, he derived the first formulation of the Categorical Imperative. In Kant’s terminology, we have a perfect duty not to act by maxims that we would not want everyone to act by in similar circumstances, or that would result in logical contradictions if we attempt to universalise them.
In syllogistic form, Kant’s argument for the validity of the first formulation of his Categorical Imperative can be represented as follows:
P1: We have a duty to act rationally [as rational agents]
P2: Acting rationally entails that we act only according to maxims that are universalizable
C: Therefore, we have a duty to act only according to maxims that are universalizable [from which we can derive the first formulation of the Categorical Imperative]
I believe that this argument is unsound, with premise P2 not being self-evidently true, and in fact being probably false. Premise P1 might also be open to challenge, but I will not analyse that one any further here. I will now explain why I believe that premise P2 is false.
• Firstly, Kant seems to have redefined rationality in an idiosyncratic way in order to suit the requirements of his argument. Most people would not understand acting rationally in the way that Kant wants us to, and he is not at liberty to just redefine it at whim. The proper definition of a word is fixed by how that word is used in practice, and Kant’s definition does not satisfy this criterion. Acting rationally is typically taken to mean acting in a way that is in one’s best interests by maximising the chance of satisfying one’s preferences, desires, or goals (or, more formally, to maximise one’s personal utility). In order to do this we should endeavour to obtain all of the true and relevant facts of the matter, and reason upon these facts in a logically valid way to deduce what will best satisfy one’s preferences etc. Of course, preferences, desires, and goals can themselves be irrational, as they might not be things that maximise our personal utility, since they themselves could have been based upon false knowledge or faulty reasoning, or might entail some negative consequences for us that outweigh the positive. For example, if one of my goals is to meet a friend for dinner on Tuesday evening, then it would be irrational for me to conclude that today is Tuesday because I picked that day at random from a hat containing a strip of paper for each day of the week – as that is not a reliable way of getting knowledge about the world. It would also be irrational for me to think that today is simultaneously Monday and Tuesday (assuming that I understand the meaning of the words ‘Monday’ and ‘Tuesday’), as this involves a logical contradiction (X and not X).
• Secondly, whilst rationality does entail acting in a logically consistent way in relevantly similar circumstances, it is not at all obvious that this type of logical consistency entails that I should only act by maxims that I would want everyone else to act by. Consistency in terms of logical thought and consistency in terms of how I would like to act and how I would like others to act (which might be called moral consistency) seem to be different types of consistency, with the former not necessarily entailing the latter. After all, I have much more interest in my own welfare than in that of others, so why should there not rationally be an asymmetry between how I would like to act and how I would like others to act, if this is in my best interests? To give an example, if I believe that I could fiddle my taxes without being found out, how does this rationally entail that I want others to be able to fiddle their taxes too? That my doing so entails that I think others should be able to do so too is not self-evidently true, and has not been demonstrated to be so, so it can’t just be asserted to be true. It might ultimately not be in my best interests to fiddle my taxes, but that is not what Kant is interested in, as he wants the rule to be absolute regardless of the consequences and interests for me or others.
• Thirdly, if we look at some specific hypothetical examples of what Kant’s definition of rationality entails, then we can get some very implausible results. For example, according to the typical definition of acting rationally, it might be rational for me to tell a lie in certain circumstances if doing so would be in my best interests (by helping me satisfy some preference, desire, or goal etc.). But according to Kant’s definition of rationality, it would actually be irrational for me to lie under any circumstances, as according to Kant lying cannot be universalized without contradiction. If I imagine a scenario in which I am a Jew attempting to evade capture and probable death by the Nazis during the Second World War, then Kant would say that it would actually be irrational for me to lie about my identity if stopped by a member of the SS, even if telling the truth would probably lead directly to my death. This is a highly implausible conclusion, and one that shows that something is probably wrong with the theory. We might attempt to rescue the theory by moving away from an absolute prohibition on acting by maxims that are not universalizable, and allowing exceptions where we would be happy to allow the same exceptions to others in similar circumstances, but this would be at the expense of taking consequences into account and losing the inescapability that Kant was seeking.
• Finally, we can end up with conflicting maxims depending upon how an action is framed. For example, imagine that premise P2 entails that lying is deemed irrational, but that it also entails that not trying to protect my friends from harm is irrational. Now imagine that the Jew being sought by the Nazis is my friend. Do I truthfully reveal that he is hiding in my house (as lying is irrational), or protect him by lying about his whereabouts (as not doing so would be irrational)? As another example, if I am in a hurry to get to an appointment on time, and see a child drowning in a pool, should I rescue the child or not? If my maxim is ‘help others in need’ then I should rescue the child. However, if my maxim is ‘arrive on time for appointments’ then I should not, if doing so will make me late. So, I can get contrary and conflicting results depending upon how I frame my maxims, and there is no obvious way to determine which way of framing is the ‘correct’ one.
So, in summary, I believe that premise P2 is an idiosyncratic definition of acting rationally that would not be understood by people in general. It also appears to rely upon unjustifiably equating different types of consistency; and it can lead to implausible, ambiguous or self-defeating results. So, I conclude that it is most probably false.
I think that Kant failed in his mission to make moral rules as inescapable as rational ones by attempting to make morality part of rationality by means of his concept of universalizability. Therefore, not only does Wielenberg fail to give sufficient warrant for being moral in VaV, but I believe that the foundation of the moral theory that he endorses (i.e. Kantian ethics) also fails to give sufficient warrant for being moral.
Why Should we be Moral?
In contrast to Wielenberg and Kant, I don’t believe that there exist absolute moral rules that it is our duty to follow regardless of the likely consequences for ourselves and others. I agree with Wielenberg that William Lane Craig’s idea that morality and self-interest coincide (over the long-term) because God sees to it that they coincide is false – primarily because I think belief in God’s existence is unwarranted (although there are also various other problems with Craig’s theory). However, I believe, for the reasons explained above, that Wielenberg’s solution to Karamazov’s Thesis is also false. In fact, I endorse a variant of the view that Wielenberg rejects – namely that morality and self-interest do (generally) coincide, and therefore we should be moral because it is generally in our best interests to be so. I think that there are two separate but related questions that are pertinent here: what is our moral intuition or sense, and how ought we morally to act? I think that it is instructive to have an answer to the former, as it will have implications for our answer to the latter.
In answer to the first of these questions, I believe that the relevant theories of evolutionary psychology and evolutionary biology provide the best explanations (in terms of plausibility, parsimony, and explanatory scope and power) for what our moral intuition is and why we have it. According to these theories, morality was an evolutionary adaptation that gave humans an advantage in small group societies, as those who acted in a reciprocally altruistic way, for example, were more likely to survive and pass on their genes, since these acts were repaid in kind by others in the group (rudimentary forms of this sort of morality are also visible in some primates). Accordingly, I believe that humans evolved reciprocal altruism, and the emotions of compassion, empathy, guilt, shame, and righteous anger etc. that help to enforce it, as this gave them an evolutionary advantage. By considering human social interactions within small group societies as repeated iterations of the Prisoner’s Dilemma, biologists have provided good explanations of how and why reciprocal altruism (one of the core elements of what is normally considered to be morality) might have evolved. In general, Prisoner’s Dilemma type situations occur whenever people’s interests are affected not only by what they do but by what other people do too, and when everyone (including us) will end up worse off if each solely pursues their own individual interests. It is probable that such evolutionary considerations explain the emergence and ubiquity of the so-called Golden Rule in humans, and our moral intuitions in general.
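The structure of such situations can be made concrete with a standard Prisoner’s Dilemma payoff matrix. A minimal sketch follows; the payoff values are the conventional illustrative ones used in game theory texts, not taken from any particular study:

```python
# A single-round Prisoner's Dilemma payoff matrix (illustrative values).
# "C" = cooperate, "D" = defect. Payoffs follow the standard ordering:
# temptation (5) > reward (3) > punishment (1) > sucker's payoff (0).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do well
    ("C", "D"): (0, 5),  # I cooperate, you defect: I get the sucker's payoff
    ("D", "C"): (5, 0),  # I free-ride on your cooperation
    ("D", "D"): (1, 1),  # mutual defection: everyone worse off
}

def payoff(my_move, other_move):
    """Return my payoff given both players' moves."""
    return PAYOFFS[(my_move, other_move)][0]

# The defining feature of the dilemma: if everyone solely pursues their
# own interests (mutual defection), everyone ends up worse off than
# under mutual cooperation.
assert payoff("D", "D") < payoff("C", "C")
```

The assertion at the end captures the point made above: mutual defection leaves everyone worse off than mutual cooperation, even though defection is individually tempting in any single round.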
As would be predicted by these evolutionary theories of our innate morality, the results of large-scale cross-cultural experiments where moral dilemmas are posed to people show that our intuitive moral feelings are remarkably universal and similar. Whilst it is the case that some particular manifestations of morality are culturally specific, these are usually due to some (often forgotten and archaic) environmental pressure (e.g. prohibitions on eating some foodstuff that might once have been toxic), or to some unwarranted beliefs about the world (e.g. religious ones).
Some philosophers argue that there are a number of reasons why the above explanations of our moral intuition are either false or incomplete. They argue that such explanations make morality all about egoistic self-interest calculations, which is antithetical to what morality really is; that they are overly reductive and fail to capture acts of compassion or kindness; that counter-examples show that we don’t always act from self-interest; or that they are nonsensical or abhorrent. However, I think that such philosophers make a number of errors in their reasoning.
Firstly, they confuse ultimate with proximate reasons for action. The ultimate reason that our intuitive moral sense urges us to act in certain ways is that this was to our evolutionary advantage. If we consider the Prisoner’s Dilemma again, a better result is obtained for everyone in a society (including us) if we cooperate so long as others do too, rather than everyone acting in a purely selfish way. This latter way of acting would lead to the Hobbesian dystopia described in his book Leviathan. The best result for us would of course be if others cooperated with us but we did not reciprocate, i.e. if we were free-riders. However, this situation is risky and unstable in practice, as our lack of reciprocity will in all probability be exposed sooner or later, at which point others will likely refuse to cooperate with us – leaving us in a worse position than we would have been in had we all cooperated. In fact, the best result in Prisoner’s Dilemma simulations comes from adopting a variant of the tit-for-tat strategy, which is closely analogous to our general intuitive moral feelings of kindness and compassion towards others in our social group (much more towards those we know, who are thus in a position to reciprocate), but with urges to punish and shun those who fail to reciprocate or act badly towards us.
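The claim that tit-for-tat outperforms pure free-riding over repeated interactions can be sketched in a small simulation. The strategy names and payoff values below are my own illustration, in the spirit of Axelrod’s iterated tournaments:

```python
# Iterated Prisoner's Dilemma sketch (illustrative payoffs).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opp_history[-1] if opp_history else "C"

def always_defect(opp_history):
    """The pure free-rider: never reciprocate."""
    return "D"

def always_cooperate(opp_history):
    """The unconditional altruist: cooperate regardless."""
    return "C"

def play(strat_a, strat_b, rounds=100):
    """Play repeated rounds; return each player's total payoff."""
    hist_a, hist_b = [], []  # each records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Over 100 rounds, two tit-for-tat players who establish mutual cooperation each score 300, whereas a free-rider facing tit-for-tat gains a one-round advantage and then gets locked into mutual defection, scoring only 104 (with tit-for-tat on 99). Exposed free-riders thus end up far worse off than mutual cooperators, as argued above.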
The proximate reason for acting in ways that accord with this moral sense is that we feel compassion and altruism towards our fellow humans and want to help them, and so act accordingly. When we have such feelings, we are not (generally) consciously calculating what actions are in our best interests and then acting accordingly. And even though acting in these ways often makes us feel happy and gratified, we are not generally doing so purely to elicit these feelings. It is a fallacy to suppose that acting in ways that are ultimately in our best interests entails that we are doing so out of purely selfish and conscious self-interest calculations, and are therefore not acting morally. If we were making such purely selfish calculations, then this probably wouldn’t accord with what is usually regarded as acting morally, but since that is not what I am describing, the objection is irrelevant. In other words, acting in one’s own interests is a necessary but not a sufficient condition for acting in a selfish and egotistical way.
Secondly, I think that they make an error analogous to that made by people who attempt to dismiss the identity theory of mind because conscious experience just doesn’t seem to them to be the same thing as electrical and chemical states within the physical brain. Even if they are not dualists, these people still believe that there must be something ‘more’ involved than just brain states, as this seems an overly reductive way of looking at things that fails to fully capture what they think consciousness is. However, just as I believe that the best explanation of consciousness is that it just is identical with brain states (but viewed from a first person instead of a third person point of view), I believe that the best explanation of our intuitive moral sense is that it just is identical with certain evolutionary adaptations that urge us to act in certain ways (that we often call moral) that are generally in our best interests in a social setting. Moreover, I think that denying this identity relationship might be an example of the masked man fallacy. I believe that apparent counter-examples, such as the urge to give aid to strangers who can never reciprocate, are just straw men. The urge towards altruism and compassion evolved when our ancestors were living in small groups, where those they acted altruistically towards would be in a position to reciprocate. Very recently in evolutionary terms, the sphere of humanity that we can interact with has expanded globally, but our evolved moral intuition has not had time to catch up (assuming that there would be any evolutionary pressure to do so anyway), so we can easily feel a strong emotional urge to include people within our moral circle even if they would never be in a position to reciprocate our acts of altruism towards them.
One last type of objection that some philosophers make to this type of explanation of our intuitive morality is based either upon gross misunderstandings of the theories involved, or upon fallaciously conflating evolutionary psychology and sociobiology with aspects of social Darwinism and eugenics. Combining this with the belief that any egoistic theory of morality must be false by definition, they then advance unsound arguments that attempt to falsify this evolutionary view of morality. Well known examples of this occurred when the philosopher Mary Midgley argued against straw man versions of the theories in Richard Dawkins’ book The Selfish Gene, and when Steven Rose, Leon Kamin and Richard Lewontin argued against E.O. Wilson’s book Sociobiology: The New Synthesis. I will not discuss this any further here, but Jeremy Stangroom gives a good overview.
If our intuitive moral sense is indeed an evolved one, then this exposes a further confusion amongst some philosophers of ethics, who often use these moral feelings as an arbiter of whether a result from some putative ethical theory or other (e.g. utilitarianism) is valid or not. Whilst a disagreement with our intuitive moral sense is probably worthy of further analysis, such a disagreement should not necessarily lead to a putative theory being ruled out, as all it shows is that the theory in question does not lead to the same result that would historically have been to our evolutionary advantage. Some philosophers are careful to note this type of fallacy in some circumstances – for example, that the intuitive yuck factor that people feel when considering such things as incest should not necessarily make it morally wrong (e.g. if pregnancy can be ruled out through birth control etc.). However, in other cases they still refer back to intuitive moral feelings as the ultimate judge of what is really right or wrong. I think that our intuitive moral sense should be taken as merely a quick and dirty moral rule of thumb, and not the final arbiter of what we actually ought to do, as we are now capable of reasoning our way towards the best answer from the relevant and true facts of the matter – which may be a different answer to the one that our moral intuition would give.
On that note, I will move onto the second of my questions: how ought we to act morally? From my earlier discussion about Kantian ethics, I believe that it is rational for us to act in ways that are generally in our best interests (that maximise our personal utility). And from the discussion of the Prisoner’s Dilemma earlier, I believe that what is generally in our best interests is acting towards others in an altruistic and compassionate way (for example) - so long as they agree to abide by the same rules. Conversely, acting towards others in a selfish and unfeeling or a harmful way (for example) is generally not in our best interests, as people will tend to then act in the same way towards us (as well as there being legal consequences for some types of these actions). More formally, I would argue the following:
P1: We ought to act in ways that maximise our chances of achieving our primary desires and goals [as to do otherwise would be irrational and self-defeating]
P2: Our primary desire and goal is for happiness and flourishing [for evolutionary reasons]
C1: Therefore, we ought to act in ways that maximise our chances of achieving happiness and flourishing
P3: We maximise our chances of achieving happiness and flourishing by acting in certain ways towards others (‘moral’ ways e.g. altruistically and compassionately) and not in other ways (‘immoral’ ways e.g. selfish and harmful ways), as long as they agree to abide by the same behavioural rules [from Prisoner’s Dilemma considerations]
C2: Therefore, we ought to act in certain ways towards others (‘moral’ ways – e.g. altruistically and compassionately) and not in other ways (‘immoral’ ways e.g. selfish and harmful ways), as long as they agree to abide by the same behavioural rules [combining C1 and P3]
Premise P1 should be self-evidently true, as to act otherwise would be irrational and self-defeating. Anybody who would want to act in such irrational and self-defeating ways would probably be out of the scope of our moral considerations anyway.
Premise P2 is debated by philosophers, but it is ultimately an empirical question. I would argue that our emotions have evolved as they have because, ultimately, they help to maximise our survival and reproductive success. Our level of happiness is closely correlated with being in a physical and mental state, and a physical and social environment, that maximises our chances of surviving and reproducing successfully with the best mates. As survival and reproduction are primary goals for all animals, including humans, and happiness is probably directly correlated with them, achieving happiness is probably also a primary goal. In any case, sufficient evidence from biology and psychology should be able to confirm or refute this. Even informally, when we think about what we want and why we want it, it eventually always comes down to desiring to act in ways that make us happy (in a broad sense, not merely in terms of pleasure) – so the premise is very plausible, even before we take into account the empirical evidence from biology and psychology.
Premise P3 is also an empirical question. If our interactions with others in society can be effectively modelled by repeated iterations of the Prisoner’s Dilemma, as research strongly suggests, then the most successful strategy (in terms of maximising personal utility) for us (and others) to adopt is to act altruistically towards others so long as they act the same way in return. As long as the majority of people act this way, it will be in our interests to adopt this strategy too. Not only will it be in our interests because others will tend to reciprocate, but our evolutionary development has also provided emotional payoffs for acting in these ways – so we win twice. Although it might seem tempting to be a free-rider (i.e. taking advantage of the altruism of others but not acting that way towards them), such a strategy is inherently risky. We might believe that we will get away with such actions, but when others discover that we are acting this way (as they probably will eventually), they will refuse to cooperate with us from that point onwards. Iterated Prisoner’s Dilemma simulations show that this is not a winning strategy. Similar considerations explain why we should tend to keep our promises, rather than just breaking them whenever it seems to be in our short-term interests to do so. Of course, in a society that agrees to abide by these rules, there would be legal penalties for certain types of bad behaviour (e.g. murder, theft etc.), which would make acting in those ways even more risky and self-defeating. In any case, sufficient evidence from the fields of sociology and game theory should be able to confirm or refute this, but in the meantime it is at least highly plausible.
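For what it is worth, the validity of the syllogism above (as distinct from the truth of its premises, which I have argued are empirical matters) can be checked mechanically. Here is a minimal sketch in the Lean theorem prover, with ‘ought’ modelled as a bare operator on propositions and the premises taken as hypotheses; the names, and the substitution principle, are my own framing rather than anything from the text:

```lean
-- A sketch of the argument's logical form. 'Ought' is treated as an
-- uninterpreted operator on propositions; 'subst' encodes the extra
-- assumption that equivalent goals may be substituted inside an 'ought'.
example (Ought : Prop → Prop)
    (MaxDesire MaxHappy ActMoral : Prop)
    (subst : ∀ {A B : Prop}, (A ↔ B) → Ought A → Ought B)
    (p1 : Ought MaxDesire)        -- P1: we ought to pursue our primary desires
    (p2 : MaxDesire ↔ MaxHappy)   -- P2: our primary desire is happiness/flourishing
    (p3 : MaxHappy ↔ ActMoral) :  -- P3: achieved by acting 'morally' (conditionally)
    Ought ActMoral :=             -- C2: we ought to act 'morally'
  subst p3 (subst p2 p1)
```

Note that the substitution principle is itself an assumption: it is what allows the factual premises P2 and P3 to transmit the ‘ought’ from P1 through to C2.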
So, in a nutshell, we ought to act in certain ways and not in other ways towards others (as long as they agree to abide by the same rules), as this will ultimately increase our chances of leading happy and flourishing lives. The ways that we ought to act towards people we call ‘moral’, and the ways that we should not act are called ‘immoral’, and these are open to empirical investigation. These are normative, as all people ought to act in these ways if they want to lead happy and flourishing lives, which all rational people should desire. This theory explains why we should be moral (i.e. it is ultimately in our best interests); when we need not treat other people morally (i.e. when they do not treat us morally); and what are the limits of morality (i.e. we need not rationally agree to act in a morally heroic way if others would be unlikely to reciprocate).
Some people will intuitively balk at this sort of egoistic theory of morality. However, as I have shown, this theory just builds upon our already existing innate moral sense. So, it gets the advantages of generally recommending ‘moral’ acts that we would be intuitively inclined to think of as such anyway (compassion, altruism, empathy etc.), and opposing those that we wouldn’t be (selfishness, lying, killing etc.), but then improves upon that sense by acquiring relevant and true facts of the matter and then reasoning accurately based upon these facts.
Back To Moore and Hume
Now, having given an outline of why I believe that we should be moral, I will return to Moore’s Open Question Argument and Hume’s is-ought gap.
Firstly, the Open Question Argument. Is moral goodness identical to acting in ways that maximise our chances of achieving happiness and flourishing? From an a priori perspective these are not identical, as the identity can be questioned by a conceptually competent person, so my definition would appear to fail the Open Question Argument. However, I believe that this definition of goodness is relevantly similar to the case of water being identical to H2O that I discussed in my answer to the first question earlier. From my syllogistic argument above, I believe that we can determine a posteriori that moral goodness and acting in ways that maximise our chances of achieving happiness and flourishing are synonymous. This identity cannot be deduced from the concepts alone, as it requires us to know additional facts about the world, and so it can be questioned by a conceptually competent person; but, as with water and H2O, that does not prevent it from being true. So, I believe that my definition survives Moore’s argument (although he would have classed it as failing, as it is a complex definition, but I explained in question 1 why I think he would be mistaken).
Now, does my argument fail Hume’s is-ought gap? From my argument above, premises P2 and P3 are factual premises that are either objectively true or false; in either case, they will be justified or refuted by reference to empirical data. However, I am not moving from purely factual premises to an evaluative conclusion, as premise P1 fulfils the evaluative premise requirement. Moreover, even though it is an evaluative premise, I would argue that premise P1 is nevertheless normatively true, as it can only be denied at the risk of irrationality. Therefore, I believe that my definition of moral goodness does not fail Hume’s is-ought gap argument.
So, in summary, I believe that contrary to Wielenberg I have provided a definition of morality that passes both Moore’s Open Question Argument and Hume’s is-ought gap argument, and provides warranted reasons for us to be moral.