Sunday, March 16, 2008

Moral relativism

Moral relativism is a stance that is popular in some (particularly academic) circles, but a little thought demonstrates how problematic a position it is to defend. According to moral relativists, there is no objective moral truth, only truths relative to social, cultural, historical, or personal circumstances. However, the moral relativist now has to swallow some pretty unpalatable conclusions, or else explain why they cannot legitimately be deduced from the premises of moral relativism. For example: slavery was morally right in the American Deep South; killing Jews was morally right in Nazi Germany; the Stalinist purges were morally right in Communist Russia; the inclinations of certain US leaders to impose their views upon other countries by force are morally right for them; and the oppression of women in Iran, Saudi Arabia, and Afghanistan is morally right for them.

In fact, if we dig a little deeper here, we can expose some more problems with moral relativism. Firstly, how exactly do we determine the society, culture, or group that the morality is true for? For example, we might suppose that the oppression of women is moral in Iran, but what about the women being oppressed – don’t they count in our calculations? In fact, a culture or state is a transient and mutable thing – a set of traditions, religious and political ideologies, and individual, tribal, and group power struggles. If a culture or state is oppressive, patriarchal, or tyrannical, there is no reason why its citizens should be forced to endure it. In effect, we would be defining the correct morality for that society based upon what the powerful wish to impose upon the less powerful. It should be further borne in mind that nobody chose to be born into a particular culture and state. It was purely chance that governed where they were born, and they shouldn’t be condemned to a miserable life under some totalitarian regime because of this.

Following on from this, defining the correct morality in terms of the society or culture has the effect of rendering morally wrong anybody fighting for moral change within that society. So, according to this view, the abolitionists were wrong, since they were fighting against the accepted societal morality of their day, and the suffragettes were wrong for the same reason. In fact, it would seem that only a non-relativist can be truly tolerant, since they can hold it as an objective property of any good morality that tolerance must be enshrined. By contrast, the relativist has little choice but to accept intolerance as moral for any society in which it is normative. A point usually lost on relativists is that if our morality is such that we wish to impose universal freedom and equality upon other societies, then that morality is true for us.

I think that these examples expose moral relativism for the absurdity that it is. As a system of ethics, I don't think that it gets off the ground. The moral relativist and I would concur that there is no absolute moral truth. However, I would go on to reject the idea that moral truth exists in a relativistic sense either. Nevertheless, in the absence of an absolute moral truth, I believe that, under almost any rational definition of morality, it is still possible to say that some moral systems are better than others. Hence, our task is to discriminate between the possible candidates, and try to determine what characteristics a moral system should possess in order to be a good one. Oppression, intrinsic inequality, and support for wholesale slaughter of the innocent are unlikely to be part of any such good system, and should not be excused because they are present in some other society or culture.

In the past, Westerners have tried to impose upon others by force systems of morality that have left much to be desired (they have usually been religious moralities). This was wrong, but it doesn't mean that we are now wrong to judge bad moral systems in other societies.

The Divine Command Theory of Morality

I think that the divine command theory of morality suffers from a number of problems that, cumulatively, render it fatally flawed:

Does God exist?

A serious objection to the divine command theory of morality is that it assumes God’s existence. If God’s existence can be questioned, then the argument for a divine command theory of morality is undermined. In fact, I would contend that not only are we entitled to doubt the existence of God, but that we can go much further and argue that the case for the existence of God (with all of its very particular properties and desires), is very weak – although I won't elaborate upon that any further here.

If God doesn’t exist, then we have no mandate to follow the moral rules attributed to God for that reason alone. We might still decide to incorporate some of them into our moral framework, but they must stand or fall upon their own merits, not because they are supposed to be commands issued by God.


What makes God’s commands morally good?

If, for the sake of argument, we assume that God exists, then we are entitled to ask why the morality commanded by God is good. When considering this question, we might usefully refer to Plato’s famous Euthyphro dilemma: is what is moral commanded by God because it is moral, or is it moral because it is commanded by God?

In the former case, we can just help ourselves to this morality without reference to God at all, as it makes God merely the passer-on of a morality that exists independently of him. It also leads to a number of further questions that the theist needs to answer. For example, where did this morality come from in the first place, if not from God? Did it exist before the universe? If so, does it make any sense to conceive of some abstract or idealised concept of morality existing before the universe? If this morality is somehow independent of God, then how does God know that it is necessarily good, if he has no absolute moral yardstick against which to measure it?

Alternatively, the theist may take the second option. In this case, God could have commanded murder to be morally good, for example, and it would be so - since the command comes from God. In this case, the theist then has two further options. They may just bite the bullet and accept that murder would indeed have been morally good had God commanded it so. However, few are willing to subscribe to this view, as it contradicts our most basic concepts of what constitutes moral behaviour. On the other hand, the theist may contend that God would never have commanded murder to be a moral rule as God, being perfectly good, would only command what is good. However, this is just circular reasoning, and doesn’t deal with the dilemma, as we can rephrase it to deal with this objection i.e. is God good because to be good is just to be whatever God is, or is God good because God has all the properties of goodness? We are back in the same situation again with goodness either being arbitrary, or existing independently of God.


Why should we obey God?

Again, if for the sake of argument we assume God’s existence, then why should we obey God’s commands anyway? If God were to command us to torture and murder others, then would we be obligated to obey such commands, even though they go against our common moral sense?

Of course, the theist may contend that God would not command us to do such things, as God is perfectly good. However, there are at least two problems with this reasoning. Firstly, there are plenty of examples in the Bible of moral atrocities supposedly being commanded by God, so the precedent clearly exists. Furthermore, some of the moral rules attributed to God would seem, according to our common moral standards, to be cruel and disproportionate, in that they specify a penalty of death for such supposed crimes as blasphemy, picking up sticks on the Sabbath, being a witch, and talking back to one’s parents.

Of course, some Christians might object that the New Testament gives a far kinder message, as preached by Jesus, and describes the new covenant between God and man. However, the New Testament still contains various odious rules and strictures, a command to continue obeying the rules of the Old Testament, as well as the introduction of the concept of eternal punishment for a variety of supposed sins. Moreover, in some Christian worldviews, the avoidance of hell comes by accepting Jesus as one's saviour - which therefore bars entry to heaven to all those who existed before Jesus lived, who don't hear his message for any other reason, or who have heard his message but have chosen to reject it in favour of some other belief system or of none. Furthermore, the whole concept of the New Testament preaching a kinder message implies that God's personality or teachings have changed from those described in the Old Testament, or that he has changed his mind. But surely this cannot be so, as it would imply some sort of moral development or improvement on God's part, yet he is by definition omniscient, omnipotent, and morally perfect. As such, how can he improve, and for what reason could he ever change his mind, since he already knows all there is to know?

Secondly, how can we ever know that something is good just because it is commanded by God, since we can never know that God is perfectly good? We cannot take God’s word for this, as it merely begs the question. And just defining God as perfectly good doesn’t settle the matter either, as I would contend that God’s existence and characteristics are synthetic propositions, not analytic ones, as we are dealing with aspects of the real world rather than just formal logic.

Furthermore, if somebody alleges that God has given them some moral command or other, or detailed a list of commands that we should all follow, then this too is highly problematic. How are we to know that this person is not deluded or lying? Moreover, assuming that the person in question truly believes that God has given them this moral command, then how can they themselves ever know that they were not just imagining it? Furthermore, even if they did actually receive some genuine communication, how can they or we ever know that this communication emanated from God, and not from some other entity – supernatural or otherwise?

For the foregoing reasons, I don’t think that anyone is entitled to evade moral responsibility for their actions by arguing that they are merely following a command from God, even if this were actually true.


What are God’s commands?

The moral and ethical guidelines in the Bible are often contradictory, and are open to multiple interpretations. So, how are we to determine exactly what morality is espoused therein? In light of this, we might also ask why such supposedly important messages for humanity were not communicated in a clear and unambiguous fashion - one that is not open to multiple and often conflicting interpretations.

Of the moral guidelines that are more clearly expressed in the Bible, some deal with matters that would seem to be unimportant or neutral when it comes to a moral life - for example, the rules against eating shellfish and against wearing clothes of mixed fibres.

Theists might attempt to gloss over the multiple inconsistencies in God’s moral law, as detailed in the Bible, and interpret it in order to render it more agreeable to our current moral sensibilities. However, if we decide to ignore or reinterpret certain of God’s moral rules (killing blasphemers, Sabbath breakers, witches, and disobedient children; not eating shellfish; not wearing mixed-fibre clothes etc.), but to follow other rules, then we have either made an arbitrary choice or else, more probably, we are making our choice based upon some other moral yardstick that is independent of the Bible. If it is the former, then it would seem to be a rather unreliable way to build a moral framework. If it is the latter, then we might as well reason our way to this independent morality without reference to the Bible at all.


Are God’s rules necessary for a moral society?

One might argue that, even if we doubt God’s existence, society should still adhere to religious morality, as it will lead to greater moral health. However, it is rather conspicuous that many societies have or have had ethical systems that are not based upon some divine command theory, without any consequent moral bankruptcy. For example, the Ancient Greeks had a very well developed secular ethical system (they condoned slavery, but then so did Christian societies). Even though the Greeks had their own gods, these gods apparently did not concern themselves with dictating moral laws.

Further, there is very good empirical data showing a strong positive correlation between a society’s level of religiosity and the prevalence of all sorts of ills – crime, illiteracy, mortality rates etc. For example, many Western European societies are amongst the least religious in the world, yet they can be seen to be in good moral health by almost all of the relevant markers. This doesn't necessarily imply a causal relationship, but it does undermine the theory that a society’s moral health is dependent upon it being religious.

Some theists argue that modern Western society is in moral decline, and correlate this with an increasing lack of religiosity. However, this is a fallacious argument. By almost any marker, Western societies are more moral than they ever were when they were more religious. They are freer, more equal, less violent, more compassionate, and so on. What the theists are doing when they make this type of argument is confusing such societal tendencies as greater consumerism (which I would argue can be morally negative, neutral, or positive, depending upon the specifics) with moral decay.

The theist may also assert that atheists or secular societies that they would consider to actually be moral are only so because they are living off the moral capital built up by a previous religious tradition. However, this argument seems to be an entirely ad hoc one, with no good supporting evidence ever being produced.


Who is the more moral?

We might reasonably ask who the more moral person is: the one who acts morally out of a selfless desire to treat others decently and compassionately, or the theist who does so in blind adherence to divine laws, or for prudential reasons - in order to gain a reward and avoid a punishment from God?


Where does morality come from?

So, if morality doesn’t come from God, then where does it come from? I think that we have an innate sense of morality, and that this is derived from our evolutionary heritage. Our cultures and intellects have given us additional moral drivers, and modified existing ones, but I think that the roots of our morality are evolutionary. When our primitive ancestors started to form groups, their chances of survival were increased by acting in certain cooperative ways, and decreased by acting in others. Clearly, those who acted in the ways conducive to survival were more likely to have offspring and to pass on their genes. Hence, tendencies towards such things as reciprocal altruism were selected for by evolution.

For example, the so-called Golden Rule clearly has a survival advantage for each member of a group if all group members adhere to it. In fact, it has been demonstrated in game theory that a strategy giving excellent results for individual players is one in which a player initially cooperates with another player, but thereafter copies that player's last action in a tit-for-tat fashion – either cooperating or not. The game-theoretic setting in which this has been studied is the iterated Prisoner’s Dilemma. Lo and behold, examples of this type of behaviour are indeed witnessed in nature, and can quite clearly be seen to be precursors to our own morality.
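To make this concrete, here is a minimal, hypothetical sketch in Python of the iterated Prisoner’s Dilemma. It uses the conventional payoff values (3 each for mutual cooperation, 1 each for mutual defection, and 5/0 when one player exploits the other); the strategy names, function names, and the 200-round length are my own illustrative assumptions rather than anything taken from a particular study.

```python
# A minimal sketch of the iterated Prisoner's Dilemma (illustrative only).
# Payoffs use the conventional values: mutual cooperation pays 3 each,
# mutual defection 1 each, and a lone defector gets 5 while the cooperator gets 0.

PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Never cooperate."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Play an iterated game and return the cumulative scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Two tit-for-tat players settle into permanent mutual cooperation;
    # against an unconditional defector, tit-for-tat loses only the first round.
    print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))
    print("TFT vs ALWAYS-D:", play(tit_for_tat, always_defect))
```

With these assumed payoffs, two tit-for-tat players score 600 each over 200 rounds, while tit-for-tat facing the unconditional defector concedes only the opening round (199 against 204) - the kind of stable, retaliatory reciprocity that the evolutionary story appeals to.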

That our innate morality is based upon our evolutionary heritage doesn’t mean that we shouldn’t seek to improve upon it. Science helps to explain what is the case, but not what ought to be the case. For this, we need to apply the methods of reason.


Can we have a secular morality?

If we are to dispense with religious morality, then what can we put in its place? Three major types of secular morality have been devised: Virtue Ethics, Kantian Ethics, and Utilitarianism. Each takes a different approach - emphasising virtue and flourishing, duty, and consequences of actions respectively, and each has its own strengths and weaknesses. Perhaps none is sufficient to take as a complete moral framework, so we may need to attempt to combine the best aspects of each in a way that doesn’t lead to contradictions and internal inconsistencies.

One big advantage that all of these secular moral systems have over the divine command theory is that none inherently entails a dogmatic and blind adherence to its moral strictures. Each offers a set of moral objectives or meta-rules but, as they were arrived at by means of evidence, reason, and reflection, they are theoretically open to revision and improvement. By contrast, divine commands are intrinsically dogmatic, since they were supposedly handed down by God, and they are thus strongly resistant to change. The result of this is that such moralities still incorporate many ancient rules that reasoned analysis would now class as either morally neutral (e.g. eating shellfish), or as unenlightened and bad (such as enshrining the inequality of women).

Furthermore, in light of the previously discussed flaws with the divine command theory of morality, I think that stipulating death for victimless crimes like the breaking of trivial ad hoc rules or for professing unbelief in God is in itself immoral. So, I would argue that for these reasons alone any of the secular moralities mentioned above is better than the divine command theory.

Conclusion

I would argue that some of the supposedly divine moral commands are merely parasitic upon our innate sense of morality, whilst others are at best irrelevant to a good morality and, at worst, in direct opposition to it.

I would further argue that the case for a divine command theory of morality is rendered fatally flawed because God’s existence is in clear doubt, because any such system of morality would either be arbitrary or independent of any God, because we should not just blindly follow such rules, and because the rules are ambiguous and inconsistent anyway.

I don’t think that any absolute and true divine system of morality exists (or any other absolute moral system, for that matter), so we don’t have the misplaced certainty of the theist. Instead, we need to build upon our innate sense of morality by devising a secular morality through the investigation of evidence and the application of reason. This task will not be easy, and much more work needs to be done, but I think that this is the best way forward in the quest for a good morality, as morality does not depend upon God.

Friday, March 07, 2008

Induction problem and God

We use inductive reasoning when we reason from a few examples to a generalisation. For example:

· We have observed 1000 swans to be white
· Therefore, all swans are probably white

Or

· The sun has risen every day throughout recorded human history
· Therefore, it will probably rise tomorrow morning.

However, there is a fatal flaw in this type of reasoning, as the philosopher David Hume pointed out in the 18th century. We can never know for sure that a conclusion reached by inductive reasoning is true. In the first of my examples, we would have to observe every swan in the world in order to be completely sure of the validity of the conclusion. In fact, black swans do exist, so the conclusion is actually false. In the second and more interesting case, we can only draw this conclusion by insisting that the laws of nature will remain the same tomorrow as they have been in the past (I am here ignoring the possibility that the sun will be destroyed by some event consistent with current natural laws). But how can we know this? The natural answer is that they have always been the same in our experience, so it makes sense that they should be the same tomorrow as well. However, this is itself an inductive argument – reasoning from observations in the past to those in the future. And we can’t justify induction by recourse to induction, as that is just circular reasoning.

So, we seem to have a predicament. According to Hume’s problem of induction, we have no rational reason to believe that the sun will rise tomorrow just because it has always done so in the past. This is not the weak claim that the sun will probably rise tomorrow, with merely a small probability that the laws of nature might suddenly change so that it will not. Rather, Hume is saying that we cannot conclude that it will rise tomorrow based upon previous observations of it rising at all - as there is no reason whatsoever to suppose that the laws of nature will not suddenly change. This is a particularly acute problem for science in general, as it relies heavily on inductive reasoning to generalise from a few observations to theories and laws of nature.

How might we attempt to resolve this problem? A.J. Ayer held (in his book Language, Truth and Logic) that the problem of induction was actually a fictitious one, since there is no solution to it, and all genuine problems are at least theoretically capable of being solved. According to him, if we take the principle of induction to be a tautology, then we cannot deduce matters of empirical fact from it. On the other hand, if we try to justify it empirically, then we assume the very thing we are setting out to prove.

It seems to me that we have no entirely satisfactory solution to this problem, but we may attempt solutions along the following lines. Firstly, we might say that we have no choice but to take induction as an epistemic practice that is in need of no further justification. A slightly more satisfying solution comes from utilising the ideas of reliability that I have discussed previously. We might say that we are justified in using induction because induction is a reliable way of forming true beliefs, in the same way that perception is a reliable way of forming true beliefs. Of course, this then becomes a circular argument, as induction's reliability up until now doesn’t guarantee its reliability in the future, unless we assume that inductive reasoning is valid.

Karl Popper attempted a solution based upon his idea of falsification. He proposed that science does not in fact evolve by means of inductive reasoning but, rather, by means of the falsification of theories. Popper held that, to be considered properly scientific, a theory must be capable of being falsified. Under this scenario, we can avoid inductive reasoning by relying instead upon modus tollens (e.g. if all swans are white, then we will not find any black swans; we have found black swans; therefore, not all swans are white). Critics would say that science does not in fact evolve this way, relying instead upon inductive reasoning.
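Schematically, the falsification step is simply an instance of modus tollens. Here is the swan case written out in standard logical notation (P and Q are just my labels for the two statements):

```latex
% Modus tollens applied to the swan example (requires amsmath)
\[
\begin{aligned}
  & P \rightarrow Q      && \text{(if all swans are white, then no swan we examine will be black)} \\
  & \neg Q               && \text{(a black swan has been found)} \\
  & \therefore\, \neg P  && \text{(therefore, not all swans are white)}
\end{aligned}
\]
```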

Another possible solution is by recourse to Occam’s razor. We might reason that the hypothesis that the laws of nature will change in some way such that they invalidate our previous conclusions based upon inductive reasoning is a less parsimonious hypothesis than the one that they will just remain the same, as it involves the introduction of additional ad-hoc elements. This approach also seems to be applicable to Nelson Goodman’s idea of grue, in which something is grue if it is observed to be green before time t, and blue thereafter. This seems less parsimonious than the idea that it would just remain green – not because we are defining grue in terms of green and blue, which seems intuitively less simple, but because we are specifying some arbitrary change in one of its properties at time t, as opposed to no change at all.

In the end, perhaps the least unsatisfactory solution is a pragmatic one. Even though we perhaps lack a complete justification for inductive reasoning, it is still (at least indirectly) rational for us to continue to use it, as we want to form true beliefs about the world and, of any method of inference, induction is the one that will maximise the number of true beliefs about the world that we will obtain and minimise the number of false ones.

An interesting question I saw raised is whether this has any implication for arguments to disprove the existence of God. If they rely upon inductive reasoning, are they equally vulnerable to the problem of induction?

Well, perhaps, but fortunately many of the arguments against God’s existence rely upon deductive reasoning, which is entirely different. These are arguments of the form: all dogs are mammals, Rover is a dog, therefore Rover is a mammal etc.

In deductive reasoning, there is a distinction between a valid deductive argument, and a sound one. A valid deductive argument is one that is correctly formed so that the conclusion follows inevitably from the premises. There is no necessity for the premises to actually be true, merely for the conclusion to follow logically from them. For example:

Premise 1: all dogs are three-legged animals
Premise 2: all three-legged animals are from Mars
Conclusion: therefore, all dogs are from Mars

This is a valid deductive argument, despite the fact that both premises (and the conclusion) are clearly false.

By contrast, a sound deductive argument is one in which we have the additional constraint that the premises are true. For example:

Premise 1: all dogs are mammals
Premise 2: no mammals are animals with scales
Conclusion: therefore, no dogs are animals with scales

How about deductive arguments to disprove God’s existence? Can I formulate one that is sound? Theists in particular might say that the premises in any such argument are false, as we don't have a clear-cut situation as in my examples above. However, in the end it comes down to plausibility and degree of reasonableness. Can I construct valid deductive arguments that disprove God's existence, and which are plausible? Well, for example, I would say that this argument is valid and plausible, but not necessarily sound:

Premise 1: if the Christian God exists, with the usual properties of omnipotence, omnibenevolence, and a particular interest in human beings, then he would not allow any more than the absolute minimum amount of evil possible (natural and man-made) to exist in the world (this follows from God's properties)
Premise 2: the amount of evil that exists in the world far exceeds this absolute minimum (from the empirical evidence)
Conclusion: therefore, the Christian God doesn't exist

Here is another one:

Premise 1: if a merciful and compassionate God wants us all to be saved (as many Christians believe), then he would provide clear and unambiguous information about his message to all humans, as this is necessary for salvation (by definition)
Premise 2: this clear and unambiguous information is not provided to all humans (from the empirical evidence)
Conclusion: therefore, such a God does not exist

These are valid deductive arguments, and I would further contend that they are plausible ones too. Of course, whether they could be classified as sound depends very much upon one's worldview. This is a consequence of the fact that we know of no knockdown argument to disprove God's existence (but the same applies equally to the arguments for God's existence). It all turns upon degrees of plausibility and reasonableness, and I find the valid arguments that seek to disprove God's existence to be far more plausible and reasonable than those that seek to prove it. The weight of plausible arguments against God's existence also forms a cumulative case.

Now, one may take issue with either or both of the premises in my arguments above, but they are nonetheless valid deductive arguments. That is, if we accept the premises, then the conclusions must be true. Moreover, any argument that attempts to disprove the existence of God by deducing what predictions are entailed by the God hypothesis, and then finding these predictions unfulfilled, has proceeded by falsification. It is therefore a deductive argument that doesn't suffer from the induction problem.
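As an aside, the validity of the first argument's form (as distinct from its soundness) can even be checked mechanically. Here is a minimal sketch in Lean 4, where the proposition names are simply my own labels for the premises, and the proof term is just modus tollens:

```lean
-- A sketch of the logical form of the argument from evil (Lean 4).
-- This demonstrates validity only: the premises are taken as hypotheses, not proved.
example (GodExists OnlyMinimalEvil : Prop)
    (premise1 : GodExists → OnlyMinimalEvil)  -- if such a God exists, only minimal evil exists
    (premise2 : ¬ OnlyMinimalEvil)            -- the world contains more than the minimal amount of evil
    : ¬ GodExists :=
  fun h => premise2 (premise1 h)              -- modus tollens: accepting both premises forces the conclusion
```

If Lean accepts the proof, the conclusion follows from the premises; whether the premises are true is, of course, exactly where the worldview-dependent debate lies.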

Reductionism

I think that one of the problems many people have with the attempt to explain the world using science is the issue of reductionism. Many see reductionism as an attempt to take all of the emotion, beauty, and mystery out of the world, and replace it by cold, mechanical, and simplistic scientific explanations.

To answer this, I would like to draw a distinction between useful or good reductionism, and useless or bad reductionism. For the scientific endeavour, reductionism is essential, as the natural world is far too complex for us to fathom otherwise. We need to attempt to reduce complex systems of interactions to simpler or more fundamental things, create hypotheses based upon these simpler explanations, deduce and test the predictions of these hypotheses etc. Without reductionism, science would be too difficult for us to do. So, where reduction allows us to make progress in the scientific endeavour, I would classify it as good reductionism.

By contrast, we might try to understand the causes of Islamic extremism by reducing everything to the interactions of fundamental particles. This would clearly be a hopeless task, and would be highly counter-productive. Whilst I think that all macro phenomena probably can be reduced to the micro (notwithstanding the mooted issues with emergentism), there are many cases when this makes no sense at all, as it adds nothing to our understanding, and makes the task more difficult instead of less. When analysing some aspects of the world, such as the societal, religious, and ideological drivers that lead to extremism, we do much better to stick to macro explanations.

So, we have different levels of explanation that are appropriate in different circumstances. Having said that, I think that we are often not in an either/or situation with regard to reductionism. For example, when looking at the human emotion of love, I think that the reductionist approach (looking at the evolutionary and biological aspects of human emotion) and the non-reductionist approach (experiencing love, reading romantic novels, poems, sonnets etc.) actually complement each other. That is, I think that the greatest understanding of love comes from looking at all the useful levels of explanation, and not just concentrating on one. Generalising this, I think that we need to know when reductionism is useful in understanding the world, and when it is not, and seek enlightenment accordingly.

Art appreciation and evolution

The prerequisites for evolution are reproduction, mutation, and competition for limited resources. Most mutations are neutral or harmful, and the harmful ones will tend not to be passed on to offspring. However, occasionally a mutation will occur that confers some advantage upon the organism in question. This advantage will make the organism better able to compete for resources (food, mates etc.), and therefore give it a better chance of producing offspring.

In evolutionary terms, advantage is often conferred by a greater awareness of one's environment through sensory apparatus, and an increased ability to reason about and act upon this sensory data. Even a slight improvement in these areas will likely give some survival advantage to an organism, and increase the chance that this will be passed on. After millions of years of trial and error, and incremental increases in the brain's ability to interpret and reason about sensory inputs, evolutionary pressures finally led to a complex brain with a feedback loop - i.e. it became self-aware. This seems to have gone hand-in-hand with the development of language, and the two developments may have fed off each other. In fact, the development of language may be an essential component in the development of consciousness.

What is certain is that this development of consciousness and language gave us a huge evolutionary advantage - which is how it evolved in the first place. We now have far greater flexibility to move beyond our genetic programming, to reason and reflect, and to communicate our thoughts and ideas to others. We can therefore work together on problems, and build upon the thoughts and ideas of our ancestors and contemporaries. This has led to an exponential increase in our knowledge and understanding of the world.

But how is all of this relevant to our appreciation of art? Well, I would suggest that the appreciation of beauty did in fact evolve in animals because it conferred an advantage upon them. Furthermore, it seems to me that the precursors of this are still likely to be found in lower animals today. For example, I would speculate that there is a sense in which a bee finds a flower to be 'beautiful'; female birds find the songs, displays, or nests of the best males to be beautiful; and giraffes find a verdant landscape with lots of edible foliage to be beautiful. These responses exist because they attract animals to the best environments and food for survival, and to the most promising mates. The same types of responses exist in us too.

Of course, our greater intelligence, language, and social nature led us to develop cultural artefacts and diversity that go way beyond anything found in other animals. As part of this, we have developed many different ways in which to stimulate and extend our innate appreciation of beauty. I believe that art and music are two of the offshoots of this mixture of our evolutionary and cultural heritage. As with many aspects of our current behaviour, I think that the roots are most likely to be evolutionary, but that our cultural development has built upon these roots in ways that sometimes make the original evolutionary drivers far from obvious.

Of course, art (including theatre, film, music, painting etc.) encompasses far more than just beauty. Nevertheless, I think that most works of art aim to stimulate or provoke some emotion or other in those experiencing them, and those emotions arose in the first place because they gave an evolutionary advantage.

For example, I think it’s fairly obvious that anger would give animals an advantage when fighting over a mate or territory; fear would give them an advantage when confronted with danger; passion draws them to a mate, and love helps to keep the pair together for the benefit of the offspring; inquisitiveness and adventurousness would help them look for new and better places to live, and ways of doing things; feelings of friendship, empathy, altruism, camaraderie etc. are good survival traits for group animals such as ourselves, and so on.

We clearly get enjoyment and stimulation from suspending disbelief and invoking these emotions through artificial means, as evolution has honed us to respond strongly to emotional triggers. Furthermore, in more recent times I think that there would likely have been other survival advantages accrued from passing on important information through stories, songs, paintings etc.

I don't think that art appreciation itself gives an adaptive advantage. Rather, I think that art appreciation is a side-effect of our emotions in general, and of our attraction to 'beauty' in particular (and the full gamut of art is a cultural outgrowth of this). It is the possession of these emotions that gives an incremental evolutionary advantage, and that is the reason we possess them today.

Here is an interesting article on this subject:

http://steamthing.com/artistic1.html

Is Astrology a Science?

How might we go about deciding what constitutes a science and what doesn’t? The philosopher of science Karl Popper wrote the following about how to demarcate science from non-science (including pseudoscience):

[T]here is another special kind of boldness – the boldness of predicting aspects of the world of appearance which so far have been overlooked but which it must possess if the conjectured reality is (more or less) right, if the explanatory hypotheses are (approximately) true. It is this special kind of boldness that I have in mind when I speak of bold scientific conjectures. It is the boldness of a conjecture which takes a real risk – the risk of being tested, and refuted; the risk of clashing with reality.

Thus my proposal was, and is, that it is this second boldness, together with the readiness to look out for tests and refutations, which distinguishes ‘empirical’ science from non-science, and especially from pre-scientific myths and metaphysics.

I will call this proposal (D): (D) for ‘demarcation’. (Karl Popper ‘The Problem of Demarcation’)

At the heart of Popper's proposal is the concept of falsifiability (he also saw this as the way around the problem of induction). He thought that the demarcation between scientific and unscientific theories should be based upon whether the theory in question is falsifiable or not - with falsifiable theories being considered scientific, and non-falsifiable theories unscientific. Popper's proposal excludes from the domain of science not unfalsifiable statements, but whole theories that contain no falsifiable statements.

Moreover, to be properly falsifiable, a theory should make clear, unambiguous, and bold statements that can be compared empirically against reality. If the theory’s predictions are vague and equivocal, then it will be difficult to falsify, since it is not clear what would constitute a failed prediction. Equally, if a theory merely predicts things that we already know to be true, then we have no good reason to favour this theory over any other that merely predicts the same observations.

For example, Einstein’s Theory of General Relativity predicts that the Earth will orbit the sun in an elliptical orbit. But this was already predicted by Newton’s theory of gravitation, so we have no reason to prefer Einstein’s theory over Newton’s existing one. However, Einstein’s theory also predicted that massive objects bend light – something that Newton’s theory does not predict, and which had not previously been observed. This bending of light was duly observed during an eclipse in 1919, and the fact that relativity’s bold prediction was not falsified counted strongly in its favour.

It should come as little surprise that physics, for example, would be classified as a science according to Popper’s proposal – since Popper no doubt formulated it with this in mind. Theories in physics generally make plenty of clear, unambiguous, and bold statements that are open to falsification. Whilst attempts would be made to rescue theories in the light of anomalous observations, these attempts would often make the theory even more falsifiable, not less. Theories that consistently make false predictions would be rejected in favour of better theories that don’t.

Now let’s have a look at what Popper’s proposal would have to say about astrology. Would it qualify as a science? Popper himself gave this as an example of something that clearly failed his falsifiability criterion. There are two problems with falsification in astrology. Firstly, the predictions that it makes are often sufficiently vague or ambiguous that it is very difficult to establish what would constitute a failed prediction. For example, tests have been carried out in which professional astrologers have been asked to cast a horoscope for a specific person. This reading has then been given to a large and disparate group of other people, each of whom was told that it was a reading done specifically for them. In almost all cases, the subjects who were given the horoscope rated it as highly accurate for them. This is because the types of statements made in such horoscopes are sufficiently ambiguous and open to interpretation that they can be made to fit many diverse people.

Secondly, if we do somehow manage to pin down astrological predictions sufficiently unambiguously to carry out a rigorous empirical test in an attempt to falsify it, then failed predictions don’t seem to be accepted as such by the astrologers in general. It might be reasonable to ignore or explain away a certain amount of anomalous data, as science does, but the astrological community seems prepared to accept nothing as constituting a falsification of their theory. So, it is also unfalsifiable in practice.

Let’s look at an example. Imagine that I am a Sagittarius, but that I don’t have typical Sagittarian personality traits. Would this qualify as a means of falsification? I think that astrologers would give little weight to this counter-example, as they would likely say that one’s birth sign gives nothing more than a tendency to have certain personality traits. There will always be cases of people not following that tendency, they would say. So, I think that this example is a very weak one, and really counts for little. Astrological theory allows for this kind of anomalous data, so it would certainly not be accepted in the astrological community as constituting some sort of a falsification.

How about a much stronger example? Astrological theory says that the accuracy of its predictions increases dramatically when we consider the personalities and lives of people born at almost the same time. Accordingly, there was a well-known and rigorous study done of 2000 people born within a few days of each other (70% born within 5 minutes of each other), in which their personalities and lives were analysed statistically to look for the types of correlations that would be predicted by astrology. However, no such correlations were found. A sure falsification, you might think? No such thing, I’m afraid – the astrological community ignored the findings or attempted to explain them away by ad hoc means. Within the vague and equivocal framework of astrology, I can think of no clearer falsification of astrological theory than this (although there are plenty of other studies with similar results), but the astrological community refused to accept it as such. So, even when it is possible to pin down the types of vague and ambiguous predictions made by astrology, and show them to have failed, the falsification is rejected. Therefore, falsification is impossible in practice within astrology.
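To illustrate the kind of statistical check at stake (purely a hypothetical sketch using randomly generated numbers - it is not the study's actual dataset or methodology), one could test whether people born closer together in time have more similar personality scores than people born further apart, which is the correlation astrology predicts:

```python
# Hypothetical "time twins" correlation check with made-up data (illustration only).
# Astrology predicts that smaller birth-time gaps should go with smaller
# personality differences, i.e. a positive correlation between the two quantities.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 2000
birth_minute = np.sort(rng.uniform(0.0, 3 * 24 * 60.0, size=n))  # birth times spread over a few days
trait_score = rng.normal(0.0, 1.0, size=n)                       # standardised personality score

# Compare each person with the next person born after them.
time_gap = np.diff(birth_minute)            # minutes between consecutive births
trait_gap = np.abs(np.diff(trait_score))    # difference between their trait scores

r, p_value = pearsonr(time_gap, trait_gap)
print(f"r = {r:+.3f}, p = {p_value:.3f}")   # random data gives r close to 0, i.e. no correlation
```

The real test is, of course, run on measured personalities rather than random numbers; the point of the sketch is simply that astrology's prediction here is precise enough to be checked empirically, and the study found nothing.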

So, to summarise, we have two problems with falsification in astrology. Firstly, the theory is sufficiently vague and imprecise that it is difficult to frame clear, unequivocal, and bold tests that would allow us to falsify astrology. Secondly, any reasonable attempt at such a falsification, as in the study cited above, will not be accepted as a falsification. So, in practical terms, astrological theory has been rendered unfalsifiable. This is in clear contrast to science. Whatever science’s epistemological limitations, scientific theories do almost always make clear and unambiguous predictions that can and do allow for their falsification. For example, as the biologist J.B.S. Haldane said, evolution would be disproved by the finding of "Fossil rabbits in the pre-Cambrian". This is a little simplistic, as scientists would naturally and justifiably look for alternative explanations for this anomalous data (justifiable in this case as evolutionary theory is supported by a huge amount of data, unlike astrology). However, there would come a point at which scientists would just admit that their theory is wrong. Nothing analogous ever seems to happen in the world of astrology.

Whilst scientists might and do resist such falsifications, the history of science shows countless examples of scientific progress through theories being falsified and replaced by better ones. So, the scientific method is a truth-seeking one that progresses despite the limitations of individual scientists and of other epistemological debates. By contrast, astrological theory has remained almost set in stone since Ptolemy’s day, despite huge problems with its methodology (no proposed physical mechanism for these supposed planetary influences, the problem of the precession of the equinoxes etc.) and its predictions. In short, astrology clearly fails Popper’s criterion of falsifiability, and would therefore not be classed as a science.

But, at this point an interesting question to ask is whether falsifiability alone is sufficient to allow us to demarcate science from non-science. One problem with this idea is that science often seems to progress by verification, rather than falsification - scientists aren’t always looking to falsify theories but, rather, to verify them.

Nevertheless, I think that it is a fundamental part of the scientific method that physical theories should be falsifiable - even if this is not what physicists are always inclined to do. Of course, physicists will be wedded to their preferred theories, and will look to verify these theories and try to avoid any possible falsification - perhaps even to the extent of ignoring or 'fudging' anomalous data. However, this is a human failing, and not a failing of the idealised method of physics, in which I think that properly constituted theories should offer some means of falsification, and falsification should be attempted. I think that one important aspect of the scientific method here is that, whilst some physicists might seek to avoid falsification of their pet theories, other physicists will be attempting to achieve that very falsification in order to push alternative theories - i.e. we have peer review.

Of course, some theories might not be falsifiable yet - e.g. the proposed existence of the Higgs boson as predicted by the Standard Model. However, even such esoteric cosmological theories as Turok and Steinhardt's colliding branes offer some means of falsification - the detection of gravitational waves from the creation event would falsify that theory, for example. I think that most physicists would not consider a theory to be a properly scientific one if it offered no way of ever being falsified, even in principle.

Of course, scientists are not just going to discard a theory when some anomalous data turns up. They will try to introduce some additional, possibly ad hoc, element in order to explain away the mismatch between theory and observation. This happened with the discrepancy between the observed orbit of Uranus and what was predicted by Newton’s theories. Scientists got around this discrepancy by proposing the existence of another, unknown, planet that was influencing Uranus. This actually turned out to be correct, and the planet was Neptune. So, you might ask, what is the difference between a scientist doing this and an astrologer (or creationist etc.) doing it? Well, the difference is that in the case of science, these ad hoc elements introduced to explain away a mismatch will likely make the theory more falsifiable, not less. Also, science doesn’t always do this, whereas the pseudosciences seem to do nothing but introduce ad hoc elements into their theories in order to explain away discrepancies. By doing this, the pseudosciences are rendering their theories effectively unfalsifiable.

In the final analysis, I think that Popper was on the right track with his concept of demarcation based upon falsification, but I think that there is more to it than this. I believe that falsification is a necessary but not sufficient condition for a theory to be considered properly scientific, and I think that we need to add some criteria in order to properly demarcate between science and pseudoscience. For a theory to be considered scientific, I would suggest that we would wish it to have most or all of the following properties: consistency, parsimony, falsifiability, grounding in empirical evidence, reproducibility, tentativeness, and correctability.