The nature of universal moral law is considered. It is argued that simple formulations, including the utilitarian criterion of the "greatest good of the greatest number", are flawed; that codes of ethics such as the ten commandments, though useful, are only contingent; and that other descriptions appear arbitrary. To overcome these difficulties a new moral concept and a corresponding moral principle are presented: these are the Entropy of Choice, and the Maximisation of the Entropy of Choice.


If Moral Law exists, we might conceive it as either absolute or relative — as universal, or cultural, or personal. However, if it is in fact only relative or contingent, if it is nothing more than the outcome of a particular physical and cultural environment (that might have been otherwise), then although it may have pragmatic value, it can have no fundamental claim on our allegiance. For Morality to have any meaning, it must be Absolute. Or so I contend.

One assumption must be made at the outset: that free will exists. Without free will, there could be no moral law, only physical law. Worse, the whole edifice of reason is based upon the premise that human thought is more than mere computation (since a computer can be programmed even more readily to produce nonsense than to produce sense, yet has of itself no possibility of determining which is which, being as totally confident of its wrong answers as its right ones). This is not to suggest that human actions must be regarded as completely free, unconstrained by our physical or spiritual nature: rather, that they are not wholly predetermined; that some element of choice remains.

1.1 The Fallacy of Moral Relativism

Of diverse attempts at formulating a doctrine of moral relativism, secular humanism is among the most popular, comprehensive and organised. It is in tune with much "liberal" political thought and is the explicit or implicit creed of many scientists. It is however deeply flawed, its reductionist arguments self-defeating.

The Humanist Manifesto (Kurtz, P., ed., Humanist Manifesto I & II, Prometheus Books, Buffalo, New York, 1973) states ". . . we affirm that moral values derive . . . from human experience. Ethics is autonomous and situational, needing no theological or ideological sanction . . .", but later claims " . . . the individual must experience a full range of civil liberties in all societies . . . freedom of speech, political democracy . . . the principle of moral equality must be furthered . . . we believe in cultural diversity . . . we believe in equal rights for men and women . . . the world community must renounce . . . violence . . . we believe in . . . peaceful adjudication . . . the conservation of nature is a moral value . . . disproportions in wealth should be reduced . . ." (my italics).

Whatever the merits, or otherwise, of these humanist ideals, it must be observed that if morality derives only from human experience any individual is at liberty to reject them out of hand. It would make at least as much sense, historically, to claim ". . . the individual must be enslaved . . . civil liberties and diversity must be crushed . . . the world must resolve disputes through violence . . . the ascendancy of man is a moral value . . . disproportions in wealth should be increased . . . ".

Such sophistries, wherein the philosopher uses loaded words like should and ought to trick the reader into accepting one particular form of situational ethics, underlie all attempts at moral relativism (with the possible exception of honest hedonism, which in saying "do what you want" effectively rejects the notion of morality altogether).

1.2 The Ubiquity of Moral Absolutism

We are all moral absolutists: we all believe in right and wrong. This is as true of the secular humanist as it is of the devout Christian; for although one may espouse moral relativism in theory, it is scarcely possible to maintain such a position in practice. It is an empirical fact that men live (or attempt to live) by absolute ethics even when they claim only to believe in relative ethics. Indeed, there is a surprising degree of consensus on the everyday basics of right and wrong (see, for example, C.S. Lewis, Mere Christianity, Fontana (Collins), London, 1952, p15 et seq.); even contentious issues are usually disputed less on moral grounds than on questions of fact — will a policy actually create the desired result?

Of course, the existence of human conscience does not of itself prove that an underlying absolute morality also exists; nevertheless, this is the assumption upon which human beings almost always operate and as such is the second key premise of this essay.

1.3 The Necessity of a Moral Principle

If morality is absolute, then all moral law must ultimately derive from an absolute and invariable Moral Principle. In this sense, morality is universal, in that once all parameters of a well-formed moral question have been fully and truly weighed according to the moral principle, only one answer to that question is possible (this is not to say that only one option can then be morally justified, only that every competent judge will agree as to which options are right and which wrong).

Suppose the contrary: that several principles coexist. Either they never conflict, in which case they can be combined into a single principle (simply AND them together); or they sometimes conflict, in which case we require a deeper principle to decide between them. Unless there exists an infinity of such principles, in infinite regress, we are led finally to a single all-inclusive Principle (and even if there is an infinite regress, we may be able to find a single meta-principle that defines the combinatorial procedure). None of this entails that the Principle must be simple, but it would be hard to accept one as fundamental that was not.

Clearly, one cannot expect ordinary moral agents to have either the knowledge or the ability to evaluate every situation from first principles. As finite and imperfect human beings we naturally must formulate additional moral laws for our guidance, as a gloss upon the underlying moral principle (which, indeed, we may not even be able unambiguously to state).

Consider the commandment "Thou shalt not kill" (Exodus 20v13). Would any of us dismiss this as unsound? And yet . . . surely there are circumstances in which killing is justified. In war, say, or when a maniac with a chain saw is about to disembowel your little girl. That commandment more properly translates from the Hebrew as "Thou shalt not murder". And "murder" means "wrongful killing". So where does that get us? — "It's wrong to kill when it's wrong to kill". We have thus emptied the commandment of any content and still left ourselves with the question of how we know when it's wrong: to which our practical answer is usually that we "just know".

But the voice of conscience is not always clear: we have to weigh right against wrong; the death of a maniac versus the death of a little girl; the death of a soldier versus the death of a nation. The applicability of a commandment depends on the circumstances; our ethics are contingent, derivative, we assume, of some deeper law.


If morality is indeed based upon a single absolute moral principle, then it is objective, consisting of rational inference from invariable principle to contingent application. Upon this "hangs all the law and the prophets" (Matt 22v40). Unfortunately, it has not proved easy to state exactly what this Principle actually is!

"Thou shalt love the Lord thy God . . . and . . . thy neighbour as thyself" (Matt 22vv37-39) is a precept running deeper than the ten commandments — but not quite deep enough. What does "loving thy neighbour" really mean? The parable of the Good Samaritan (Luke 10vv29-37) is sufficient to show how difficult it can be to decide.

2.1 The Fallacy of Simplistic Formulations

Following the ancient rules of the medical profession one may look to exhortations to "do no harm" or to "minimise suffering", or perhaps to the Golden Rule "do as you would be done by". Such simplistic formulations of the moral principle turn out to be impracticable (not even God could avoid causing some people some measure of temporary inconvenience), downright fallacious (one can minimise suffering by killing everyone painlessly now), or both (you just can't give other people everything you'd like for yourself, and they wouldn't always want it anyhow). The common difficulty with such formulations seems to be a lack of balance, a lack of comprehension of the complexities of existence.

More recent libertarian notions like "do not initiate coercion" are considerably better, but still inadequate (murder is initiating coercion, but so is pushing a child out of the way of a falling boulder); the best of these, in the admittedly biased opinion of its author, is "do not cheat", the epitome of common law (but even this won't do in every case). Besides, even common sense rules like these need to be explained before they can be put into practice.

2.2 The Utilitarian Criterion

The most famous candidate for the Moral Principle is the Utilitarian Criterion of the "greatest good of the greatest number" (often stated as the "greatest happiness of the greatest number" after, surprisingly, not Jeremy Bentham (1748-1832) but, according to the Oxford Dictionary of Quotations, one Francis Hutcheson (1694-1746)). Regrettably, this too turns out to be flawed, most fundamentally in that it fails to give any unambiguous method of measuring good or calculating the total.

In free-market economics utility is quantified as "revealed preference", that is, how much money people are willing to pay. For its purpose, this is an excellent formulation, cutting through the philosophical thicket of interpersonal comparisons; but even economists recognise that they are fudging, that revealed preference is not identical with the underlying utilitarian "satisfaction" (though undoubtedly highly correlated), that the marginal utility of money is not necessarily the same for all persons. Moreover, "revealed preference" only works in a free-market environment, yet the morality of the free market is itself often called into question; the argument becomes circular.

Most criticism of utilitarianism is misjudged: a patently absurd answer is typically derived through a failure to include all the relevant sources of utility. (Why should we keep our earlier promises, for instance, instead of maximising our utility now? — Because the act of keeping promises has its own utility, and a world in which promises are kept is one in which utility is more easily created). Without seeming to set up straw men or to argue at inordinate length it is difficult to pin down the inadequacies of utilitarian formulations of morality. For present purposes, however, it will suffice to note that the fundamental nature of "utility" is unspecified, and that before a utilitarian calculus can be employed it will be necessary to provide an unambiguous and quantitative definition; in default of such a definition utilitarianism can always be analysed away as "maximising the good by maximising the good".

Consider also the problem of ten people with ten units of happiness among them: it will be widely agreed that it is better for each to have one unit of happiness than for one to hold all ten units, even though the total happiness is the same. This suggests that one should consider the distribution as well as the sum of good.

2.3 Rawls' Theory of Justice

It should hardly be necessary to point out the absurdity of the fully egalitarian position "make all equal" (which one can best achieve by killing everybody and leaving them with nothing). However, various authors (seminally John Rawls, A Theory of Justice, Oxford University Press, 1972) have attempted to define distributional laws of less extreme effect.

Rawls' theory can be summed up as "maximise the least liberty, then the least prosperity". I should point out that Rawls would probably not accept this interpretation; unfortunately, his own formulations are irredeemably vague. At no point does he give a precise and adequate specification of his "two principles" or his measures of good. This is not the occasion for a complete critique of Rawls' work (my copy of his 600-page book, which I read recently with a view to "checking out the competition", now contains some 90,000 words of my own marginal notes!); however, we may note his failure to provide any quantitative specification of the "index of primary goods" and the falsity of his belief that the "difference principle" eliminates the problem of interpersonal comparisons.

A Rawlsian "two principles" theory (cf. Rawls p302) can be stated more precisely as follows:

1) First: choose that constitution such that any other would reduce the least expectation of liberty;

2) Then: choose that distribution such that any other would reduce the least expectation of satisfaction.

If more than one constitution or distribution satisfies the principle then, subject to this constraint, apply recursively to the next-least expectation (and so on).
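The recursive selection described above amounts to what decision theorists call a "leximin" ordering. A minimal sketch, with invented example distributions (the data and names are mine, not Rawls'):

```python
# Illustrative sketch of the recursive ("leximin") procedure stated above:
# maximise the least expectation, then, among ties, the next-least, and so on.
# Comparing sorted lists lexicographically implements exactly this recursion.

def leximin_best(distributions):
    """Pick the distribution whose sorted expectations are
    lexicographically greatest (1st element = the least expectation)."""
    return max(distributions, key=lambda d: sorted(d))

candidates = [
    [1, 9, 9],   # least expectation is 1: rejected first
    [3, 4, 5],   # least is 3, next-least is 4: chosen
    [3, 3, 10],  # ties on the least (3), loses on the next-least
]
best = leximin_best(candidates)  # -> [3, 4, 5]
```

Note how the third candidate is rejected only at the second stage of the recursion, exactly as the "next-least expectation" clause requires.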

Such a theory, however, entails consequences that Rawls would not accept (which may go some way to explain the vagueness and inconsistencies in his work). Nor need we accept them. To abide by such principles would be to live our whole lives and sacrifice our whole pleasure for the sake of the minutest improvement in the condition of the least-endowed members of society (need they even be human, or should we sacrifice ourselves also for microbes?).

Far from solving the problem of interpersonal comparison, we have inflated it to absurd proportions.

2.4 Generalised Utilitarian Criteria

It seems clear that we require a principle precise enough to be given a mathematical formulation (how else will we be able to draw finely balanced distinctions?). How to proceed? We might consider maximising a function such as G = N<g>(2 − <exp(1 − g_i/<g>)>) over a set of N persons receiving goodnesses g_i, i = 1,…,N, the angle brackets denoting expectation values. This has the right sort of properties. Unfortunately, it also seems rather arbitrary, G = N<g>(3 − 2<exp(1 − g_i/<g>)>) having similar behaviour differing in detail. Another suitable function might be G = N<g>(Π g_i)^(1/N).
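The behaviour of the first two candidate functions can be checked numerically. A small sketch (function and variable names are mine): both reward spreading a fixed total of good evenly, but by different margins, which is precisely the arbitrariness complained of above.

```python
import math

# Numerical sketch of two of the candidate functions G discussed above.
# <.> denotes an expectation (mean) over the N persons.

def G1(g):  # G = N<g>(2 - <exp(1 - g_i/<g>)>)
    N = len(g)
    m = sum(g) / N
    return N * m * (2 - sum(math.exp(1 - gi / m) for gi in g) / N)

def G2(g):  # G = N<g>(3 - 2<exp(1 - g_i/<g>)>)
    N = len(g)
    m = sum(g) / N
    return N * m * (3 - 2 * sum(math.exp(1 - gi / m) for gi in g) / N)

equal = [1.0] * 10           # ten units of good spread evenly
skewed = [10.0] + [0.0] * 9  # one person holds all ten units

print(G1(equal), G1(skewed))  # 10.0 vs about -4.5
print(G2(equal), G2(skewed))  # 10.0 vs about -18.9
```

Both functions prefer the even spread (recall the ten-people example of section 2.2), yet they disagree about how bad the skewed case is: similar behaviour, differing in detail.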

A further difficulty lies in attempting to provide a quantitative measure of goodness, which is probably not the same thing as happiness even in the long term (and even happiness is not easily measured). We might be more successful with a system based on ranks (A is better than B, etc.).

Let persons be given a rank r_i in order of good. Then consider an action A which, if all other persons' good were held constant, would change each person's rank by d_i (note that these changes are not the same as the actual rank changes occurring under A, since in general Σ d_i is non-zero whereas the sum of the actual rank changes is necessarily zero). The net change in good due to action A is then ΔG = Σ d_i. From the universe of actions A_j find the one that maximises ΔG and denote this as the morally correct action under the generalised utilitarian criterion.
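The rank-based procedure can be sketched directly in code; the scenario data below is invented purely for illustration:

```python
# Sketch of the rank-based criterion: for each person, compute the
# hypothetical rank change d_i that their new level of good would produce
# with everyone else's good held constant, then pick the action
# maximising the net change ΔG = Σ d_i.

def rank_of(value, others):
    """Rank of `value` among a fixed field of `others` (1 = worst off)."""
    return 1 + sum(1 for o in others if o < value)

def delta_G(before, after):
    total = 0
    for i in range(len(before)):
        others = before[:i] + before[i + 1:]  # everyone else held constant
        total += rank_of(after[i], others) - rank_of(before[i], others)
    return total

before = [1, 5, 9]
actions = {
    "help the worst off": [6, 5, 9],   # d = (+1, 0, 0), ΔG = 1
    "help the best off":  [1, 5, 12],  # d = (0, 0, 0),  ΔG = 0
}
best = max(actions, key=lambda a: delta_G(before, actions[a]))
```

Note that the second action, however large in absolute terms, moves nobody past anybody else and so scores zero: the equal weighting of rank steps criticised below is already visible here.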

Although this algorithm avoids much of the arbitrariness of an explicit mathematical form, it gives each step in rank the same weight and thus fails to take account of the range of possible good. Theoretically, one might hope to correct this by including in the ranking scheme the good of all potential moral agents over all possible histories. However, this would still leave unanswered the essential question of how good is to be ranked in the first place.


Having examined a variety of inadequate approaches and demonstrated that difficulties with them are unlikely to be easily overcome, we turn to a different approach. In philosophy, asking the right questions is half the battle. Let us ask some.

What defines a moral agent? — The ability to make conscious choices. What is a moral action? — One that is consciously chosen. When does a person choose right? — When he is also free to do wrong. An automaton may be employed for good or ill: should we praise the automaton for the former or blame it for the latter? — No: it is not a moral agent; it has no consciousness; it is a tool, and responsibility rests elsewhere. But what if a machine does display consciousness (or seems to)? — Then indeed we may hold it morally responsible. The question of choice — of free will — is thus fundamental to the very concept of morality.

Yet if to act morally is to act freely, and if to be free is to be a moral agent, ought not the maximisation of freedom, the maximisation of that very ability to act morally, be viewed as the end and deciding principle of morality itself? — It is hard to see what other reasonable basis there could be.

What then is choice — this root and arbiter of moral law? Not merely a philosophical notion, a metaphysical idea, a mystical hope, or a pious doctrine; it is a precise concept, rooted in mathematics and the theory of probability. Moral choice is of the mind, but its physical counterpart is indispensable in the natural sciences: quantum mechanics would be lost without it and elementary particles, the building blocks of the physical world, would not even exist. We find it throbbing at the heart of thermodynamics and pulsing through the arteries of communication; it controls the merger and evaporation of black holes and the vast expansion of the universe; it drives the chaotic engine of the weather and lies behind the intricate beauty of the Mandelbrot set.

What then is choice? — It is data: and the quantity of information is its Entropy. Each choice is a number: and the Entropy of Choice is the length of that number, the number of bits required to describe it.

Entropy is of such fundamental importance in the sciences that its extension to the moral sphere has a curious inevitability (or is it the other way round?). Understanding of entropy is as yet far from complete, but it is noteworthy that difficulties with the philosophy of the Entropy of Choice (notably the distinction between random choices and meaningful choices) have their counterparts in ongoing controversies in the natural sciences.

3.1 The Universal Moral Principle

The Universal Moral Principle (tentatively proposed) is simply this: "Thou shalt maximise the entropy of choice":

maximise C = Σ ln(N), the sum taken over all moral agents

This is what it means. C is the entropy of choice: it is the sum, over all moral agents, of the natural logarithm of the effective number N of distinct free choices; ln(N) is the entropy of choice available to each individual agent. The use of the natural logarithm (base e) follows the convention of the physical sciences (thermodynamics) and formal mathematics; in computer science and information theory it is more usual to employ logarithms to base 2 and measure entropy in "bits"; we may use whichever is more convenient, remembering that 1 "bit" or "binary digit" = 0.6931 "nit" or "natural digit".

Free choices are worth more than constrained ones; and unlikely actions are more significant than likely ones. How is this to be quantified? Information theory provides the answer: for each agent, evaluate Σ p_i ln(1/p_i) over all distinct choices i = 1,…,n, where p_i is the probability of each choice. The value of an action, ln(1/p_i), thus increases with its improbability; however, the overall entropy of choice is reduced, since Σ p_i ln(1/p_i) < ln(n) whenever the p_i are not all equal to 1/n. The maximum entropy, ln(n), obtains for perfectly free, equiprobable choices.
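The formula above is the standard Shannon entropy; a minimal sketch (example probabilities are mine) confirms that the uniform, perfectly free distribution attains the maximum ln(n):

```python
import math

# Entropy of choice for a single agent, as defined above:
# sum of p_i ln(1/p_i), in nats; impossible choices contribute nothing.

def entropy(p):
    return sum(pi * math.log(1 / pi) for pi in p if pi > 0)

free = [0.25, 0.25, 0.25, 0.25]         # equiprobable: entropy = ln(4)
constrained = [0.97, 0.01, 0.01, 0.01]  # habitual choice: far less entropy

print(entropy(free), math.log(4))  # both about 1.386
print(entropy(constrained))        # about 0.17
```

The constrained agent's rare choices are individually valuable (ln(1/0.01) is large) yet the near-certainty of the habitual option drags the total well below ln(4).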

We need to discriminate here between microchoices and macrochoices (corresponding to the microstates and macrostates of thermodynamic systems). In so far as the moral agents are concerned, microchoices are indistinguishable; they make no difference. Imagine you are offered a choice from a plate of identical plain biscuits; the degree of choice is insignificant; but if a variety of biscuits were presented, the number of significant macrochoices would equal the number of distinct varieties.

We can go further by adding in the entropy of the distinctiveness of the macrochoices, the measure of their separation in phase-space. Imagine you are offered a choice of wines in a restaurant; if like me you can just about tell red wine from white, the entropy will amount to little more than one bit; but if you are a connoisseur, capable of appreciating all those subtle nuances of flavour, the entropy will be correspondingly greater, even if you are only allowed to choose from a limited selection.

Let us attempt to quantify this. If one is sensitive to m bits one can distinguish 2^m states, corresponding to a potential 2^m choices and an entropy of m bits. However, further choices exist, in that one may choose to ignore, or not bother distinguishing, one or more of the m bits, expanding the total number of choices to 3^m and the entropy to 1.58m bits. Even if only one of the 2^m states is actually offered, 2^m choices of reaction and m bits of entropy remain (where an insensitive person would have no choices and no entropy). If 2^r states are offered (ringing the changes on r bits) the effective number of choices is an intermediate 3^r × 2^(m−r), with an entropy of m + 0.58r bits (limited to 3^m choices and 1.58m bits for r > m). More complex arrangements can be addressed through the methods of combinatorial analysis and probability theory.
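The counting argument above can be checked directly; a small sketch (variable names are mine), assuming r ≤ m as in the text:

```python
import math

# An agent sensitive to m bits is offered variation on r of them (r <= m).
# Each offered bit admits three responses (attend to either value, or
# ignore it); each unoffered bit still admits two reactions.

def effective_choices(m, r):
    return 3 ** r * 2 ** (m - r)

def entropy_bits(m, r):
    # equals m + r*(log2(3) - 1), i.e. m + 0.58r bits as stated above
    return math.log2(effective_choices(m, r))

print(effective_choices(5, 0), entropy_bits(5, 0))  # 32 choices, 5.0 bits
print(effective_choices(5, 5), entropy_bits(5, 5))  # 243 choices, ~7.92 bits
```

The two endpoints recover the limits in the text: r = 0 leaves the bare m bits of reaction, while r = m yields the full 3^m choices and 1.58m bits.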

Future choices should also be included; to the extent that they are uncertain, we must sum over all possibilities, weighted according to probability as outlined above.

To sum up: just how free one's choices are and just how much knowledge informs them directly determines their value. The deeper experiences of a prodigy carry a greater weight than those of a dullard; even so, those of a dullard are far from negligible. The complete entropy of choice encompasses all possibilities for all moral agents for all time; and it is this total that is to be maximised.

3.2 Some Consequences

The entropy of choice will actually have a definite value for every distinct set of possibilities, but exhaustive calculation is clearly impracticable; we possess neither the detailed knowledge nor the computational power — nor, if the truth be known, the inclination.