PAUL BIRCH

APOCALYPSE NOT YET
In "The Menace of the Apocalyptic Individual", Libertarian Alliance Political Notes No 164, Brian Micklethwait asks whether freedom might not permit psychopathic individuals to destroy mankind. The short answer is: yes. He also asks whether libertarianism may therefore be just too dangerous a creed for the coming millennium. To which I answer: not if we're reasonable about it.
There are ways in which small groups or individual persons may in the future be capable of inflicting damage on a cataclysmic scale (more on this below). It would however be a major error to suppose that any form of political organisation (or lack thereof) can eliminate this danger. Risk is the inescapable and largely unpredictable concomitant of the historical process; even the ultraconservative dinosaurs could not avoid it.
Nevertheless, a free market would unquestionably heighten the risk of catastrophe in the near to medium term. A free market is dynamic. It accelerates change. It creates novelty, unpredictability. It is therefore dangerous.
However, a free market is also efficient at creating solutions, and certainly the threat of apocalypse is a problem (or universe of problems) to which solutions will be sought. So any given threat to which a solution is at all possible is likely to be short-lived. Are there any threats to which there is no possible defence? We don't know, but if there are we've probably had it whatever we do; there can be no way of making sure they never happen.
The bottom line is that there can be no absolute guarantees, but that a free society is still our best bet.
2. Dangerous Technologies Can't Be Banned Effectively
Whenever science is on the verge of new and potentially dangerous technologies the environmentalist lobby raises a clamour to suppress them. At the moment the no-nos include human cloning and genetically modified crops. Invariably the assumption is that if only the government passes a law banning the use of this technology, the problem will go away. To put it mildly: this assumption is flawed. The dangers inherent in these new technologies (however great or small these dangers may be) cannot be wished away by legislation, because legislation cannot entirely prevent their use or development.
Note that this is not the same as the argument that laws prohibiting, say, prostitution won't work, because you can't stop people visiting prostitutes. On the contrary, such laws, adequately enforced, can indeed effectively reduce the practice of prostitution to acceptably low levels; they work because complete success is neither required nor demanded.
By contrast, laws against "dangerous" technologies have to work perfectly, or fail absolutely. That deadly virus must never be released. That superhuman genetically engineered monster clone must never be created. Pandora's box must never be opened. And this is something no law can guarantee.
The most such legislation can do is slow the process down. Face it. Sooner or later humans will be cloned, genetically modified crops will be released into the wild, terrorist organisations will get their hands on nuclear bombs, lone psychotics will send man-made diseases through the mail or release nerve gas in department stores.
3. Banning New Technologies Increases The Risk
It gets worse. Banning new technologies increases the long-term risk. Why? Well, suppose you're a scientist researching genetically modified wheat and you're denied permission to continue or expand your experiments. What do you do? Perhaps you just give up and do something else instead. Or perhaps you struggle to overcome official resistance and eventually get to do the work. But perhaps you simply say "stuff them!" and carry on regardless. To avoid interference you keep your work secret or take refuge in some third world country. Either way, you are working under such disadvantages that you are unable to take the reasonable precautions you otherwise would. Your modified wheat escapes into the wild and hybridises with the native flora. Pandora's box is open.
This is not mere speculation. It has already happened. The scientist in question was American. The third world country was Brazil. I can't remember what the crop was; it might have been wheat.
In this case it didn't matter. Despite all the hoohah there is little risk from the sort of genetic modifications currently being undertaken. We have been genetically modifying crops to make them more suitable for human consumption for thousands of years, because natural crops, like the original potatoes, have a habit of being dangerously toxic, even lethal, a trick of self defence those despicable plants long ago developed to stop us eating them.
However, let the genetic engineers become frustrated with the burden of bureaucratic interference and one of them could easily decide to produce something nastier; say, a variety of grass hardier than the common or garden variety, enhanced with the genes for tetrodotoxin (puffer fish poison). Any cow that eats that stuff is dead meat. But revenge is sweet.
4. Banning New Technologies Gives Them To The Bad Guys First
In a free market, companies and individuals will mostly be developing new technologies with a view to benefiting themselves by benefiting other people. But if legitimate development and use is blocked, then darker motives must predominate. Technologies, or their nastier applications, will be developed for criminal purposes, for terrorism, for revenge by the paranoid or oppressed, or, like today's computer viruses, out of a mischievous or malicious nihilism — and, of course, by governments and their defence contractors for use against their own citizens and those of other countries.
I do not wish to claim that the development of weaponry for the defence of oneself or one's country is illegitimate, not even so-called offensive weaponry or weapons of mass destruction. One cannot hope for an effective defence without the ability to strike back at the enemy. Nor do I concur with the frequent claim that weapons of mass destruction are only of use to states and so would cease to be manufactured or developed in a libertarian society; on the contrary, I would expect a free market to froth over with super-cheap and ultra-powerful weapons, as with all other wonderful goodies. I see no difficulty in mass-producing thermonuclear devices at a few hundred quid a shot (electrically detonated spherical shell charge compacting a lithium hydride sphere with a highly sub-critical plutonium fission trigger at the core — demands very precise construction, but so do hard disks and other consumer computer parts). I'd buy 'em. Wouldn't you? There's nothing quite so much fun as designing a really devastating weapon, especially if it can blow up whole solar systems or galaxies (ask any Science Fiction author).
What I do claim is that in the free market most effort would be essentially defensive — directed first towards finding ways of protecting the customer against the violence of others. The offensive weaponry necessary to retaliate against or punish an individual attacker would in general not be anything like so sophisticated (if a terrorist threatens you with a nuke, you don't need a nuke of your own to deal with him — if you can find him, hitting him over the head with half a brick will do just as well).
5. You Have The Right To Own Any Weapon You Can Afford To Insure
This is what I believe to be, broadly speaking, the morally correct interpretation of the right to bear arms. You have the right to carry, or to own, a sword, a sidearm, a machine gun, an artillery piece, a tank, a ballistic missile, a heap of hydrogen bombs or the starship Enterprise. There's no limit, so long as you also carry third party insurance covering all the risks your use or ownership imposes upon others, and for which you are thereby responsible. The right to bear arms is unlimited, but not absolute.
It's like driving a car, a dangerous activity that the law quite properly forbids you to undertake without third party insurance. Why insurance? Well, when you drive a car (or do anything else) you are the one responsible for your actions and their consequences; if you cause injury or damage, you must pay for it; you have violated another's rights. Now if the damage inflicted were always within your ability to pay, there would be no need to carry insurance; you could pay up as and when it became necessary. Unfortunately, major accidents can easily kill or maim quite a lot of people, or create a loss well beyond your means. If you are not properly insured you are thus violating people's rights by subjecting them to a risk you cannot cover. Insurance allows you to spread the risk, paying the predictable and affordable expectation value of the loss rather than the actual unpredictable and possibly unaffordable amount.
If you carry a firearm, it may go off accidentally and kill someone. You should carry third party insurance against that event. If you carry a grenade, it may kill half a dozen. The insurance premium will take that into account. If you own a hydrogen bomb the premium is likely to be astronomical — though perhaps you could bring it down to an affordable level by agreeing to store the device in an underground containment structure in an area of low population density, with good cryptographic and physical security for the arming mechanism, and keys divided amongst several persons to reduce the risk if you lose your marbles. Just like reducing your car insurance by agreeing not to drink.
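The expectation-value pricing described above, and the way precautions bring the premium down, can be sketched in a few lines. This is a minimal illustration only; every probability and loss figure here is invented for the sake of the arithmetic, not an actuarial claim:

```python
def fair_premium(annual_probability: float, loss_if_event: float) -> float:
    """Actuarially fair annual premium: the expected value of the loss."""
    return annual_probability * loss_if_event

# A driver who could never cover a large claim personally instead pays
# the predictable expectation value each year (about 2,000 here).
car = fair_premium(annual_probability=0.001, loss_if_event=2_000_000)

# Precautions (containment, split keys, a no-drinking clause) lower the
# probability of the event, and the premium falls in direct proportion:
# here from roughly a million a year to roughly a thousand.
bomb = fair_premium(1e-6, 1e12)
secured_bomb = fair_premium(1e-9, 1e12)
print(car, bomb, secured_bomb)
```

The point of the sketch is simply that the premium scales linearly with both the probability and the size of the loss, which is why storage and security arrangements can turn an astronomical premium into an affordable one.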
Now notice that if we find ourselves wholly unable to reduce the danger of psychopathic misuse of these or any other articles to acceptable levels, it follows that we will be unable to meet the costs of insuring them. In those circumstances there can be no moral objection to banning them; indeed, there will be a moral imperative to do so; rights will be violated if we do not ban them (unfortunately, rights may still be violated even if we do ban them, since the ban is unlikely to be 100% effective). But if anyone subsequently works out how to diminish the risks to the point at which he can fund them, then his right to own or carry the articles is thereby restored.
6. Psychopaths Aren't Insane, You Know
Well, actually, they are. But not as much as you think. Very few killers, even psychopathic ones, genuinely wish to kill the maximum number of people possible. Fewer than one percent of them try to kill more than a few dozen people, and almost none go further. That's not because they couldn't. From the dawn of civilisation it has been possible for a single berserker with a sword to run amok in a market place, or a nunnery, or a school, and slay many times that number. Nowadays, driving a car (better yet, a lorry) into a crowd at 70 mph should take out at least as many with less effort; and with a little careful planning anyone should be able to get his score well into triple figures.
But what do today's nut cases actually do? They put on a show, using a rifle or other firearm to shoot their victims one by one. Despite what the anti-gun media would have you believe, they don't simply blaze away indiscriminately on full automatic. Instead they choose their targets individually, leaving seconds or minutes between successive shots. For example, in the Hungerford massacre, the killer's average rate of fire was around one round per five minutes. This is typical, and falls far short of the full capability of the weapons employed.
The greatest numbers of victims of individual murderers are those of serial killers like Buhram, a ritual strangler of the Indian cult of Thuggee, whose score in 1840 was at least 931, or the Countess Bathory of Hungary (1560-1614), the original female vampire, who in all murdered 610 young girls and was alleged to have bathed in their blood to renew her youth (source: The Guinness Book of Records). Even our modern terrorists have so far failed to improve upon these records!
These statistics should reassure us somewhat — though not entirely. Perhaps one person per hundred million per decade (or rather more in the United States) will kill a few dozen at one go. We can live with that — it's negligible. Furthermore, it seems likely that a free, prosperous and just society will produce fewer active psychopaths than a more repressive one. Yet surely there will still be some resentful lunatics; and out of these I guess that perhaps one in ten might genuinely wish to kill the maximum possible number of people. So in the United Kingdom we might hope to go a few hundred years between major incidents. A psychopath's hydrogen bomb might then kill up to a million people, corresponding to a murder rate around 10 per 100,000 per year — high, but not unprecedented, and still far lower than the death rates generally attributable to war and other state activities.
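The back-of-envelope rate quoted above does check out. A minimal sketch, assuming a UK population of about sixty million and taking "a few hundred years" as two hundred (both figures read off the paragraph above, the first an assumption of mine):

```python
population = 60_000_000           # assumed UK population
years_between_incidents = 200     # "a few hundred years"
deaths_per_incident = 1_000_000   # "up to a million people"

deaths_per_year = deaths_per_incident / years_between_incidents
rate_per_100k = deaths_per_year / population * 100_000
print(rate_per_100k)  # roughly 8, i.e. "around 10 per 100,000 per year"
```

Stretching the interval to three or four hundred years brings the rate down proportionately, which is why the figure is quoted only to the nearest order of magnitude.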
We could considerably reduce the number of casualties by better building design or similar techniques, but then the psychopath could plant more than one device (although psychopaths aren't renowned for effective planning) or use a bigger bomb. Or drive around the country squirting germs out of an aerosol. Or something. We can't be sure that that one total wacko per century won't succeed in wasting all of us.
7. Decentralisation As A Fallback Solution
If the capacity for evil of apocalyptic individuals cannot be contained, there remains at least one solution short of totalitarianism: decentralisation and quarantine. In the free market future, mankind will spread throughout the universe, for the most part in artificially constructed space habitats. No matter then how much destructive power an individual may control, he will probably be unable to do more than destroy the single settlement of which he is part. So if we limit (most) space habitats to populations of ten million or less, and impose fairly stringent controls and quarantine regulations on contact between colonies, the risk of catastrophe within any person's lifetime should remain acceptably low.
This is not an especially happy scenario, nor yet a catastrophically bad one. Indeed, it's probably better than it looks; no one need be forced into living in a small settlement, it's just that since larger ones will be more vulnerable most people will naturally choose to live somewhere smaller and safer, and to accept reasonable contractual limitations on their freedom of travel outside the colony. It's a bit like living in the country instead of a big city. Still, most of us would surely hope for a future with more freedom than this, especially when we realise that habitats with billions, even trillions of inhabitants are technically feasible.
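The scale of protection that decentralisation buys can be put in rough figures. A toy comparison, with both population figures assumed purely for illustration:

```python
# If everyone lived in one place, a single apocalyptic individual could
# in the worst case kill everyone; confined to a settlement of ten
# million, the same individual's worst case is capped at that settlement.
total_population = 10_000_000_000   # assumed: mankind spread through space
habitat_size = 10_000_000           # settlements limited to ten million

worst_case_reduction = total_population // habitat_size
print(worst_case_reduction)  # 1000
```

A thousandfold cut in the worst case, at the price of quarantine and small settlements; hence the trade-off discussed above between safety and the giant habitats that are technically feasible.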
8. Strict Insured Liability May Provide A Happier Solution
What I mean here is this: As I pointed out above, ownership of potentially dangerous articles carries responsibility for any resulting harm to others. Third party insurance to cover all such risks beyond your means is morally (and should be legally) mandatory. These risks include the possibility that you (or someone else) will become deranged and attempt to use the articles to harm others; or that they will be stolen and subsequently fall into the hands of terrorists or madmen; or that they will fall into the hands of people who do not understand the dangers; and so on. It is your responsibility to ensure that all the (foreseeable?) consequences of your ownership, direct and indirect, will be paid for. This does not mean that you must pay for all the consequences yourself (other people are responsible for their own actions too), only that you must make certain that someone will pay. You achieve this by taking out insurance.
But what if psychopaths can build their own antimatter bombs or nanoviruses from everyday objects? What if the household automatic everything-maker can also be programmed to churn out Terminator androids? Doesn't this vitiate the practice of third party insurance and the utility of strict liability? Not if we follow the logic through.
In the first place, since human beings are themselves dangerous animals, strict liability probably also implies that everyone should carry third party insurance against his own potential insanity, and if need be, reduce the premiums by subjecting himself to risk-reducing restrictions. You should also take out insurance before bringing a new human being, or man-eating tiger, or dog, or dinosaur, or android, or genetically modified crop into the world. You are not morally entitled to freedoms that impose risks you cannot afford.
More generally, these "everyday objects" must also be considered potentially dangerous articles, in that people can use them to construct articles dangerous in themselves. Thus their lawful ownership is similarly conditional upon third party insurance covering these further risks. As is the ownership of the tools used to fabricate these objects, and those used to fabricate the tools, and so on. Strict liability means that all ownership carries the responsibility of ensuring that any misuse, however indirect, is paid for.
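A sketch of how such chained liability might price out, assuming premiums are simply set at expected loss; every figure here is invented for illustration and is not a claim about real insurance practice:

```python
def premium(direct_risk, makeable):
    """direct_risk: expected annual loss from the article itself.
    makeable: list of (annual probability the article is used to make X,
              expected loss if X is made and misused), one pair per X."""
    return direct_risk + sum(p * loss for p, loss in makeable)

# An everyday fabricator: trivial direct risk, but a tiny chance of
# being turned to making something catastrophic adds a surcharge on
# top of the direct risk.
everyday = premium(10.0, [(1e-7, 1e9)])
print(everyday)
```

The indirect term is what makes ownership "conditional upon third party insurance covering these further risks": the larger the catastrophe an article can indirectly enable, the smaller its misuse probability must be made before an ordinary individual can afford to own it.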
What this means in practice is that the ordinary individual will only be able to afford direct access to devices and technologies where the risk of direct or indirect apocalyptic misuse is sufficiently small. More dangerous (and thus more costly) machinery will be owned mainly by large companies or government agencies, who can afford either additional security to reduce the risks, or higher premiums.
Again, this is not an especially happy conclusion. Many may feel that no one has the right to prevent them making innocent use of any technology whatsoever. I sympathise. But it won't wash. If your access to a technology threatens other people's lives, even indirectly, your use is not innocent. It is not harmless. Even in a free and just society, freedom is not unlimited, and rights must be bought and wrongs paid for.
Short of attempting to return to a subsistence economy of impoverished peasants (and even this has its dangers), the possibility of apocalypse as a result of technological progress cannot be excluded, and doomsday may only be hastened by the impetus of a free market economy. Nevertheless, I believe that a free market within an ultraminimal state also offers our best hope of averting catastrophe, of reducing the threat that rogue individuals or groups may pose to the future of mankind. Although extreme libertarians may demur, I hold that free market solutions fully consistent with justice and morality can indeed be found.
© Paul Birch, 28th Aug. 2000