The Orion's Arm Universe Project Forums





The case for Autointerdiction.
#1
As far as we can tell right now, the odds of our getting invaded by hostile beings from another world are zero. The odds that beings from another world will wipe us out with a giant planetbuster kinetic-kill missile are likewise zero.

But as soon as we put humans onto another world, those odds are no longer zero. Give them a few hundred years, or a few thousand, and their descendants will be numerous, no longer loyal to us, and no longer answerable to our laws. How much longer after that before wars start breaking out?

So, given that knowledge, why would we ever allow any colony of human beings capable of surviving off Earth to leave? Ever? Under any circumstances?
#2
This is one of my favourite solutions to the Fermi Paradox. No sentient species should colonise a distant object if there is any risk of that object attacking the homeworld. It is mentioned on the site, under the heading Light Speed Paranoia:
http://www.orionsarm.com/eg-article/4f82cdbe9e378

I hope, and expect, that a set of truly intelligent entities would find some way of avoiding the paranoia trap, but as Slartibartfast once said, everyone in the universe is paranoid (and sometimes with good reason).
#3
Various reasons:

1) We already have the means to effectively destroy our civilization/species via a nuclear war, or via biological warfare or terrorism, and the odds of that are already not zero. So in some respects that ship has already sailed.

2) If you live in a country significantly weaker than superpower category then there is already a finite chance that another culture could invade you and crush your civilization. So, again, in some respects that ship has already sailed, at least for much of our current population.

3) Because the universe is a dangerous place. Sooner or later, whether in the form of an asteroid, a GRB, a supernova, colliding neutron stars, a super-volcano eruption (Yellowstone is ticking), or something else we haven't discovered yet, our planet, and possibly our solar system, is going to get hit with an event sufficient to destroy our civilization and/or our species. The only way to prevent or survive that will be to have offshoots of our species somewhere else (preferably many somewhere elses).

4) Because eventually the sun is going to go red giant, and the only way to survive that is to have some portion of our species somewhere else, and/or to have the means to move our species somewhere else (or move in a new star or re-engineer the one we have). The means to do any of that either grows out of developing interplanetary and interstellar travel or learning to manage (technically and socially) mass and energy flows that could wipe out life on Earth or allow someone to conquer it.

5) While it may currently appear that there is no one out there who could threaten us, that situation could change at any moment. Until/unless we have the means to match their abilities, escape them, or spread ourselves too widely to be completely conquered or destroyed, we are sitting ducks (paranoia is a two-way street).

My 2c worth,

Todd
#4
1) Sure, we have the means to destroy our civilization and the odds are nonzero. But by creating a population that would not be destroyed you create a population that would have less incentive to refrain from using it. Good for the human race as a whole, in terms of survival - but not good for the people on the original world who are making the decision about whether to let anyone leave. IOW, even if you increase the odds of civilization surviving, letting anyone out would be lowering the odds of your own descendants or your own home surviving.

2) Sure there's already a chance of being invaded by others on earth. Why would that make it a good idea for anyone to make the chances worse?

3) Sure, there's a significant risk that a GRB, supernova, colliding neutron star, etc, could sterilize earth. But the decisions are being made in the interests of existing nations, not in the interests of the survival of life as a whole. Adding human beings to the set of potentially hostile forces away from Earth is still adding to the set of potentially hostile forces that can attack Earth. The calculus of nations doesn't care if someone *ELSE* survives.

4) Yes, eventually Earth will be destroyed. But A, that's a long time from now, and B, considering the time scales involved, the narrow-sighted little buggers probably see Earth being destroyed, as a result of releasing humans from here, in 1% or less of the time the natural destruction would take. So, from the 'Dr. Strangelove' perspective, Earth survives 100 times as long if they don't let anyone go, and the fact that this dooms humankind to extinction is merely collateral damage.

5) Creating a real, definite, new existential threat in order to deal with a hypothetical and potentially nonexistent threat? Let's ask Dr. Strangelove again. The answer isn't just no, it's Hell no.

I hate this, but if you take the short-sighted, narrow view of preserving Earth and her existence over all else, the logic is pretty damn compelling. And our decision makers have always been all about short-sighted and narrow.
#5
(01-17-2017, 02:36 PM)Bear Wrote: 1) Sure, we have the means to destroy our civilization and the odds are nonzero. But by creating a population that would not be destroyed you create a population that would have less incentive to refrain from using it. Good for the human race as a whole, in terms of survival - but not good for the people on the original world who are making the decision about whether to let anyone leave. IOW, even if you increase the odds of civilization surviving, letting anyone out would be lowering the odds of your own descendants or your own home surviving.

Firstly, with biowarfare you can already do this in principle. Inoculate your population and kill off everyone else with a disease. I don't see any real difference myself - dead is dead.

Secondly, this seems to me to be an excellent reason to make an effort to see to it that your descendants (at least some of them) are among those living elsewhere.

Thirdly, if you've got the tech to send significant numbers of people elsewhere, you can probably protect Earth from most threats they could pose, or at least have the means to strike back - so the issue of distant populations being able to act with impunity doesn't really exist. It may take a while for retaliation to arrive, but it can arrive if people are determined about it.

Coming at this from another direction, if you have many different colonies all over the place, it is not in any of their interests for one or a subset of their number to go around destroying any of the others. Doing so would likely result in retaliation in kind. So a form of MAD might be in effect.

(01-17-2017, 02:36 PM)Bear Wrote: 2) Sure there's already a chance of being invaded by others on earth. Why would that make it a good idea for anyone to make the chances worse?

Why would anyone actually want to invade? Much of SF notwithstanding, planets are really not great places to live, for any number of reasons. Odds are that any serious colonization project would involve some form of space habs making use of resources in much shallower gravity wells than Earth's. To such people, mucking around in a deep gravity well, while dealing with earthquakes, hurricanes, tornados, and various parasites and pests that their ancestors might have left behind, would likely seem rather pointless.

And again, if you are going to be bothered by being invaded, then the odds seem much greater that it will happen from a nearby nation. Does that mean you should take pre-emptive action to eliminate everyone else? Some might think that way, but most don't seem to.

(01-17-2017, 02:36 PM)Bear Wrote: 3) Sure, there's a significant risk that a GRB, supernova, colliding neutron star, etc, could sterilize earth. But the decisions are being made in the interests of existing nations, not in the interests of the survival of life as a whole. Adding human beings to the set of potentially hostile forces away from Earth is still adding to the set of potentially hostile forces that can attack Earth. The calculus of nations doesn't care if someone *ELSE* survives.

It's a virtual certainty that one or more of those natural events will happen, and would destroy existing nations along with everything else (assuming existing nations still exist by the time we get around to colonizing anyway). None of those natural events can be reasoned with, and some of them cannot be defended against without putting humans elsewhere, at least with any tech or lifestyle choice we foresee now. Humans living elsewhere can be reasoned with and defended against; therefore they are the lesser threat.

(01-17-2017, 02:36 PM)Bear Wrote: 4) Yes, eventually Earth will be destroyed. But A, that's a long time from now, and B, considering the time scales involved, the narrow-sighted little buggers probably see Earth being destroyed, as a result of releasing humans from here, in 1% or less of the time the natural destruction would take. So, from the 'Dr. Strangelove' perspective, Earth survives 100 times as long if they don't let anyone go, and the fact that this dooms humankind to extinction is merely collateral damage.

So the 'narrow-sighted little buggers' find multi-million or billion-year timescales too distant to worry about, but thousands or tens of thousands of years near enough to act on. Very specific narrow-sighted little buggers you're postulating here. :) You're also falling into the logic trap of assuming that anyone and everyone in a position of leadership is of a mindset to think this way, and that this is an inherent property of all leaders everywhere forevermore.

(01-17-2017, 02:36 PM)Bear Wrote: 5) Creating a real, definite, new existential threat in order to deal with a hypothetical and potentially nonexistent threat? Let's ask Dr. Strangelove again. The answer isn't just no, it's Hell no.

It's not a real or definite threat - it's just as hypothetical as aliens invading or destroying Earth. So why is one hypothetical more 'real' than another?

(01-17-2017, 02:36 PM)Bear Wrote: I hate this, but if you take the short-sighted, narrow view of preserving Earth and her existence over all else, the logic is pretty damn compelling. And our decision makers have always been all about short-sighted and narrow.

Firstly, an argument can be made that never leaving the Earth is likely to result in our destruction via human factors that could act much more quickly than natural ones.

Second, it is already demonstrated that natural factors can (and eventually will, unless we do something about it) destroy or severely damage our civilization and species, versus the merely potential threat of other human cultures doing so (while presuming that we would not be able to do anything about them in one form or another, which doesn't necessarily follow).

Third, the idea that decision makers have always been all about short-sighted and narrow, while popular in Western civilization for some reason, is demonstrably not true. The founders of the US were certainly trying to design something to last over a significant historical time frame. The builders of the pyramids or the Great Wall of China tackled projects that took generations, at least. Same for the builders of the great cathedrals in Europe. Probably plenty of other projects and examples elsewhere on the planet as well.

Taking the view that a particular common mindset in the here and now is the only way people can or have ever thought isn't supported by the evidence. For that matter, thinking that that mindset is the only way that current leaders think probably isn't supported by evidence either.

Todd
#6
All of these things are missing the main point.

Before allowing humans to leave: no risk from humans who have left.
After allowing humans to leave: that risk exists.

And I think that's where the laser-focused little bean-counters will stop looking.

How its severity *compares* to other risks is immaterial. Whether there are *similar* risks is immaterial. Whether it reduces some other risk is meaningful ONLY if the reduction is greater and shorter-term than the risk it introduces. When someone is talking about the risk of traffic fatalities, it's kind of silly to point out that the risk of airline fatalities is worse if people don't wear seat belts.
#7
"An argument can be made that never leaving the earth is likely to result in our destruction..."(by risks resident here on earth)

Destruction by those means - for someone who thinks of "Us" specifically and solely as the population of Earth, full stop, has exactly the same probability whether humans exist elsewhere or not. The kind of national-interest obsessed bean-counter making these decisions does not give a damn whether humans elsewhere survive because humans elsewhere have nothing to do with the national interest.
#8
(01-18-2017, 06:33 AM)Bear Wrote: All of these things are missing the main point.

I disagree. All of these things are addressing your main point in various ways. I would also point out that you aren't arguing against the points I've raised, but have instead simply dismissed them - which really isn't answering them or providing countervailing data or arguments. Anyway.

(01-18-2017, 06:33 AM)Bear Wrote: Before allowing humans to leave: no risk from humans who have left.
After allowing humans to leave: that risk exists.

And I think that's where the laser-focused little bean-counters will stop looking.

How its severity *compares* to other risks is immaterial. Whether there are *similar* risks is immaterial. Whether it reduces some other risk is meaningful ONLY if the reduction is greater and shorter-term than the risk it introduces. When someone is talking about the risk of traffic fatalities, it's kind of silly to point out that the risk of airline fatalities is worse if people don't wear seat belts.

I see some flaws in your logic:

a) You are ignoring the fact that humans do not simply assess risk in a vacuum, nor do they assess any and all risk as being an existential or infinite one without any counterbalancing benefits. Or to put it another way: You are ignoring the benefit side of the cost-benefit analysis. Presumably anyone considering sending out colonies, either to other planets or other stars, will be doing so because they feel there is a net positive to be gained by this. That there is the potential for future negative 'costs' may be considered, but if they see the benefits as being near term and/or concrete and the potential hazard as being distant and hypothetical then they are just as likely to go with the benefit and let the potential cost take care of itself - especially when that potential cost is hundreds or thousands of years in the future and is not a sure thing.

b) There are various historical precedents to support the idea that the creation of potential rivals or threats will not prevent 'bean counters' from going ahead and doing something. For a major example, consider the various colonial powers of yesteryear, in particular the British Empire. By the same argument you are making here, none of these powers should have ever risked colonizing the New World. But they did it anyway, because they saw a benefit (or benefits) in it that presumably outweighed the cost(s). And indeed, the US, Canada, India, and Australia could pose a severe challenge or threat to Great Britain if they wanted to (the situation with other former colonies varies quite a lot with regard to their former colonizers). And there have been instances when they have been enemies. But there are also many, many periods where they have been, or currently are, allies and friends (insofar as nation-states can be friends). Unless you are going to argue that laser-focused bean-counters didn't exist in the British Empire or other colonial powers of the day?

c) In my first point, I mentioned 'sure things'. Humans have a long history of doing all kinds of things that all the available data says have a high probability of being bad for them, either because they find the 'benefit' to outweigh the potential cost, or because they think the odds will work out in their favor, or for some similar reason. Whether this could be classified as 'foolishness' or 'hope', humans (including bean counters) demonstrate it all the time and have all through their history.

d) Even in cases where there is absolute recorded proof of how much of a risk something can be, humans will often go ahead and do it anyway. 9/11 demonstrated how commercial jets can destroy entire buildings and kill thousands of people. Tens of thousands of people die every year in car accidents or due to gun violence. However, the response to these things has not been to cease the use and production of commercial airliners, cars, or guns. Rather, people take steps to try to prevent or mitigate any potential downsides, mainly because they don't want to give up the various upsides/benefits that also flow from the devices or situations in question. Or they feel the 'cost' or risk of the loss of these things outweighs the cost/risk of keeping them.

e) Finally, you mention bean counters not caring about anyone not of their nation. The simple answer to that, at least for interplanetary colonies is to consider their inhabitants to be members of the nation that founded them, with all the rights thereof. As such, the bean counters would (by your own logic) care about them as they do their own citizens. Interstellar colonies would be hard pressed to consider themselves part of a founding nation in a lightspeed limited universe, but the risk they pose is also likely to be considered minimal for the same reason. Why would two different star systems want to go to war anyway?

Todd
#9
Todd: There doesn't have to be a rational reason to go to war. In fact, there almost never is, at least for the aggressor. (Going to war when someone has already gone to war with you is not only rational, but vital for national survival.)

I can think of many reasons for interstellar war. One is paranoia; "do unto others before they do it to you". Another is fanaticism of various sorts; currently, religious fanaticism is the most likely excuse for war on Earth, and fanaticism regarding political systems caused many wars in the 20th century.

Another one is accidental war. It's more complicated than this I know, but WWI was partially accidental and caused by the ever-growing accumulations of war material in all countries involved; unfortunately, such buildups are self-reinforcing. And we came very close to accidental thermonuclear war at least twice. (Poorly written software, mostly.)

Taking this to an interstellar civilisation: Without any form of FTL, making any sort of coherent interstellar society work requires relativistic speeds and the associated gigantic energies. It doesn't take much of a course change to turn a ramscoop freighter into a world-wrecking weapon. The kinetic energy of a vehicle at 0.9c is comparable to the energy content of the same mass of antimatter, and less of the energy is wasted in a collision. A 1000-ton ship at such speeds works out at tens of teratons of TNT equivalent, the same order of magnitude as the Chicxulub impact. Realistically, freighters would probably be much bigger than that; after all, they have to sustain the crew for decades.
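For the record, the energy figure is easy to sanity-check. A rough sketch in Python, counting kinetic energy only and using standard constants:

```python
import math

C = 2.998e8          # speed of light, m/s
TON_TNT = 4.184e9    # joules per ton of TNT

def ke_teratons(mass_kg, beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2, in teratons of TNT."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2 / TON_TNT / 1e12

for beta in (0.9, 0.99):
    print(f"1000-ton ship at {beta}c: {ke_teratons(1.0e6, beta):.0f} teratons TNT")
```

A 1000-ton (1e6 kg) ship comes out around 28 teratons at 0.9c; the oft-quoted ~130-teraton figure corresponds to roughly 0.99c. Either way it is a civilization-ending impactor.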

(This is old news to all here, of course, but worth re-stating I think.)

So what do you do, when you see a ramscoop drive light up at your next-door neighbour, headed your way? After all, you can't be sure the ship will make turnover at midpoint.
#10
(01-21-2017, 06:54 PM)iancampbell Wrote: Todd: There doesn't have to be a rational reason to go to war. In fact, there almost never is, at least for the aggressor. (Going to war when someone has already gone to war with you is not only rational, but vital for national survival.)

I'm not seeing how this statement is relevant to my points above. Are you responding to something specific (in which case, please point out which one it is) or making a general statement?

Regardless, the same statement can apply to any war, anywhere. As I've already pointed out, a civ limited to just this planet can have any number of methods of killing itself - some of them vastly simpler and harder to defend against than an attack from another solar system - so why is the potential for an eventual attack any different or more of an issue? Biotech or AI research or nanotech could eventually create the means to destroy humanity. Should we therefore ban all research into these fields because there is a potential risk regardless of any and all potential benefits? If we're going to go down that road, then shouldn't we ban all further tech development because we can't predict what could come out of it that could destroy us? Note that even apparently 'safe' tech can fall under this umbrella - various chemicals and the internal combustion engine were long thought to have no global scale downsides - hellooo climate change and contaminated water and such.

Of course, if we give up tech then we put ourselves at the mercy of the universe (which has none), and that can eventually destroy us just as certainly, if not more so. So: Catch-22.

(01-21-2017, 06:54 PM)iancampbell Wrote: Taking this to an interstellar civilisation: Without any form of FTL, making any sort of coherent interstellar society work requires relativistic speeds and the associated gigantic energies.

Actually most people would argue that any kind of coherent interstellar society in the mode of an Earth based nation or empire is impossible in a slower than light limited universe. OA gets around this by a combination of memetics (propaganda and marketing on steroids) and presuming that civs in the setting are much more loosely organized than any planet based civilization.

Also, if sheer speed is your main criterion, then radio or optical data links are the way to go over starships for keeping your interstellar culture at least somewhat linked together and cohesive.

It also seems unlikely that high relativistic speeds would make much of a difference. A 20-ly trip will take at least 20 years as measured at the start and end points, no matter how close to light speed a ship can make the journey. And, barring a major tech advance, relativistic speeds sufficient to produce any significant time dilation are not going to be the first thing achieved, so early ships will go considerably slower and take much longer.
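To put rough numbers on that, here is a minimal sketch assuming a constant cruise speed and ignoring the acceleration and braking phases:

```python
import math

def trip_times(distance_ly, beta):
    """Coordinate time and shipboard proper time, in years, for a
    constant-speed trip at beta * c (acceleration phases ignored)."""
    t_coord = distance_ly / beta                 # years, endpoint frame
    t_ship = t_coord * math.sqrt(1 - beta**2)    # years, ship frame
    return t_coord, t_ship

for beta in (0.5, 0.9, 0.99):
    t, tau = trip_times(20.0, beta)
    print(f"v = {beta}c: {t:.1f} yr for the endpoints, {tau:.1f} yr aboard")
```

However fast the ship goes, the endpoints still wait at least 20 years for a 20-ly trip; only the crew's elapsed time shrinks.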

As far as throwing gigantic energies around - our civ already deals in energy levels that dwarf the achievements of earlier cultures - and we also see a certain amount of death and destruction as a result. We continue to do it anyway because:

a) We find the benefits of having such energies to outweigh those costs.

b) We're used to doing it. Just as we take it for granted we can throw around X amount of energy without undue fear, future civs will presumably throw around even greater energies and not think twice about it. They will have appropriate safety protocols and such that they have developed - but they will still consider the benefits to outweigh the costs.

(01-21-2017, 06:54 PM)iancampbell Wrote: It doesn't take much of a course change to turn a ramscoop freighter into a world-wrecking weapon.

Actually, it's not that easy. Adam (inventor of our wormholes and reactionless drives) weighed in on this many years ago. At high relativistic speeds, Lorentz contraction and distortion of the star field make hitting something as small as a planet (space is very, very big) very difficult, if not impossible, if the ship is just left to its own devices. Hitting space habs would be even harder.

You could set up beacons or the like to make it easier for the ship, or have the ship go slower (which makes it easier to detect, of course), but in that case you're using a relativistic strike as a coup de grâce rather than a surprise first strike - so a different situation entirely.

(01-21-2017, 06:54 PM)iancampbell Wrote: Realistically, freighters would probably be much bigger than that; after all, they have to sustain the crew for decades.

Why would you need a crew at all? If you can build ships of this kind you surely have automation that can let them fly themselves.

(01-21-2017, 06:54 PM)iancampbell Wrote: So what do you do, when you see a ramscoop drive light up at your next-door neighbour, headed your way? After all, you can't be sure the ship will make turnover at midpoint.

Lots of options:

a) Spread your civ as widely as possible so it becomes impossible for any one group to be sure of 'getting everybody'. An accident then becomes unfortunate, but not a species- or civ-ending event.

b) Communicate as widely as possible, all the time, so secret military buildups and attacks become harder, and so that, if one does somehow take place, everyone else knows about it and can retaliate against the perpetrator.

c) Set up really good telescope and interferometer systems and watch other star systems like hawks. When you see a drive flare, keep an eye on it and confirm that it has started slowing down when it should. If it doesn't, or if it goes out, go on alert, let everyone know, and shift the orbits of your habs so they are harder to hit. Consider spreading clouds of dust and gravel around your planets, or along the presumed path of the incoming ship, to up the chance of it hitting something too big for its shields and being destroyed. Use weather machine tech on your planet to fire a multi-petawatt laser at the ship, which should do bad things to it or any incoming projectiles. There are probably other options as well.

d) Set up ships in deep space as a retaliatory force. If an attack takes place they rev up and launch toward the attacker. They are detectable in flight, but if we're postulating this kind of thing anyway, the attacker is presumably no more able to stop the incoming ships than you could. MAD is a quite sensible policy with a demonstrated track record of effectiveness, really.

Etc.

Going back to OA, Ithuriel did a lot of work on starship drives a year or two ago, and one of the things he concluded is that high acceleration sustained for long periods runs into problems even with amat or conversion systems. So starships may be more likely to accelerate at fractions of a G and simply keep that up for a long, long time, rather than rapidly boosting up to high speed or boosting constantly at 1G for a whole interstellar trip. Waste heat issues seem to put a block on this kind of thing.
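As a rough illustration of that kind of profile (a sketch assuming constant proper acceleration from rest, and ignoring fuel mass and the waste heat problem itself):

```python
import math

C = 2.998e8   # speed of light, m/s
G = 9.81      # standard gravity, m/s^2

def years_to_beta(accel_g, beta):
    """Observer-frame years to reach beta * c from rest under constant
    proper acceleration accel_g (in g): t = (c/a) * beta * gamma."""
    a = accel_g * G
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    t_seconds = (C / a) * beta * gamma
    return t_seconds / (365.25 * 24 * 3600)

for a_g in (1.0, 0.1, 0.05):
    print(f"{a_g} g sustained -> {years_to_beta(a_g, 0.9):.0f} yr to reach 0.9c")
```

Even a sustained 0.05 g reaches 0.9c in about four decades of observer time, so a low-thrust, long-burn design still gets to relativistic speed eventually - it just spends much more of the trip accelerating.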

Todd

