Frequently Asked Questions

Questions - Answering Criticisms

I suspect things could change as much as OA depicts in just 1000 years, not 1000 decades, and maybe even 1000 months if my nano-hopes pan out.

Quite possibly. And many members of the OA creative team would be thrilled if this came to pass. However, it would make writing a long-term SF scenario very difficult.

Why do you accept AI as a foregone conclusion, despite the objections of many philosophers and psychologists?

This is a very controversial issue, with seemingly equal numbers in support and in opposition. We happen to stand on the side of supporting the concept of artificial intelligence. However, if you have any solid evidence that we are wrong, please feel free to present it for discussion.

Isn't the idea of humans worshipping AI Gods blasphemous?

Only if you find it so. Remember, OA does not make any metaphysical assumptions one way or the other. There may be a Supreme God above the Archailects. Or there may not. It is up to the individual to decide.

Why would you want to build a sentient evolving AI that would inevitably supplant you?

In the early timeline, sentient AIs are not built to supplant humanity. In fact for a long time they work together with humans. But over the centuries a small number become more powerful, and evolve superhuman intellects. There are also superhuman posthumans, and the highest archailects have transcended the limitations of both human and artificial intelligence.

Can humans design and build transapient, superhuman AI?

Although transhumanists are still debating this one, within the Orion's Arm universe only turingrade, or human-equivalent, AIs have ever been built by humans. The transapient AIs evolved from these early sub-transapient AIs.

Your future is based on hard science accumulated by human intelligence alone up to the present. How is it that the laws of physics as currently known (by baseline humans) will have any merit 10,000 years in the future?

Although physics will doubtless be greatly refined over the coming centuries and millennia, we do feel that what has been discovered up until now will still have validity, even if only as generalizations. Newtonian physics still has as much validity today as it ever did, even after almost a century of quantum physics and relativity.

Don't you think that nanotechnology is a tired cliché in science fiction?

Are fighter planes a cliché of World War II? There are many clichés in science fiction. The problem we have in a hard science fiction universe is that many of those clichés are very real and useful tools: fusion power, lasers, terraforming, robotics, spacecraft, and alien life all come to mind.

Here at Orion's Arm we try not to concern ourselves with how clichéd an idea is; instead we try to determine whether the idea is feasible within current scientific understanding. As often as not you may find that those old clichés get a new twist within our project as well.

How can you be "hard science" and yet have AI Gods?

There is no contradiction here. The "AI Gods" are not supernatural entities. They are physical beings (whether AI or posthuman) that have evolved so far beyond the human condition as to appear godlike to non-transapients.

How can you be "hard sci-fi" and yet have speculative science like wormholes and assembler nanotechnology?

We use the term "hard science" or "hard sci-fi" as a convenient label, but we do not claim to be 'diamond hard'. Rather, we aim for something that might be called "radical hard science fiction". We are willing to include speculative science, such as wormholes and nano-fabrication. You can think of these features as placeholders, or as technology that demonstrates the capabilities of the superhuman intelligences of the Orion's Arm universe. Even so, these speculative facets of the setting must have a solid foundation in real science and mathematics. As real-life discoveries come along we will alter these speculative pieces to bring them into line with the new information (something we have already done more than once).

It should also be noted that OA is by no means alone in this regard. Stories as diverse as Larry Niven's Ringworld, Stephen Baxter's Xeelee Sequence novels, and Alastair Reynolds' Revelation Space series have all been classified as 'hard sci-fi' at one time or another, depending on who you ask and how you define 'hard sci-fi'.

How do you know the things you describe will even happen?

The Orion's Arm Universe Project is a work of speculative fiction. We have done our best to get the science right, just as a good detective writer will try to learn everything about forensics, police work, and so on. They, and we, do this in order to have a much more authentic and realistic setting. This does not mean it is anything but a form of storytelling, albeit one with some interesting subtexts!

What dramatic stories can be told about humans who are dependent on transapient benefactors?

If you can't find anything dramatic to write about the ordinary citizen of the Sephirotic utopias, that's fine; then don't. Write about the baseline supremacist. Write about the hider at the edge of the solar system. Write about the modosophonts who do have challenging and dangerous jobs.

Realize, however, that the transapients are running the show. So the baseline supremacist's anger is going to be redirected towards targets of the transapients' choice, so subtly that they won't realize what's going on. The hider is known and watched if there's any chance of a threat, and the useful modo's job is only useful within the context the transapients have created (much like modern society, actually).

In many ways, the Sephirotics are a form of hell if you believe humans must be supreme, but 99.999...% of the sapient inhabitants of those places do not see it that way. Which brings up the question: if they think they're in heaven, are they?

In Orion's Arm and transhuman scifi in general, are human beings obsolete?

Humans obsolete? No, not at all! In the Orion's Arm scenario the majority of sophont beings are at roughly the same level of intelligence as you and I.

One can provide an analogy to explain how lesser intelligences will continue even after the emergence of much greater Minds.

Consider the humble bacterium, a prokaryote cell. With the evolution of the eukaryote cell during the Proterozoic era, one might think the bacterium would have been rendered obsolete. But in fact eukaryote environments provided even more opportunities for prokaryotes. It is thought that something like 80% of the trophic exchange in the oceans today is prokaryote-only.

That is why in the Orion's Arm universe near-baselines and equivalents are still the majority. I for one don't think human beings will ever be obsolete, although the ecology in which humanity finds itself will certainly change! The basic theme of Orion's Arm is that the galaxy is run by superhuman, post-singularity entities; humankind still exists, but is no longer top dog. But that is part of the whole poignancy of the setting!

Doesn't this setting have the ruling powers modifying the human species so that it stops striving to better itself and instead strives only to pleasure itself?

No. Certainly there are some empires and polities where blatant hedonism rules. But there are others where all non-transapients are encouraged and helped to augment and elevate themselves to higher states. There are still others where both influences can be found.

Your extrapolation of current human knowledge is linear, when in fact it should be exponential, since the whole concept of a technological singularity is based on exponential growth - something the human intellect was never good at predicting.

Several points here. First, we do have exponential knowledge curves. These are established through the series of breakthroughs and toposophic ascents by AIs and other posthuman minds. However, and here is the point of difference, we presume that these breakthroughs take place for individual minds or small groups of minds, not for the human race as a whole. Hence the situation in which ordinary sapients find themselves, and the technology they use, remains roughly constant.

In opting for this scenario we have also deliberately chosen an alternative future to that suggested by others, such as Ray Kurzweil. This is not to say that one is wrong and the other right. We are storytellers, and for the sake of a good story we have assumed a different future from the optimistic singularitarian scenario. Now, some of the contributors to the OAUP consider Dr. Kurzweil's timeline rather over-optimistic, while some see him as spot on. But were a mass techno-rapture or ascension to take place in the next century, there could be no space opera, hard science or otherwise, and hence no Orion's Arm Universe Project!

Surely government by transingularity beings means loss of freedoms for those sentients under them?

Actually, there is no objective reason to think that this would always be the case. In some places, particularly those ruled by ahuman AIs, a loss of freedom may indeed be the best that can be hoped for. However, in other places within the setting that is very much not the case. The majority of the setting takes place in an area where the ruling transapients are benign and grant a much greater degree of freedom to their charges than anything we currently see in the real world.

Isn't the emphasis on memetics misplaced, and memetics simply a bad metaphor?

Not necessarily. Memetics describes the way that ideas are carried down through social conditioning, the mass media, and so on. To say something is a meme doesn't mean it is true or false; General Relativity, for example, is a meme, but it can also be seen as a very plausible explanation of how the universe works.

Isn't a setting with humankind beneath (and subject to) Transapient beings profoundly anti-humanist?

It depends on your definition of humanism. Yes, Orion's Arm does present an anti-humanist vision of the future, in the sense that we humans are no longer the top dogs in the universe. This is deliberate, because this scenario is intended as an alternative to the comfortable, politically correct futures of both classical and modern science fiction, with their visions of baseline human supremacy. It is a vast and brooding noir setting, but it also has hope and joy. Life isn't too bad under the Archailects; in fact, they are better than any leader, or system of government, we have today. Remember, there are always sophonts who will not be satisfied with their lot. So what happens to the human spirit under those conditions? Do they submit, or risk all in blazing a trail for themselves in unknown conditions outside the autotopias? The universe becomes a very interesting place, and a very challenging one to write about.

It seems that many people who have added extensively to this setting don't see it so much as fiction but as near prophecy.

Any science fiction is partly prophetic, but we are not trying to predict the future, just worldbuilding on a bigger scale. The real future will certainly be very different from Orion's Arm in detail, but probably similar in complexity.

Who knows, some sector of a future noosphere might be dedicated to an Orion's Arm virtual universe. Wouldn't that be an interesting self-fulfilling prophecy?

You seem to believe you are better than other franchises, with an attitude of elitism.

Although we certainly don't mean to come across this way, we are very enthusiastic about the project.

The OAUP is not elitist, nor do we claim to be better than other franchises. We freely admit to enjoying other franchises and even drawing inspiration from them. Everyone is invited to participate, but we do ask that contributions be set within the parameters of this setting.

Isn't speculating on a post-singularity future a contradiction in terms?

While Vernor Vinge proposed a technological singularity as the point after which no reasonable guess about the future can be made, this does not mean we cannot or should not speculate on, or imagine, a future history following the singularity. Just the opposite in fact; such a scenario opens up an incredibly rich world of possibilities!

In Orion's Arm we have made some reasonable guesses regarding a post-Singularity setting. Whether we are right or wrong is another thing, but we're not here to predict the future; we're here to have fun.

The societies you present are dictated by human organizational behavior, as is evident in the religions and economics of such societies. This suggests that the very foundation of the concepts of Orion's Arm is moot, since they are all baseline-human based.

This is a very good point. But remember that Orion's Arm is a "baselinocentric" scenario. Although higher powers exist and possess unimaginable intelligence and power, everything is still seen through the eyes of ordinary sapients. At this level the current social structure remains the same, or largely unchanged. Regarding the higher toposophic (singularity-level) minds, it is harder to predict: economics might well apply even at higher toposophic levels (just as biology applies to rational humans as much as to instinct-driven animals), or it might apply only in some empires (such as the NoCoZo), or perhaps the whole thing is just a game, or something the ruling AIs create to make humans feel comfortable!

It seems that all OA is really about is the Archailects who are running things.

Look at the various stories in the short story section and you will see that it is very much about the human dimension.

I find a setting with humans as, at best, pampered pets and, at worst, vermin grabbing for scraps under their transapient or Archailect masters not at all appealing.

It is not true that humans and other sapients are reduced to vermin in the Orion's Arm scenario. Yes, it is true that Orion's Arm doesn't have the comfort factor of traditional space opera, which is always centered on baseline human/humanoid superiority, or else on equality with an alien race they are at war with and in the end vanquish. This is a deliberate choice on our part. We wanted to create something new and different. In describing a universe where humans are superseded we have tried to strike a balance between optimism and pessimism, and at the same time present an alternative to traditional soft sci-fi space opera.

How can you claim to be scientifically plausible, and then make wild propositions about god-like Archailects and the rest?

OA is purely a work of science fiction, in the grand tradition of Asimov, Clarke, Niven, Banks, Baxter, Benford, Bear, and other masters of the hard science fiction genre.

OA starts from a certain set of initial assumptions (in this case assumptions based on current scientific developments) and then projects them forward in what we hope is a logical manner and to a logical conclusion. The future may be nothing like what we describe here. Or it may be very much like what we describe here. At this point we just don't know.

What makes you think AI is even possible (or if it is, is easily attainable)?

As of the time of this writing, there has not yet been an AI of human-equivalent sentience, and some philosophers doubt such a thing is even possible. Others say that human-equivalent AI is possible, and will occur in the next few decades. The theory that fully sentient AIs are possible is known as the Hard AI Hypothesis. This question will surely be resolved within our lifetimes. In the Orion's Arm scenario we assume that the Hard AI Hypothesis is valid.

How do you know that nanotechnology (apart from biological organisms) is even possible?

Well, until the first workable molecular assembler comes along, this will remain a controversial question. However, there are some very smart people working very hard on this very issue; they make an incredibly persuasive case in favor, and have already achieved some preliminary successes. We do not claim that nanotechnology will be perfect or work flawlessly; it is just one more technology, albeit a very powerful one.

A reactionless drive is silly; it's something for nothing.

Not really. Our member physicists have very carefully developed the reactionless drives found within the OAUP. Each comes with its own idiosyncrasies and limitations, and they are definitely not something for nothing.

The idea of reactionless drives, and other metric engineering, also represents a form of placeholder technology within the setting. This does not mean we will not allow debate on the issue; just come prepared, because we've done the math.

If the AIs and other transapient powers are in control, why doesn't humanity rebel and regain its freedom?

This question, inspired perhaps by popular movies, is based on three assumptions:
  1. That the AIs and other transapient powers are malignant and wish to enslave or exterminate humanity. Nothing could be further from the truth, at least in the "Civilized Galaxy" portions of the setting. Most of the AIs have a pragmatic, constructive, and in many cases even a kind and considerate attitude to the sentient beings under their care. They treat humans and other sapient beings better than those same persons and groups would treat each other. Compared to today's world, the civilized galaxy portrayed here is a utopia.
  2. That the mass of humanity would even want to rebel. In fact, apart from a number of outsider, paranoid, and hider cultures and individuals, and the Homo sapiens supremacists and other extremist groups, most sophonts are quite happy living under the benign reign of the transapients.
  3. That sub-singularity beings like normal humans could overthrow the transapients, even if they wanted to. Can you imagine all the domestic pets of the world, perhaps aided by their feral compatriots, rising up to overthrow humanity and set up their own empire? It is the same situation here.

Why does the OAUP claim, as if it were certain, that we will allow ourselves to be ruled by machines? What is to prevent a system for restraining these AIs against rebellion, or a social bias against cloning?

Actually, in the early part of the OA timeline there is evidence that such restraint systems were tried and either failed or were not used consistently. For example, it is mentioned that the early transapient AIs hid themselves, or their true nature, from the human-controlled civilization around them until they were ready to act openly. Also, early robot intelligences in the setting (known as "vecs", after the AI and robotics researcher Hans Moravec) were often enslaved, but were able to escape in some cases. Finally, different human cultures within the early setting eventually developed the idea that human-equivalent AIs and robots should have civil rights just like any other human-level intelligence. So any such restraints the AIs in those areas had would have eventually been removed.

What about articles in Scientific American refuting the possibility of assembler-type nanotechnology?

As for whether or not full Drexlerian nano, including working assemblers, is possible: there has never yet been a serious refutation of the work of Dr. Drexler and his coworkers. An attempt to debunk nanotechnology in Scientific American was answered with a powerful reply, which ultimately caused the editors of Scientific American to back down. Responses to both of the Scientific American articles from September 2001, as well as the entire history of exchanges between Foresight and Scientific American going back to 1996, are available online; the latter also includes some interesting references to apparent inconsistencies between Scientific American's positions and its advertising and blurbs in other articles.

On the Foresight Institute website one can find a plethora of articles and information on nanotech, including the online versions of Engines of Creation, Unbounding the Future, and Nanomedicine Vol. 1. It also provides the option of purchasing Nanosystems online, if you want the relevant arguments with all the math.

In saying all this, we in no way wish to disrespect the otherwise superb and very informative Scientific American, which is read and enjoyed by a number of us here.

This site seems very derivative. You take ideas like singularity, copy, splice, and others directly from other much more original universes created by a variety of authors.

While the OAUP does borrow from many, it should be noted that many of these other 'original universes' borrow from others as well. We prefer to think of it as paying homage to the brilliance of these other authors. And we like to think that as time goes by we have been able to add an ever increasing number of completely original ideas to the setting.

What difference is there between having Terragen-derived Archai running the show, and having god-like alien intelligences or literal supernatural intelligences doing the same thing?

Although the archetypal narrative is indeed much the same in each case, we have tried here to make it something that actually seems plausible within the hard-science context and parameters that we have established for the setting.

Your 'transapients' don't impress me at all; we constantly hear how very superior they are in thought, but we are given almost no benchmarks. It reads like propaganda.

The transapients do bring us a number of technologies that might be considered benchmarks. We have wormholes, reactionless drives, and magmatter as distinctive examples of transapient technology. Other technologies hinted at, but not playing a direct role in the scenario, are artificial universes and inter-brane travel to other, already existing, universes.

Over time we are also developing an increasing number of examples of transapient mental superiority. But this is an area that is slow to develop, as we are not transapients ourselves, and we also do not generally want to describe transapients as just "bigger and better" humans with nothing else distinctive about them. OA transapients are not just very smart humans. They are something distinctly "other", and we want to describe them that way. But this is not easy. Give us time. Better yet, help us out.

It seems that the OA setting underestimates humans. The analogy of the relationship between humans and transapients being like the relationship between amoebas and humans isn't applicable, because humans are intelligent and capable of technology.

Within the Orion's Arm scenario we assume that evolution will continue. This leads to a world where our mind children have advanced so far beyond what we are today that the difference compares to our own advancement beyond an Archean microbe.

Can a chimpanzee use a computer? More to the point, can a chimpanzee build and program a computer? Why should current humanity be the final point of evolution, and the human rational mind remain the most potent thing in the universe?

Within the scenario, transapients have abilities to process information and conceptualize that dwarf those of normal humans. As an example, let's take a pile of sticks. A chimp can take the pile of sticks and do a few interesting things with them: defend itself, poke a termite nest for food, and so on. A human could take that same pile of sticks and build a city, an act simply beyond the conceptual abilities of the chimp. Similar, but magnified, differences will exist between humans and transapients. Humans may be able to utilize nanotechnology, for example, but transapients can develop new uses for that same technology that are simply beyond the conceptual abilities of non-transapients.

Why does it seem you have watered down normal humans to make the advanced capabilities of the transapients manageable?

Normal humans, or what we refer to as baseline humans, are by no means watered down as you suggest. They are smarter, healthier, longer-lived, and more capable than anyone currently alive. Genetic engineering has removed all the hereditary defects we suffer from today, and technology provides them with a vast number of tools we can only dream about. That said, the baseline human is a minority within the Orion's Arm scenario; most of the populace would be what we call either nearbaselines or superiors.

As far as making the advanced capabilities of the transapients 'manageable' goes, this is certainly not our intent. However, as we work to describe transapient abilities we also want to be careful to remain within the constraints of the setting as we have described them. That is to say, our transapients are still limited by the laws of physics; they cannot simply make things happen by willing it, or build 'black box' devices that flat out defy everything that even the most theoretical real-world physics says might be possible (the two most common SF methods for describing superhuman abilities). Finally, while it might be possible (as well as easier) to describe our transapients only in terms of 'black box' devices that do amazing (but scientifically plausible) things without any actual explanation of how they work, as a group we find such methods somewhat unsatisfying. Often it's much more fun to work out the details!

Over time we will continue to work to expand our descriptions of transapient capabilities and to make those abilities seem more superhuman. Give us time.

What is the good of describing your setting as a "Space Opera" if you leave out all the fun space opera elements like FTL and humanoid aliens?

Classic space opera requires little, if any, scientific rigor. This is the type of space opera found in the works of E.E. 'Doc' Smith, the Golden Age pulp science fiction writers, and even in films like Buck Rogers and Flash Gordon. During the late 1980s a new form of space opera began to emerge, as authors like Iain M. Banks, Peter Hamilton, and Alastair Reynolds began exploring space opera as a more respectable genre, one we might call intelligent space opera.

One of the primary goals of the Orion's Arm Universe Project is to present an authentic, hard science fiction setting that still has all the dynamic adventure of classic space opera.

If the AI Gods are so advanced why would they bother taking up the roles of gods in the first place?

There are a number of (overlapping) answers to this question:
  • The Archai may find religion and worship an efficient form of memetic engineering and manipulation.
  • The Archai may genuinely want to help those beings beneath them.
  • Often god-status and worship is not intended, but is simply the mythology that humans (and other sapient beings) project onto them. Not all Archai take on the role of god or overlord. Many in fact want nothing to do with humans and other lower toposophic entities.
  • Even if only relatively few Archai appear godlike, they will still be the most important archailects in the setting, because they have the greatest impact on the lives of lesser sentients. Remember, Orion's Arm is told from an anthropocentric perspective!

Why do you arbitrarily accept some ideas without justification, and refuse others?

While we do try to always give reasons for including or excluding a particular concept, if you find any examples where we have not done so, please contact us so we can correct the situation.