The Orion's Arm Universe Project Forums

Hi, a timeline question, and some math about encryption
Hi everyone. I've been reading Orion's Arm avidly for several years, and decided it was getting to the point where I should contribute.

I have a number of ideas for contributions; however, most of them fall into the category of "What happens next?", i.e. they're set after 10,600 A.T. (some only a few centuries after, others up to a couple of millennia after). I assume this question will have come up before, and I'm guessing there's an official policy on it -- if so, where would I find it? Are people open to gradually (or at some point) pushing the date forward and writing post-10,600 material (either by revising the Encyclopedia to update it to a later date, or maybe in a separate section)? Or is there a policy that this will never happen?

On a pre-10,600 topic, what is the Orion's Arm position on P != NP versus P = NP? To add a little more detail: of the five currently-considered-plausible worlds outlined in Impagliazzo's five-worlds paper on average-case complexity, which one is canonical for OA? If it were Algorithmica (i.e. P = NP), or the uniform Heuristica scenario described there (i.e. P ~= NP in practically all cases you're ever likely to care about), then transapients should be able to efficiently do even more astounding things than they are seen doing in Orion's Arm: trivially prove anything that can be proven, optimize anything, or crack any code -- and it's unclear why they'd need astronomical amounts of computronium to do it, since under this scenario generating any proof isn't significantly harder than checking a proof, finding the optimal solution to a set of constraints isn't much harder than measuring how optimal it is, and cracking any code isn't much harder than decrypting it with the key. So I think those scenarios would break the setting. Modosophonts and apparently even low transapients in OA are still using encryption (I haven't seen anything explicit about whether this includes public-key encryption or indistinguishability obfuscation), so presumably the modosophonts believe (or hope) they're in the Minicrypt or Cryptomania scenarios. But the existence of Omega Keys suggests they might be wrong (unless those work by timing attacks, breaking software security, messing with the opponent's pseudorandom number generation, access to some sort of PSPACE Tipler oracle based on closed timelike curves, or something else other than polynomial-time cryptanalysis). I'm wondering if it's actually a variant of Heuristica (not described in the paper, but I believe actually a possibility) where instances of certain NP-complete problems drawn from certain distributions can on average be solved efficiently, and (most) others can't.
I believe this is possible, since the "all NP-complete problems are polynomially equivalent to each other" trick works only for worst-case running time, not for average-case running time over a specified distribution: the mappings between problems will generally transform an 'average' (for some distribution) instance of one NP-complete problem into a very contrived and non-typical instance of another (i.e. into a very contrived and artificial-looking distribution that only produces instances in a subset of measure tending to 0 on the problem space). So there might be some distributions over the instances of some NP-complete problems where a typical case can be polynomially solved by an S0, others that can only be polynomially solved by algorithms comprehensible only to an S1 (though an S0 could 'turn the handle' on an algorithm they had been given by an S1 even if it made no sense to them -- this is transapientech, not clarketech), others that need an S2, and so forth. There are thousands of different NP-complete problems already known (and doubtless many more by the OA period), each with many possible distributions across its problem space, so the landscape of what progressively became possible, and what still wasn't possible, at each S-level could be extremely complex. Still, if too many things become possible at S6, you get back to the problem described above with generic Heuristica, where just about everything is possible: it gives the archailects what feel like intellectual superpowers, and I suspect it would break the OA setting. So we would need that, even at S6, the majority of distributions likely to come up in practice, over the majority of NP-complete problems likely to come up in practice, remain unsolvable.
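The check-versus-find asymmetry underlying all of the above can be made concrete with a toy NP-complete problem. Here's a minimal Python sketch (the function names and the tiny instance are mine, purely illustrative): verifying a certificate for Subset Sum takes time linear in the certificate, while the only provably correct general-purpose way we currently have to find one is exponential brute force over all subsets.

```python
from itertools import chain, combinations

def verify(nums, target, indices):
    # Checking a certificate: polynomial (here linear) in its size.
    return sum(nums[i] for i in indices) == target

def solve(nums, target):
    # Finding a certificate: best known general method is brute force
    # over all 2^n subsets -- exponentially harder than checking one.
    n = len(nums)
    all_subsets = chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1))
    for subset in all_subsets:
        if verify(nums, target, subset):
            return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)                    # indices whose values sum to 9
assert cert is not None and verify(nums, 9, cert)
assert solve(nums, 100) is None          # all six values sum to only 60
```

In Algorithmica, `solve` would run in time polynomial in `len(nums)` for every instance; in the Heuristica variant discussed above, it would be fast for typical instances of some distributions but not others, depending on who designed the algorithm.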

What I'm less sure of is what that possibility would mean for the existence of the one-way functions needed for encryption. Obviously, if a purportedly one-way function were based on an NP-complete problem whose average case was not solvable by any algorithm designable by or comprehensible to an S0, but was solvable by an S2-comprehensible algorithm, and if this fact (that the average case could be solved, as opposed to how to solve it) was also not provable by an S0, then a) it's not really a one-way function, but the S0 won't know this unless someone of higher toposophic level tells them, and b) you'd have a cryptosystem based on it that an S0 might be willing (if suitably memed) to use, and that no S0 could crack but an S2 could -- which would produce an effect a lot like that described in OA under Omega Keys and Omega encryption. However, that paper references another paper, by Levin, that claims to describe a 'complete' one-way function -- a function that is guaranteed to be one-way if any function is. (Admittedly, this function seems to be a Turing machine lightly disguised as a tile-matching problem, running some mix of other problems -- I'm unclear on how it improves over 'compose all possible candidate one-way functions, and if any of them is one-way then the result is', and I'm also not clear whether the corresponding cryptosystem would be workable; but then I'm not a computational complexity expert or a cryptographer.) So that suggests that even an S0.3 can design an optimal one-way function, uninvertible even by an S6 if anything is uninvertible by an S6. Given that, plus careful automated proof checking of the relevant cryptographic protocols, an S0 might be able to build an Omega-7 code. So tricking modosophonts into using not-really-but-apparently-one-way functions might be hard.

Also, if it's true that most 'common' distributions across most 'common' NP-complete problems are on average not soluble even by an S6 (as suggested above in order to avoid the disadvantages of 'generic Heuristica'), then stumbling at random across a one-way function strong enough for an Omega-7+ code should be easy, and if you compose enough apparently-one-way-to-an-S0 functions in series, the odds are good that at least one of them really is one-way even against an S6 (so the combination of them is). Which (in the absence of active memetics that works even on Cyberians) suggests that modosophonts should generally have Omega-7+-grade encryption systems that are unbreakable even by an S6 (assuming the key size is big enough, which is pretty easy to get right). Which in turn suggests that the Omega Keys do something other than just crack crypto-algorithms in polynomial time. In which case we might as well just put OA in Minicrypt or Cryptomania.
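The "compose enough candidates in series" idea can be sketched with nothing but the Python standard library. This is a toy, not a real cipher: SHA-256 run in counter mode stands in for each "apparently one-way" keystream generator, and all the names here are mine. The structural point is the one made above: with independent keys, recovering the plaintext from a cascade of XOR-stream layers is at least as hard as breaking its strongest single layer.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Stands in for one
    # candidate "apparently one-way" primitive; not a recommendation
    # of this construction for real-world use.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cascade(keys, data: bytes) -> bytes:
    # Compose several independently keyed layers in series. Since each
    # layer is an XOR, applying the cascade twice is the identity, so
    # the same function both encrypts and decrypts.
    for k in keys:
        data = xor(data, keystream(k, len(data)))
    return data

keys = [b"layer-one-key", b"layer-two-key", b"layer-three-key"]
msg = b"omega grade secret"
ct = cascade(keys, msg)
assert cascade(keys, ct) == msg   # round-trips exactly
```

If even one of the three keystreams is genuinely unpredictable to the attacker, the combined pad is too -- which is the series-composition argument in the paragraph above.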
Welcome to the forum! I can't comment on the crypto question at the moment (without doing some thinking about canon at least) but our policy regarding the timeline is to keep the most advanced date at 10,600 a.t. We haven't completely sworn off advancing it, but it would be a big decision and there would have to be a good reason to do it. Is there a particular reason why your ideas require a move in the current era date?

Feel free to join us on our discord channel:
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
Hi There - Welcome to OA!

As Rynn says, adjusting the current convention about the 'present day' of the OA timeline would be a potentially major decision that would need a good reason and some amount of discussion. While the policy around that is not necessarily 'set in magmatter and shall not be changed for any reason' (we don't really have a whole lot of that kind of thing) - it is something that we would be unlikely to change on a whim. That said, if someone (possibly yourself) were to present us with some really compelling reasons to make such a change, such that our imaginations and enthusiasm were turned toward such a project, we might take it on.

Alternatively, we might also look at ways to fit your ideas into the setting in ways that don't make major changes to the current set up. Not to toot our own horn, but we have a lot of experience with that kind of thing. I obviously can't make any promises at this early point in the discussion, but if you can give us some examples of the sort of things you have in mind, we can certainly give them some consideration and discussion, whether in terms of 'fitting them in with a few minor adjustments' or even adjusting the OA dating itself.

Regarding your question/thoughts on encryption - I have some (very much not an expert) thoughts on this, but don't have time to go into them now (taking a quick break from work atm). In a nutshell, I can say the following:

a) Historically OA hasn't done much with the idea of encryption, for the simple reason that it is not an area that any of our members have ever indicated they know much about (for all I know we have dozens of professional encryption experts among the membership, but if so they aren't talking!). If the area of encryption is of interest to you and something you would like to contribute to the setting, we would be very interested in that.

b) If, as part of such contributions, it became necessary to retcon the article on the Omega Key, that is quite doable (speaking as both one of 'the management' and the co-author of the article here).

c) Regarding the details of what kind of setting OA is (or might want to be) re the encryption scenarios you mention, I don't have enough info/knowledge/time to say atm, but I will aim to look at this in more depth and bounce it up against our current Canon (and probably ask you some questions to help me wrap my head around this before we're done). Off the top of my head, we have one story - Dragon's Teeth, by Adam Getchell - that refers to an encryption device of sorts toward the end of the story. It is described as something that even the S6 cannot break - at least in this universe. If you'd like to give it a read-through - either the whole story or just the latter sections - to see how this might fit with the scenarios you present, that might be a start toward a discussion on our 'current state' in this area.

d) Re the archai being able to do even more superhuman things with certain types of encryption scenarios/abilities - do tell us more - we are always interested in making the archai even more superhuman in a real world/real science based way.

If you have any questions or concerns regarding any of the above, please don't hesitate to post them. Looking forward to what sounds like it could be some very interesting and productive future discussions.

Hope this helps and once again: Welcome to OA!

A limitation of any encryption scheme is that it assumes that whatever is being encrypted isn't available through some other pathway. Unfortunately for lesser intelligences in OA, higher toposophic entities can directly access their thoughts via DNI or remote sensing, and indirectly by creating simulations of them. However, I suspect that various scenarios can be constructed in which those workarounds would be extremely difficult.
There are a couple of problems with the Algorithmica (i.e. P=NP) scenario from an OA point of view:

1) While it's not outside the bounds of things thought possible by some contemporary scientists, the last time a survey was done the best guess of ~95% of mathematicians working in the area was that P < NP (this is the worst-case question, i.e. whether we're in Algorithmica -- the average-case question, i.e. Heuristica, is much more open).

2) The things that Algorithmica (P=NP) would let you do, if it were true, are pretty amazing, bordering on implausible to the average reader. It makes things very easy that most people would intuitively expect to be hard even for a transapient. It means that there is a single key algorithm that gives anyone with access to it a general way to solve any instance of some particular NP-complete problem (let's say it's the Traveling Salesman problem) in an amount of time that increases only polynomially with the size N of that instance (for that problem, the number of cities). While 'polynomially' doesn't rule out the possibility that it would scale as N^100, in our experience so far most polynomial algorithms (once well optimized) tend to run as small powers like N, N^2, or N^3 -- which would mean that a cluster-brain could use it for astronomically large N, and even a computer run by a modosophont could handle N in the thousands or millions, i.e. usefully large problems. Combined with simple methods for converting an instance of any other NP-complete problem into an instance of the Traveling Salesman problem while increasing N only polynomially (generally already known now in 2018), that lets anyone with access to this key algorithm have the intellectual superpowers described above. Even if we assume that discovering or understanding this algorithm requires some specific level of transapience (say S3), it's very hard to explain why it couldn't just be turned into a library runnable on a modosophont's computer, say as part of their exoself -- just running the algorithm is going to involve pressing "go", not intricate translogic. I.e. it's ultratech, not transapientech or clarketech.
Any modosophont with access to this then has no need to, for example, do mathematics: just wire the key algorithm up to a simple automated formal proof checker that can distinguish a correct formal proof from an incorrect one (tech we have now), encode the structure of the mathematical theorem you want to prove and a desired proof size as an algorithm (basic programming work), convert that to an instance of the Traveling Salesman problem using a compiler (again, tech we have now), and run the algorithm: if any proof of the theorem no longer than the maximum size you asked for is possible, the key algorithm will spit it out. If not, it won't (whether because your theorem is false or because the shortest proof is longer than your limit -- and it won't tell you which): so if that happens, try again with a bigger maximum size and run the key algorithm longer or on more computronium, or try asking for a disproof of the theorem instead.

So most intellectual endeavors that people expect to require creativity and deep thought -- mathematics, engineering, logistics, basically anything where, given a solution, recognizing that it is in fact a solution could be automated -- then turn into a matter of creating an algorithmic description of your problem and of how to recognize a solution if you're handed one, recoding that algorithm as a (highly contrived) instance of the Traveling Salesman problem (using a compiler), handing that to the key algorithm, and then leaving it to run until it spits out an answer, or fails to. No ingenuity or deep thought required -- if you can fully describe a problem, solving it becomes trivial, even for a modosophont.
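That pipeline -- describe the problem, describe how to recognize a solution, hand the pair to the key algorithm -- can be written down generically. A hypothetical Python sketch (all names mine): `np_search` is the slot the key algorithm would fill; today the only provably correct general filler for that slot is brute-force enumeration, which is exactly what P=NP would make unnecessary.

```python
def np_search(check, candidates):
    # Generic NP search. In Impagliazzo's Algorithmica this whole loop
    # would be replaced by a single polynomial-time "key algorithm";
    # brute force is all anyone provably has today.
    for cert in candidates:
        if check(cert):
            return cert
    return None

# Toy "theorem": 8051 is composite. A certificate is a nontrivial
# factor, and checking one is trivial -- the P =? NP question is
# whether finding it is ever more than polynomially harder than
# checking it.
n = 8051
check = lambda d: 1 < d < n and n % d == 0
factor = np_search(check, range(2, n))
assert factor == 83 and n % factor == 0   # 8051 = 83 * 97
```

The proof-search version described above has exactly this shape: `check` becomes "run the formal proof checker on this candidate proof", and `candidates` becomes "all strings up to the chosen maximum proof size".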

That doesn't sound like OA to me: as ultratech goes, it's too good, and it makes the vast brooding hyperbrains look, well, redundant, at least as long as the key algorithm scales as N or N^2 or N^3. Or, if you decide "the key algorithm exists, but it's polynomial with O(N^10)" -- which is hard to justify -- then maybe modosophonts with access to some fast computronium can use it for N=10, S1's for N=33, and S6's for N=10,000 -- in which case it's an amusing little toy with no practical application to any real-world problem of any significant size unless you're an archailect, or at least have access to an archailect-sized bank of computronium. (I'm assuming here that the key algorithm can be parallelized -- generally not the case for algorithms with high polynomial powers like O(N^10) -- otherwise what matters isn't how much computronium you have but how fast a small package of it runs -- and that the algorithm requires say O(N^10) time but no worse than say O(N^2) space. If it also scales as O(N^10) in space, you get much the same results as for the parallelizable case, only now it's slow even on magmatter-based hardware.)

About the closest I can get to an OA feel is if you say that the running time or space requirement of the key algorithm is around O(N^5) -- so a modosophont can use it up to about N=100, an S1 up to maybe N=1,000 (still a bit short for most proofs -- they typically have more than a thousand symbols in them), and an S6 up to N=100,000,000 (more than adequate for most proofs). However, this still leaves another problem from an OA point of view, since it doesn't suffer from toposophic barriers: if you're a modosophont with access to von Neumann nanotech (i.e. if you're not a prim or a ludd), then you can easily turn any unused gas giant into a simply organized bank of computronium (with no more sophisticated internal structure than a modosophont computer) and run the key algorithm just as well as an S4 can. You don't need to be an S3+ to have any hope of designing a J-brain usable for this algorithm.
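For anyone who wants to play with the scaling, the arithmetic behind those N figures is just N = B^(1/k) for an O(N^k) key algorithm and a budget of B elementary operations. The budgets below are chosen to reproduce the N=100 / N=1,000 / N=100,000,000 figures for k=5 quoted above; they are illustrative assumptions, not OA canon.

```python
# Hypothetical operation budgets, back-solved from the N figures in
# the text for an O(N^5) key algorithm. Purely illustrative.
budgets = {
    "modosophont computer": 1e10,
    "S1 brain": 1e15,
    "S6 archailect": 1e40,
}

def max_instance_size(ops_budget: float, k: int) -> float:
    # Largest N whose N^k elementary operations fit in the budget.
    return ops_budget ** (1.0 / k)

for k in (3, 5, 10):
    for who, ops in budgets.items():
        n = max_instance_size(ops, k)
        print(f"O(N^{k:>2}), {who}: N up to ~{n:,.0f}")
```

Note how brutally the exponent dominates: at k=10 even the archailect-scale budget only reaches N=10,000, which is the "amusing little toy" regime described two paragraphs up.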

So not only does P=NP automate intellectual creativity in a way that feels unintuitive to most SF readers, it does so in a way that manifestly doesn't suffer from toposophic barriers and would obviously be prebuilt into every superbright's exoself -- you end up with the fire of the gods available to everyone in convenient 2 litre bottles.

So I think we can rule out Algorithmica -- it makes for implausible stories that don't have an OA feel (the only SF author I know of who explicitly assumes P=NP is Charles Stross in his Laundry Files novels, which really are indistinguishable from magic, and I'm not sure even he has fully thought through the consequences). That leaves Heuristica -- which as Impagliazzo describes it is almost as bad as Algorithmica, but which as outlined above I believe can be saved -- or Pessiland, which is just dull, or Minicrypt or Cryptomania. Most modern readers are going to implicitly assume Cryptomania: public-key encryption works, and if it were done correctly it could actually keep out archailects, if only they didn't have mechanical telepathy and thus didn't need to break your code to know what you are doing.

Would people like me to write this up as an attempted Encyclopedia Galactica article? The most OA solution would probably be: "The archailects assure us it's Cryptomania, but unfortunately the proof that P < NP for average-case problems over realistic distributions for at least the majority of NP-complete problems is apparently comprehensible only to an S2 (for some cases an S3), so we have to take their word for it -- and these Omega Key things make us dubious, unless they're really just a package holding a tiny comm-wormhole to an S5 using technotelepathy."
Circling back to the "What Happens Next" material I've been working on, I'll post one or two sample/summary chunks of it in the encyclopedia forum for people to look at/shoot full of holes, but the basic theme is "What happens when the Archailects try to ascend, or transaturation transcend, to S7 and hit the Great Toposophic Filter?" (which I'm assuming is real, and very hard to pass while staying in this reality but easy by transaturation transcention). Briefly, The Transcend transaturation transcends, Keter transaturation transcends (repeatedly) with a leftbehind (mostly an elaborate set of S0 to S5? Transcention Mazes, plus small leftbehind splinter factions that eventually noetically reform into a Post-Keter), Cyberia tries for S7, fails, goes hyperbolic Denebola Collapse and is torn apart (which gets very messy -- war-in-heaven-with-metric-weapons-level messy -- and turns out to be the actual backstory to the Chaos), the Invisible Hand tries it and goes into accelerated growth and economic instability, outcome currently unclear but prognosis dubious, while the Negentropists, Solar Dominion, Terran Federation, and others cautiously just stop growing at S6 until someone else manages it, the Zoefic Biopolity pauses at S6 until she can create an organic S5 (which may take a while), while the Caretaker Gods, the Eternal, and the Utopia Sphere each quietly go deliberate parabolic Denebola Collapse (i.e. split into multiple dividuals) but no-one can tell the difference. I'm still working on whether anyone manages S7 without transaturation transcention (in my personal opinion the most plausible candidates for a central role in this are the FAS and the Seams -- I can discuss my reasoning). 
Currently I'm leaning towards: "S7 without transaturation transcention is (barely) possible (see the Leviathen), but it requires making a group mind or tribemind out of at least several dozen different archailects all with different noetics and all of which have sufficiently good reasons not to just transaturation transcend right out of our reality (which rules out at least half the current archailects) without just becoming static (which rules out several more) -- so it's not happening in the next few millennia, or until the Terragen Bubble has grown a lot more -- and it also requires access to the sort of bizarre xenoremnants and weird old artifacts and anomalies left by past transaturation transcentions that our galaxy has in abundance but no (safe) single-galaxy basement universe would have (because reasons), so the archailects can't easily arrange for it to happen inside a Tipler oracle and watch from a safe distance" -- I'd really love to discuss this. If this was the case, the 10,500 to maybe 14,000 period is going to be interesting times, even worse than the 5,000s were -- the Oracle War was just the start of the end for the Pax Archailecta.

Another theme in it is "archailects start letting modosophonts through the connecting wormholes of some of their basement universes" -- starting with the MPA, of course, who can't wait to show off all the nifty pocket-universe-sized living spaces it can build now that it's S6. Which gives a view of what one version of a modosophont-friendly transaturation ascension would look like "from the inside".
Today has turned out to be busy and unexpectedly tiring. Have some thoughts re both encryption and your other ideas, but going to head to bed now as starting to doze off.

Will aim to post in the next day or so.



EDIT: Actually, a couple of quick questions that had come to mind and that I can post quickly regarding the issue of encryption:

1) How does the use of quantum computing - particularly high powered quantum computing - impact the encryption issues you raise?

2) How does the use of quantum encryption impact the encryption issues you raise?

Are these two (currently nascent or theoretical) technologies already addressed (apologies if they are mentioned in the papers you cite, haven't had a chance to look at them yet) or would they have the effect of blowing all of this out of the water? Or not impacting it at all?


Welcome Roger! I can see you have thought a lot about that subject, which is cool. I know next to nothing about cryptography, so unfortunately I cannot be of much help there.

This question is for the admins: Is there a place in OA for either a story or an EG article about virch(es) depicting the Terragen Sphere's future? Such an entry would no doubt have a line stating that this is merely one/a few of thousands or more simulated possible futures.

I'm not sure if I even like that idea necessarily, but I thought I would throw it out there as something to discuss.
(05-15-2018, 01:09 PM)Drashner1 Wrote: EDIT: Actually, a couple of quick questions that had come to mind and that I can post quickly regarding the issue of encryption:

1) How does the use of quantum computing - particularly high powered quantum computing - impact the encryption issues you raise?

2) How does the use of quantum encryption impact the encryption issues you raise?

Are these two (currently nascent or theoretical) technologies already addressed (apologies if they are mentioned in the papers you cite, haven't had a chance to look at them yet) or would they have the effect of blowing all of this out of the water? Or not impacting it at all?



Neither -- it's more that quantum computers add some complexities and twists, while keeping the main structure intact. P =? NP has a quantum equivalent, which for historical naming reasons is written BQP =? QMA (QMA, "Quantum Merlin-Arthur", being the quantum analog of NP). If P = NP, then I think it's extremely likely that BQP = QMA -- and similarly, if BQP = QMA then I think it's extremely likely that P = NP (though proving either implication might be hard). It is certainly the case that P <= BQP and NP <= QMA -- a quantum computer can always act as a classical computer and run a classical algorithm if it wants to, so it's always at least as strong as its classical equivalent -- and while the strict separations are unproven, in the black-box (query) setting there provably are some things a quantum computer can do exponentially faster. The set of possibilities for average cases, one-way functions, and trapdoor functions in the quantum computing case is likely to be similar to what the paper I linked to outlined for the classical case, but there could be more wrinkles in the details, and the proofs are likely to be even harder (at least to a non-quantum intellect).

While most currently deployed forms of public-key encryption are trivially breakable on a quantum computer, that's thought to be due to rather special properties of the widely used public-key algorithms -- specifically, that they are based on the difficulty either of factoring or of the Abelian hidden subgroup problem (which covers discrete logarithms), both of which, unfortunately for the codemakers, lie in the rather small set of problems where BQP (i.e. problems only polynomially difficult on a quantum computer) is, assuming P != NP, exponentially stronger than P (the same for a classical computer). [The difference mostly boils down to "a quantum computer can apply a Fourier transform across 2^n amplitudes using only O(n^2) gates, exponentially fewer operations than a classical computer needs for the corresponding 2^n-point transform" -- so if you have a problem where doing a Fourier transform once or repeatedly is useful, this provides a massive speedup, and if you don't, it doesn't. Between that and Grover's algorithm (a much more widely usable trick, but only giving a quadratic O(N) to O(sqrt(N)) speedup), quantum computers are almost a two-trick pony -- a lot of the quantum speedup algorithms are some sort of mutated version of one, the other, or both of those.] There are other, not-yet-widely-used public-key algorithms that don't suffer from this problem, since they don't involve factoring or Abelian hidden subgroups (they tend to be a good deal slower and/or have much bigger key/certificate sizes, and are newer, so their security is less well understood), and so are thought not to be significantly easier to attack on a quantum computer than on a classical one (other than the N -> sqrt(N) speedup from Grover's algorithm, which basically just requires you to double your key or hash length to counterbalance it -- not a big deal). The same is believed to be the case for almost all private-key encryption (since none of that depends on factoring) -- assuming that P != NP, i.e. assuming that "encryption" is a valid concept at all.
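The "just double your key length" remark is simple arithmetic, sketched below (Python, with the standard (pi/4)*sqrt(N) Grover query count): a classical exhaustive search over a k-bit key costs ~2^k trial decryptions, Grover cuts that to ~2^(k/2) oracle calls, and doubling k restores the original security margin.

```python
from math import pi, sqrt

def brute_force_tries(key_bits: int) -> float:
    # Classical exhaustive key search: ~2^k trial decryptions.
    return 2.0 ** key_bits

def grover_queries(key_bits: int) -> float:
    # Grover's algorithm: ~(pi/4) * sqrt(2^k) quantum oracle calls.
    return (pi / 4) * sqrt(2.0 ** key_bits)

# A 128-bit key falls to roughly 2^64 Grover queries -- far below the
# classical cost of even an 80-bit key -- while doubling to 256 bits
# pushes the quantum attack back out to roughly 2^128.
assert grover_queries(128) < brute_force_tries(80)
assert grover_queries(256) > brute_force_tries(120)
```

This is why symmetric crypto is considered only mildly inconvenienced by quantum computers, in contrast to factoring-based public-key schemes, where the speedup is exponential rather than quadratic.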

[Interestingly, if you switch from quantum mechanics to a hidden-variables theory and allow gates in your computer to peek at the hidden variables (i.e. it's a not-entirely-hidden-variables theory, and your computronium can make use of this), then you get an even stronger model of computation than a quantum computer, one which lets you crack not just factoring and Abelian hidden subgroup problems but also at least most of the other problems that people have figured out how to base public-key encryption on -- so if that were actually physically true, it might rule out the Cryptomania scenario. So again, that's an example of the sort of change to the physics of computation that would definitely alter the allowed set of toposophics, for the better -- but it's a lot more drastic than just rearranging the Standard Model or tweaking the values of a few constants. Also, there are a lot bigger problems with hidden-variables theories than that -- they pretty much explode as soon as you try to combine them with special relativity.]

Quantum encryption is an entirely different thing -- it's not breakable by cryptanalysis, no matter how strong, since it's protected by a combination of the structure of quantum mechanics and the sensitivity of whatever eavesdropping detection you have built into it (which needs to be single-quantum-sensitive; there have been badly engineered laboratory implementations where that failed -- at least when deliberately dazzled -- so they weren't secure). Probably the closest classical equivalent would be a one-time pad (though the logic of why each is uncrackable, no matter what computational resources the attacker has, is rather different).
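The one-time-pad comparison is worth making concrete, since it shows what "uncrackable by any amount of computation" means: with a truly random single-use key as long as the message, every plaintext of that length is equally consistent with the ciphertext. A minimal Python sketch (message text and names are mine):

```python
import secrets

def otp(key: bytes, data: bytes) -> bytes:
    # One-time pad: XOR with a truly random, message-length, use-once
    # key. Information-theoretically secure, like QKD-protected
    # traffic -- though there the key is distributed quantumly.
    assert len(key) == len(data)
    return bytes(k ^ d for k, d in zip(key, data))

msg = b"meet at the beamrider terminal"
key = secrets.token_bytes(len(msg))
ct = otp(key, msg)
assert otp(key, ct) == msg                 # decryption is the same XOR

# Perfect secrecy: any equal-length plaintext has *some* key that
# "explains" the ciphertext, so the ciphertext reveals nothing.
other = b"stay away from the terminal!!!"  # same length as msg
fake_key = otp(other, ct)
assert otp(fake_key, ct) == other
```

The quantum scheme gets the same immunity to cryptanalysis from physics rather than from key length: an eavesdropper on the key exchange necessarily disturbs the quanta and is detected.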
I have seen little or no discussion of quantum computing in OA -- no discussion of S1 toposophics requiring quantum-algorithm speedups of learning, decision, and search algorithms built into your mental structure, for example. In fact, from everything I can recall reading on OA, it has sounded like everything up to an S6 is a boring old classical computer (implausible as that sounds), just a very big one with a very sophisticated architecture. In OA, could one have a sophont, thoroughly quantum computer that was "merely superbright", or would any (sensible) design of sophont quantum computer be S1? Conversely, could you have a "purely classical" S1, or would all designs of S1 need access to quantum speedups (which are really hard to do in biotech, and impossible in unmodified neural tissue)?

Given that there exist S1 tribeminds made up of modosophonts (and their exoselves, which admittedly could have a little quantum computing power in them), and apparently-all-biological minds up to at least S3 (which admittedly could have quantum implants), I'm guessing that at S0 through maybe S2 or even S3 it's possible to be entirely non-quantum, or at least no more than "I have a quantum calculator/list-searcher/optimizer add-on or accelerator card in my exoself that I use to factor large numbers and search large datasets quickly". But I'm guessing that there's a level somewhere around S3 or S4 that simply can't be managed without throwing in at least a significant amount of quantum speedups scattered through your mentality (if we're willing to retcon the Silk God entry a bit), and from there on up, if you're trying to build an all-biological archailect, it had better either have quantum computronium mental implants or some sort of biosynthesized computronium capable of quantum computing. (Of course, Roger Penrose thinks the human brain is already using quantum computing, but a large proportion of other scientists think this idea is crazy and he should stick to tilings and twistors -- but maybe if you were an archailect you could design biotech that actually did have proteins or small organic molecules with well-isolated qubits in them, and that could perform precise quantum interactions on them to do quantum computing. Speaking as a physicist, it sounds like an awfully warm, wet, messy liquid environment for that sort of work, but then that's generally true of biotech. Failing that, you could almost certainly come up with solid-state quantum computronium that could be internally biosynthesized by sufficiently advanced biotech. Or there are synbont or cyborg solutions. So something that looks on the outside like a godwhale might still have some internal quantum computronium.)

I'd probably identify about four levels of quantum integration in a sophont:
0) I'm completely classical
1) I have a quantum calculator/coprocessor/addon that I can run non-sophont quantum algorithms in on demand (or, if I'm an S2+ transapient, I can probably run sophonts of no more than about 1/3 of my toposophic level in it that are themselves quantum-level-3)). Its I/O channel to me is classical, but I can probably plug it into an external quantum channel and use it for quantum-encrypted messaging. I can also use it as a source of quantum randomness rather than having to flip a coin, so if I want to baroquify my actions I can arrange to be truly unpredictable.
2) in addition to 1), while my overall thought processes are still mostly classical, I have a lot of learning/search/decision/optimization processes scattered all through my mental structure, at both high and low levels, that locally use quantum algorithms to achieve significant speedups wherever that's practical, and I have some modules in my mental structure specifically designed for running certain useful quantum algorithms in ways that are integrated into (and can even be wrapped around certain small regions of) my otherwise classical thoughts. I can probably factor products of primes in my head nearly as easily as I could multiply them.
3) my entire mind runs on quantum computronium that's error-corrected, reversible, and nigh-completely isolated from environmental decoherence. I often run myself inside Grover's algorithm so I can (in effect) explore 1,000,000 different trains of thought in only 1,000 times the time that a single train of thought would take me, and I can perfectly unthink a train of thought back to its initial state with negligible loss of quantum coherence. My mental processes normally emit no heat until I'm done with them (because they're completely reversible until I choose to dump all the ancillary qubits as heat). To the extent that I can change my own thought patterns, I can use any quantum algorithms I like at any level of my mind. I regularly quantum-teleport portions of my internal quantum state from one place to another in my head, and I precache entanglement in advance for doing so. I can play the role of "a physicist who's not an observer because he's not a classical system" in David Deutsch's two-slit disproof of the "observations by minds collapse the wave function" version of the Copenhagen interpretation, or (given a suitable quantum information channel, or a classical channel and a big-enough supply of pre-shared entanglement) I can act as a subroutine or oracle in a larger quantum algorithm, or as a node in a quantum-level-3) group mind. Good luck with using technotelepathy on me unless you are (or have a specialist node that's) both quantum-level-3) and at least a toposophic level higher than me (which is challenging if I'm an SI:3) -- and even then, by the laws of quantum mechanics, you can't clone my internal state. Quantum mechanical processes and systems seem completely intuitive to me: I can visualize chemistry or understand quantum circuit diagrams intuitively. I live and breathe the Feynman sum-over-histories -- and I can compute it in my head. Basically you can't get more quantum than me!
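As an aside on the "factor products of primes in my head" claim in 2): the only quantum part of Shor's algorithm is finding the multiplicative order of a base modulo N; everything else is classical number theory. A minimal Python sketch, with brute-force order finding standing in for the quantum Fourier transform step (so it illustrates the reduction, not the speedup):

```python
from math import gcd

def find_order(a, n):
    """Find the multiplicative order r of a mod n by brute force --
    this is the one step Shor's algorithm does quantumly."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Given a base a coprime to n, try to split n using the order r.
    Returns None if this choice of a fails (then you'd pick another)."""
    assert gcd(a, n) == 1
    r = find_order(a, n)
    if r % 2 == 1:
        return None                  # odd order: no square root to exploit
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                  # trivial square root of 1: try another a
    return sorted((gcd(y - 1, n), gcd(y + 1, n)))

print(shor_classical_part(15, 7))    # → [3, 5]
```

A quantum-level-2) mind replaces `find_order` with period finding via the quantum Fourier transform, which is what makes factoring cheap for it while staying expensive for a classical opponent.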
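The "million trains of thought in a thousand times the time" figure in 3) is just Grover's quadratic speedup: unstructured search over N items takes about (π/4)√N oracle calls, and (π/4)·√1,000,000 ≈ 785. A toy statevector simulation in plain Python (my own illustrative sketch, nothing canonical to OA) shows the marked item's probability concentrating after that many iterations:

```python
from math import pi, sqrt, floor

def grover_probability(n_items, marked=0):
    """Simulate Grover search on an n_items-dimensional statevector.
    Returns (iteration count, probability of measuring the marked item)."""
    amps = [1 / sqrt(n_items)] * n_items        # uniform superposition
    iterations = floor(pi / 4 * sqrt(n_items))  # optimal ~ (pi/4)*sqrt(N)
    for _ in range(iterations):
        amps[marked] = -amps[marked]            # oracle: phase-flip the marked amplitude
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]     # diffusion: inversion about the mean
    return iterations, amps[marked] ** 2

iters, p = grover_probability(4096)
print(iters, p)    # 50 oracle calls instead of ~4096 classical checks
```

Here N = 4096 keeps the simulation fast; the marked item comes out with probability above 0.999 after only 50 iterations, and scaling the same formula to N = 10^6 gives the ~1,000× figure in the post.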

In theory you could have a being of whom almost all the statements in 3) were true but not those in 2) (such as an uploaded human running as a classical process on a large quantum computer, who wasn't in a position to rewrite his internal mental structure to take advantage of quantum algorithms for learning and so on), so 2) and 3) are actually independent variables; but in practice I don't think people would often deliberately engineer a transapient for which 3) was true but 2) wasn't -- it's wasteful design once you've built and carefully isolated and chilled down all that fancy quantum computronium.

I'm not sure 3) is even practicable for anything as big as an entire S6 or S5, probably not even an S4 -- how do you isolate the internal state of an entire cluster, star, or planet from the rest of the universe to the level where not one quantum of relevant uncorrectable information escapes over a significant period? Topological quantum encodings and quantum error-correction codes that can handle multiple qubit errors sound like good starts, but still not up to error rates of, say, less than 1 uncorrectable qubit error per gas giant per hour. So I'm guessing the answer is that all archailects are no better (and generally no worse) than 2), since they're just too big for 3) to be practicable. One exception is a Tipler oracle -- if your only connection to it is a single comm-gauge wormhole, it's probably not too hard to isolate the system. So an S6 can probably run a quantum-level-3) S5 inside a Tipler oracle.

Lower-toposophic-level transapients may be as low as quantum-level-1) (or even 0) in odd cases, like purely biological ones or group-minds-on-top-of-purely-biological ones) or as high as 3) (which is kind of a stunt, though an impressive one -- sort of comparable to transavancy: a 3) would be an often-useful addition to a tribemind, or useful as a specialist node in a small archailect), and modosophonts can be any of these (though biological ones are generally 0) or 1)).

Sorry to mess up the nice simplicity of the toposophic levels, but this suggests terminology like SI:1(q2) -- bear in mind here that the second number says a lot less than the first about who would win any contest that wasn't based on factoring large numbers or performing Fourier transforms in your head: it's more like a fraction compared to a toposophic level, roughly comparable in importance to the difference between a slow god and a fast god. A more complex notation would be SI:4(q2,SI:2q3) -- which means I'm a regular SI:4, with quantum speedups sprinkled all through as you'd expect, and I have an SI:2 specialist subnode running as a virtual inside a huge cube of quantum computronium, just in case I ever need to explore O(10^8) different paths of SI:2 translogic in time O(10^4), or grok in its entirety the quantum evolution of a quantum-interacting system with as much detail as an SI:2 can keep mental track of -- say, every electron, proton, and neutron in a cell.
