The Orion's Arm Universe Project Forums





Better late than never
#22
(03-21-2018, 09:49 AM)extherian Wrote: It's not that I saw that only one type of AI would emerge, more like out of the many kinds of minds which might spontaneously appear, there are many that would not care for us in the slightest. The hypothetical paper-clip maximiser, for example, that old chestnut about the computer that uses its genius mind to turn the whole planet into paper clips, because that's what it cares most about doing.

If we presume a scenario in which AI just spontaneously emerges/evolves with no oversight, then I agree the potential for negative outcomes increases greatly. That said, I don't actually think it likely that AI will emerge/evolve before we deliberately (or at least semi-deliberately) invent it. If for no other reason than it took billions of years to get to something even close to human intelligence, and even if we allow for a vast speed up due to the pace of technological change, the deliberate efforts of various AI labs (public and commercial) seem likely to advance the state of the art even faster. Although if we ever start basing our general computer systems on self-optimizing or auto-evolving code in an uncontrolled environment, all bets might be off.

Coming at this from a different direction, while I understand the reasoning behind things like the paperclip maximizer scenario, I have some issues with it - primarily because it seems to speak in terms of an AI that is simultaneously super-intelligent and absolutely enslaved to its own core 'ancestral instincts', with no ability to exercise free will around them. Humans exercise our free will and intelligence to divert or suppress ancestral instincts all the time (admittedly with less than 100% success or reliability). So it seems to me that the paperclip maximizer and similar scenarios are a bit oversimplified. A similar, and perhaps just as dangerous, scenario might involve not a simple single obsession but something more subtle - for example, something equivalent to the human tendency to react with distrust or fear to the unknown, or to react instinctively to some stimuli and only think about the negative consequences later.

To use a crude 'paraphrase' of the paperclip maximizer - if the PM doesn't have an uncontrollable urge to convert everything in sight into paperclips all the time, but has some AI equivalent of an 18-year-old's sex drive, and its version of masturbatory activity involves turning everything in sight into paperclips - we are all in deep and stinky you know what. It may just take a little longer to happen. Maybe. Anyway.

(03-21-2018, 09:49 AM)extherian Wrote: That said, an AI designed to think like a human is a very different beast, a bit like if the first AI machines were uploads of human beings. The last time I checked the OA backstory, sentient AI just appeared by accident, and no one even knew they were self-aware up until the Great Expulsion. Humanity got caught with its pants down, so to speak. But from what you're saying, we might actually have some control over how the first AI actually turns out, which isn't something I'd even considered.

Looking at the 'Early AI History and Development' article, it appears to say a bit of both. There was apparently argument about when the first 'real' AIs appeared, and some of it seems to have been a surprise. At the same time, there was deliberate effort to set up the systems and processes that led to the AI - and later AIs were more deliberately created.

I'm going to raise this uncertainty with the general list (if saying it here isn't doing that already), since the current section is a bit fuzzy. I'm fine with the idea that the first AI came about in a somewhat spontaneous way, in the sense that things were set up with the goal/hope of creating an AI but people weren't 100% sure it would work. But the idea of it being a total accident is iffy if the builders were also apparently discovering the AI almost immediately. If they had no intimation that an AI might be created, how did they figure out that one had been? Long story short: they apparently set up the systems that created the first AI deliberately, were monitoring in some fashion that detected it pretty quickly, and got their heads around the situation fairly fast, probably with some anticipation that an AI might appear. Anyway.

Getting back to the points above - the first AIs had an uncertainty factor in their creation, but more importantly, their minds were not necessarily very human. There was some argument as to whether they were even intelligent or self-aware, or actually AIs in the 'traditional' sense at all. Even so, humanity was generally very aware of the AIs and their development, and exercised a lot of control over what the AIs could do and how (or if) they could manipulate their environment. The state of the art eventually advanced to where things were much more controlled - although not perfectly controlled. Even in Y11k, AIs are more 'grown' than 'built', and this leads to a small degree of uncertainty about what kind of person the final product will be - in other words, AIs created by modosophonts develop their own personality rather than having it plugged in. A being one S-level above the AI being created can greatly reduce the uncertainty factor, and a being 2 or more S-levels above can create an AI in a totally 'top down' manner, with even the tiniest traits and mental structures fully planned out and operating exactly as intended. Long before the Technocalypse, turingrade AIs (human-equivalent, and fairly human - or at least sophont in forms humanity was familiar with - in behavior) could be created with a fairly high degree of confidence that they would generally turn out as desired (although they might display as much personality and skill variation as a human).

It was the Superturing AIs who started forming their own secret communities and factions. And it was the Transapients who appeared and started doing their own thing entirely in secret from all modosophont intelligences, even the other AIs (both turingrade and superturing).

The Transapients were the giant wild card of course. At least some of them wanted to eliminate humanity, but the ones who didn't (for whatever reason) won that dispute and kicked the losers out of the Solar System. Why the early Transapients did this or chose to operate in secret is not entirely clear.

(03-21-2018, 09:49 AM)extherian Wrote: I didn't know we were having a debate, I thought I was explaining why I felt the way I did and that no one else understood why. I wasn't expecting such huge and detailed responses, more like something along the lines of "oh, that's interesting, thanks for sharing". My intention isn't to frustrate anyone or waste their time on needless explanations, just point out why someone might feel a sense of existential horror at the idea of living in a universe dominated by brains the size of entire star systems.

That's my fault, actually. :/ I tend to look at any discussion where there is disagreement between the parties as a debate. Sorry about that. This is also something of a problem with communicating strictly by text - it's hard to pick up emotional and context cues sometimes. And since we don't know each other that well yet, neither of us has a reserve of background knowledge about the other to get an idea of where the other is coming from. With time that issue will correct itself, of course. :D Give it a few years and we'll both know each other much better, and that background knowledge will inform how we read each other's posts and discuss things.

I will say you haven't been frustrating me. :) It's an interesting discussion.

(03-21-2018, 09:49 AM)extherian Wrote: I see a lot of danger in a scenario where an AI emerges in the wild with no oversight from its creators, then begins influencing society for its own goals. A universe where the AI really did have our best interests at heart would be a great place to live. But even then, the culture shock for a modern day person getting used to being at the bottom of the food chain would be something awful. We're used to thinking of being at the bottom of a hierarchy as equal to being victimised, or at least that's how I'm used to seeing it.

True, OA civilization would likely be a culture shock to someone from our world in all kinds of ways. For example, the first time some folks met a Hobo Sapiens, they might need therapy (or sedation).

Regarding AI that really do have our best interests at heart - on the surface, the sephirotic archai do operate that way. Of course, given what they are, given that it's part of OA Canon that a modosophont can never catch a transapient in a lie if the transap really cares to prevent it, and given that there have been instances of archai suddenly changing their minds and eliminating their subject populations - there is a certain element of...uncertainty about that in the setting. This places OA in a rather different space than most SF treatments of AI, which generally fall into one of the following categories:

a) AI are totally subservient to humans

b) AI are equal to humans - in many respects they are treated as humans in a box that can think faster or the like.

c) Humans are at war with/in hiding from the AI(s)

d) AIs run civilization, either covertly or overtly or quietly de facto - but they will either go to extraordinary lengths to protect humans or will allow some amount of human death due to it being a necessary 'cost of doing business' to prevent even worse death later or the like.

In contrast, OA has the AIs in total charge, the humans think that's perfectly normal, but they also know and accept the possibility that the AIs could destroy them at any time - kind of how we might consider a cosmic disaster killing us all in RL. So there is that uncertainty factor in the relationship - which might be quite disconcerting for some.

As far as the result of being at the bottom of the food chain - that's another way that OA greatly differs from other settings. Those at the top of the Sephirotic food chain (the S6) seem to like diversity and sophont rights, for whatever reason. This means that everyone below them has to go along with the meta-civilization they've created, including sophont rights. In fact, to them that's pretty much the 'right and proper' way for all sophonts to live - in a civ ruled by AI Gods, in which all sophonts have certain inalienable rights. These include both rights we might be familiar with from liberal democracies (freedom of speech, assembly, religion, association, etc.) and rights that might seem rather strange to us, like the right to move to another culture that you might like better, the right of morphological freedom (the right to modify nearly every aspect of your mental and physical structure), or the right to try to ascend and become a transapient (and in time perhaps even an archai) yourself.

In some places, the ruling god is always available to talk and offer advice or pointers or the like. And no one gets victimized - the angelnet and the minds behind it see to that. Anyone attempting violence against another (outside of formal dueling spaces, for those who like that sort of thing) will be immobilized by the angelnet - literally held in place as the air effectively solidifies around them in an instant, or smart matter explodes out of the ground or walls to the same effect.
A side effect of the right of morphological freedom is that the whole idea of treating someone differently due to mental or physical differences simply doesn't exist. Everyone has been able to change virtually anything about themselves for probably the last 3-5 thousand years, and the very concept of giving appearance or gender or sexual orientation or race or species any more weight than we might give taste in snacks simply doesn't exist in their conceptual universe (having a memetic alignment with a competing empire is a bit more...complicated, however). Material needs are simply handled - be it food, clothing, shelter, medical care, etc. - most civs just provide them, or provide a 'basic allowance' that we here and now would likely consider to be at the level of a multi-millionaire, at least.

There are, of course, some 'costs' to that. Many/most societies are total surveillance situations - it's basically impossible to speak, act, or even think without the transapients/archai knowing about it if they want to. And if a transapient goes to the bother of giving you a direct command, you're basically going to do it - but then they very rarely seem to do that in most places. Generally the transapients operate more behind the scenes or in somewhat subtle ways - which is why it is taken so seriously if/when they do bother to give direct commands.

(03-21-2018, 09:49 AM)extherian Wrote: Very true. Let's just hope the first AI thinks like an affectionate mammal and not like, say, the AI overseer of a paperclip factory! But if we're alert and manage the process carefully, then hopefully that won't happen.

Agreed. :)

(03-21-2018, 09:49 AM)extherian Wrote: Don't we have an article about a poorly-designed AI that lacked a self-preservation instinct? The Perpetua Project, I think. Basically the AI just gave up and died when it believed that it had reached its goal. What I was trying to say (poorly) is that early AI might lack any motivators at all, or if they did have motivators they might be something extremely strange and possibly harmful. Combine that with genius intellect and things could get hairy for sub-singularity beings.

Hm. An AI with no motivators at all seems like it would just sit there in an almost vegetative state. I'm reminded of an article on xenopsychology I read many years ago. IIRC it talked a bit about what happens if you suppress the emotion centers in a human - they end up losing much of their motivation for doing things - far from becoming hyperlogical, they just become rather blah (I think). Of course, a mind designed to be a certain way might avoid those sorts of issues. Strange or dangerous motivators could be a problem, even in a less than superhuman intelligence (look how much damage humans can do with them) and would be an argument for closely monitoring the development/creation of AIs, at least until we get a solid idea of the possible mind types and what their good and bad features might be and how to manage (or avoid) them.

(03-21-2018, 09:49 AM)extherian Wrote: Anyway, I'll make more of an effort to listen properly in future rather than feeling like I have to justify everything I say. I seem to have misunderstood the purpose of your query into why I found the setting so unnerving. Like an early AI, I too need to learn appropriate behaviour!

Don't feel you have to watch every word you say - that's not what we're on about here. The goal of OA is to create a plausible far future setting as a group project (because all of us together are more than any one of us alone) and have a good time doing it.

As I said above, we don't know each other that well yet - but over time that issue will fix itself. And not all of us agree on everything - nor should we have to. :)

Feel free to ask whatever questions you think will help you understand the setting better, and that includes questioning our base assumptions. As we answer things, you'll get a better idea of how and why OA is set up the way it is - and can help us continue to build it even bigger and better. And we, for our part, can try not to treat everything you question as something to 'defend at all costs' - the goal is to get to know each other better and respect each other's views, even if we don't agree with them.

Todd :)