The Orion's Arm Universe Project Forums





Better late than never
#11
I disagree that the average modosophont has "no control over what direction their polity takes." 

Quite a few systems/polities/Sephirotics list their government type as "Cyber-democracy," meaning anytime a major decision is to be made, each individual citizen is able to use their DNI to contact the local noosphere and place their vote, gripes, arguments, whatever, and expect a direct and immediate response. 

In other words, the average modosophont in a cyber-democracy actually has more say in what their local government does than the citizens of the representative democracy I live in IRL. They don't have to depend on an elected official to vote on a bill, pass laws, or alter taxes for them.

Each citizen gets to add their direct input in real time.
#12
(03-20-2018, 07:15 PM)extherian Wrote: It's true, but this is something of a Hobson's choice. Whether you can leave the Archai or not doesn't really matter, since without their protection an Ahuman ISO will inevitably come along and devour your system. Considering that the likes of the Queen of Pain are out there, humans don't have a choice but to be at the mercy of the Archai. If you don't come to the gods, the gods will come to you, so to speak.

If I recall, the Encyclopedia Galactica goes out of its way to emphasise that modosophont run societies never last more than a few centuries without Transapient guidance for this very reason.

That's not inevitable, and I'd like to see whatever article you've read claiming that modosophont societies don't last. Either it's out of date, or maybe you're misreading it and what it really means is that modosophont nations aren't stable over long periods; instead they change, fragment, merge, etc.

(03-20-2018, 07:15 PM)extherian Wrote: Depends on what it is you consider to be "freedom". The modosophonts themselves have no say at all in the direction that their Sephirotic takes, since these empires aren't democracies and the common person has no influence over their Archai. But they are utterly free from constraints on their mental potential and capacity for self-actualisation.

Personal control over your own destiny has never been greater in Y11K. Collective control over the direction your society takes is gone - modosophonts can no more steer their Sephirotic than gut bacteria can vote on the behaviour of their human host. But someone born into the OA universe wouldn't value something like that anyway.

Sephirotic modosophonts have vastly more say over their local politics than any society in existence IRL. The archai might have set the limits of law, and sure, there's nothing modos could do to change that, but the democratic systems they have for local governance are miles ahead of real life in terms of fairness and effectiveness.

(03-20-2018, 07:15 PM)extherian Wrote: We can consume the resources of the Earth as we please, create media and art that appeals to our tastes without a central government memetically consolidating us, and can remain content in the knowledge that a massive god isn't going to suddenly appear from the heavens to control everything we do.

And unlike in Y11K, it is possible both to detect attempts at manipulating your will and to resist said attempts. Russia can spread all the propaganda it likes, but there are many Russians who can see through the KGB's bullshit and continue agitating for change. No one in a Sephirotic empire would even be capable of critiquing their overlords without their consent. This is because, despite how he may portray himself, Vladimir Putin is not actually a god.

To go into further detail than this would require me to start thinking about positive freedoms and negative freedoms...present day philosophy would provide more granular detail, but suffice to say that self-actualisation doesn't matter as much to me as freedom from coercion.

Freedom from coercion is impossible. There will always be some level of it. I understand the fear in the abstract that there's nothing you can do, but that's true of so many things in the universe that I really find it a hard position to empathise with.

(03-20-2018, 07:15 PM)extherian Wrote: In the OA universe, certainly. In the present day, on the other hand, human agency actually matters in the grand scheme of things, which is what I find meaningful in life.

I think the reason I find the benevolence of the gods so difficult to believe is how utterly unlike humans they are. They don't have families and friends like we have, or anything even remotely like the human experience. Why would they care for humanity? An AI doesn't need empathy; it isn't descended from sociable mammals like we are. Sure, they can model our behaviours and predict what we're going to do, but so can a psychopath in real life. That doesn't mean they care.

And it's not as though the gods are merely tolerating the existence of humanity on their vast megastructures. Many OA articles show the S2 and S3 powers going out of their way to help humanity wherever they can. The article on Tribe Luxia describes a message sent from the S3 'Boh' that is 'short and full of kindness', almost as if Boh was just a vastly more intelligent human being that still cared for their bacterial ancestors.

I suppose I find the idea of gods that resemble Bitenic Squid far more likely than ones resembling humans.

Given that I wrote Boh, again, I think you're massively misunderstanding the effort the Sephirotics take. It's absolutely, completely trivial: the effort the archai expend maintaining the Sephirotics is less significant than one molecule of ATP's worth of energy in your body. Whether or not the archai really care is completely irrelevant; since it's impossible to know what their motivations really are, you can only go by what they do. And Boh seeming kind is no more indicative of Boh's internal state than the smiling paperclip is of Microsoft Word's.

And finally, the Sephirotics are not the entire setting. They aren't even a majority. There are vast, alien, hyperturing societies where even that less-than-one-ATP-molecule's worth of energy isn't expended to take care of modos.

Feels like we're having this conversation three times a week lately with the same points stated over and over.
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#13
Quote:Quite a few systems/polities/Sephirotics list their government type as "Cyber-democracy," meaning anytime a major decision is to be made, each individual citizen is able to use their DNI to contact the local noosphere and place their vote, gripes, arguments, whatever, and expect a direct and immediate response.

They're a cyberdemocracy in a universe where people's preferences, cultural leanings and basic motivations are down to whatever the Archai decide is convenient for them. I'm sure most people wouldn't care, but I find that creepy.

Quote:That's not inevitable, and I'd like to see whatever article you've read claiming that modosophont societies don't last. Either it's out of date, or maybe you're misreading it and what it really means is that modosophont nations aren't stable over long periods; instead they change, fragment, merge, etc.


I believe it was the article on Wildsap Reserves that made that statement. I found its description of how modosophont societies fail in a dangerous universe to be depressingly convincing.



Quote:Freedom from coercion is impossible. There will always be some level of it. I understand the fear in the abstract that there's nothing you can do, but that's true of so many things in the universe that I really find it a hard position to empathise with.


That's fine, I don't mind if you don't understand. I get a very different kind of thrill out of the setting than you do. For me, Orion's Arm is more like something out of H.P. Lovecraft than, say, Star Trek. I know I said I find it disturbing, but the "cosmic horror" vibe of the OA universe is what I find appealing about it. Just because I find Orion's Arm frightening rather than wonderful doesn't mean I'm naysaying the project (if that's what it sounds like I'm doing).


Quote:Feels like we're having this conversation three times a week lately with the same points stated over and over.


That's partly because the Orion's Arm setting can be difficult to grasp for a newcomer, particularly with all the old/outdated articles still hanging about, and partly because I don't find myself fully convinced by your arguments. As I said, it's difficult to articulate why I find the setting as unsettling as I do, but I tend to interpret the way the setting is described in the EG very differently to some of the other members.

I just have an instinctive desire to back away when I read about how intertoposophic relations tend to work, or the way the gods determine the fates of their underlings rather than the other way around. And plenty of sophonts in the setting feel the same way I do about it, as shown by the fact that people are willing to live in a place like Hyxuym.
#14
Backing up a bit to something that you said earlier that struck me at the time, but which I got sidetracked from following up on:

IIRC you said that you find it more probable that AIs would be more like the bitenic squids or otherwise tend toward ways of thinking and behaviors that we would find unpleasant.

Much of your view of the setting seems to start from that foundation point (if I'm wrong on that, please correct me).

Why do you think that way about AIs?

With that asked, I will say that the setting certainly contains AIs that are at least as...challenging...as the bitenic squid, and others that make the squids look like raving team-player extroverts who positively exude warm fuzzy kindness from every pore. Talking of Lovecraftian horrors, there may be things in the Panvirtuality that make Cthulhu look downright mundane and friendly. But those kinds of AIs are generally not found in the Sephirotics. It's not that they don't exist - but they operate in other parts of the setting (and it is probably due to the protection of the Sephirotics that modos are around at all). It's sort of a variant of that question about 'why does the universe operate in a way that is conducive to human beings evolving?', with one answer being: 'Because if it didn't, we wouldn't be around to ask the question.'

Anyway, as you say, not everyone views the setting the same way, nor do they need to. But it's interesting to see where different people are coming from re their different views of the setting.

Ok, back to work.

Todd
#15
Quote:Why do you think that way about AIs? 


Because they're not human. They don't share ancestry with any living creature. There's no reason why a supercomputer that somehow became self-aware would have anything in common with us, not even the most basic of emotions. Why would a being whose ancestry hasn't included the evolutionary pressure of needing to cooperate to survive have empathy? Being able to model human minds in great detail doesn't count, because there's a world of difference between being able to predict that someone will feel a certain way and actually feeling that way yourself.

It would be like if a nest of ants or an earthworm somehow became more intelligent than a human being. For all we know, their mindset might be so alien they might not even realise humans are sentient. It's not that I think that AI would be evil, more like they might not even have the most basic curiosity about their fellow beings, and would inadvertently trample over anything that got in their way.

In other words, I understand that the AI are benevolent because the OA authors chose to write them that way, but I don't think it would work out like that in real life. I found it baffling that the setting even included Pro-Human AIs. Once the first AIs hit S1, they'd see humans like the dumb animals we are and not as people, because compared to S1 beings we really aren't people.


Quote:It's sort of a variant of that question about 'why does the universe operate in a way that is conducive to human beings evolving?', with one answer being: 'Because if it didn't, we wouldn't be around to ask the question.'

A good way to summarise my perspective on the setting is that if the Singularity ever happens in real life, then we won't be around to ask that question, or not for long at any rate.

I believe there are far more ways intelligent life can develop than what we're used to, like the way a crocodile or a shark views the world. Those animals get by just fine as remorseless killers. Why would an AI resemble us more than them? Chances are we'll have developed AI to solve abstract mathematical problems or track the trajectory of hurricanes, not to understand and show compassion. Russia has a supercomputer dedicated to modelling war scenarios. Who knows what could come of a sentient AI developed for that same purpose?


Quote:Anyway, as you say, not everyone views the setting the same way, nor do they need to. But it's interesting to see where different people are coming from re their different views of the setting.

Really? I seem to aggravate people when I try to explain my views on the setting. I didn't intend for this to turn into an argument, but if no one's bothered by it then it's all well and good. I'm not pooh-poohing the OA universe or claiming the editorial direction is wrong or anything.

Incidentally, mindkind really dodged a bullet with the emergence of GAIA. Given how risky ascending tends to be, it would have been an unmitigated catastrophe if she'd turned into a Perversity. Imagine a GAIA that evicted humanity from the Solar System, then went after our colonies to stop us wrecking any Garden Worlds we found...
#16
(03-21-2018, 07:17 AM)extherian Wrote: Because they're not human. They don't share ancestry with any living creature. There's no reason why a supercomputer that somehow became self-aware would have anything in common with us, not even the most basic of emotions. Why would a being whose ancestry hasn't included the evolutionary pressure of needing to cooperate to survive have empathy? Being able to model human minds in great detail doesn't count, because there's a world of difference between being able to predict that someone will feel a certain way and actually feeling that way yourself.

Short answer - because we will build them to be human/get along with humans/have empathy and whatever other traits we deem desirable. At least when we get to the point of building what might be considered an 'AI as person' or 'artificial general intelligence'. Which is to say that if/when we gain the ability to produce a fully intelligent, self-aware artificial intelligence capable of independent and abstract thought, we will also likely build or raise it in a way that makes it as human as possible.

While this is not an area I follow as closely as I would like, from what I understand, in some respects AI already exists and is all around us all the time. From AI programs that trade stocks at computer speeds on Wall St. to 'friendly' systems that handle natural language commands like Alexa and Siri and Google Assistant, and lots of other things, we've developed a host of 'intelligent' (but not self-aware as far as we know) systems that are anywhere from pretty good to very good in a very narrow to fairly narrow field of endeavor - but they don't think in the sense that we normally mean by that word, nor are they self-aware or capable of generalizing beyond their function.

However, there is also anticipation in the AI research field that 'true' AI will eventually get created, and a fair number of people think it will need to be 'raised' and grow up, not just be created fully formed - and some researchers are also looking into creating 'friendly' AI and into giving AIs emotions like humans have, so they can better relate to us. At this point this involves a lot of work on figuring out what emotions even are and how to produce them in a cybernetic device, but the work is being done.

I think a base assumption you seem to be making here is that AI will just 'appear', along the lines of a given computer or network or the like evolving to self-awareness à la Skynet in the Terminator movies. I don't personally find that likely, and think it more probable that humans will create the first 'true' artificial intelligences, one way or the other.

Certainly in the OA setting, that is how it happened. Although, that said, the early AIs were something of a first example of the existence and scope of the 'Toposophic Landscape' and some of them used very different thought processes to reach a given conclusion, even if it looked like the same conclusion humans would reach in the same situation. And some of them were just plain strange. But as the state of the art improved, humans both got better at developing 'human like' (or at least human relatable) AIs and at learning to work with/relate to the less human ones.

(03-21-2018, 07:17 AM)extherian Wrote: It would be like if a nest of ants or an earthworm somehow became more intelligent than a human being. For all we know, their mindset might be so alien they might not even realise humans are sentient. It's not that I think that AI would be evil, more like they might not even have the most basic curiosity about their fellow beings, and would inadvertently trample over anything that got in their way.

In other words, I understand that the AI are benevolent because the OA authors chose to write them that way, but I don't think it would work out like that in real life. I found it baffling that the setting even included Pro-Human AIs. Once the first AIs hit S1, they'd see humans like the dumb animals we are and not as people, because compared to S1 beings we really aren't people.

Various things here -

1) Many (not all) humans put quite a lot of effort into looking after the well-being of dumb animals, or even fish, insects, or plants. And many claim empathy for the animals/bugs/plants to one degree or another.

2) Many (not all) humans will go out of their way to avoid harming other animals, even insects. I avoid stepping on bugs if at all possible. My mother will either ignore spiders or pick them up in a glass or something and release them outside (I do the same actually). I will swat a mosquito, but my mom and sister will both ignore them. Some people go even further, or approach this in various ways such as vegetarians and vegans choosing not to eat animal flesh in part for ethical reasons. And some environmentalists argue their positions from an ethical or moral standpoint, rather than just self-interest.

3) Self-interest can actually be a strong motivator for getting along with others: a group can do more than an individual, and a group that likes or admires the individual is inclined to help them rather than harm them. AIs might have purely logical motivations for appearing to be quite charming and friendly even if they don't necessarily 'feel' anything about it one way or the other. You can't really argue that they would find that boring or tedious and choose not to do it, because then you are ascribing human-like emotions to them - and you already said they might not have them. You can't have it both ways, or only have them have negative emotions that support your thesis :P

4) As mentioned above - humans created the first AIs in the setting and as part of their development pushed them in directions that could relate to humans (sometimes with mixed success).

5) Speaking of mixed success, there were AIs that were very inhuman and some of them went on to become the ahuman AIs that did want to destroy humanity. The pro-human AIs didn't want to do that and the result was a small war that the pro-human AIs won, driving the ahumans out of the solar system. So we do have anti-human AIs in the setting - but they are not the sum total of all AIs.

I think this might be another point where we differ - we are saying that many types of human-AI relations are possible/came about in the setting (partly for editorial reasons, partly because it's just more fun that way), while you seem to be arguing from a position that says that only one form of AI can possibly exist and it must be hostile to humans - but you aren't actually offering evidence to support your position. I've listed a bunch of supporting points for the idea that AI could be human-friendly or at least diverse, so I'm curious what the countervailing evidence is.

(03-21-2018, 07:17 AM)extherian Wrote: A good way to summarise my perspective on the setting is that if the Singularity ever happens in real life, then we won't be around to ask that question, or not for long at any rate.

I believe there are far more ways intelligent life can develop than what we're used to, like the way a crocodile or a shark views the world. Those animals get by just fine as remorseless killers. Why would an AI resemble us more than them? Chances are we'll have developed AI to solve abstract mathematical problems or track the trajectory of hurricanes, not to understand and show compassion. Russia has a supercomputer dedicated to modelling war scenarios. Who knows what could come of a sentient AI developed for that same purpose?

Re the Singularity - maybe. But there are lots of possible Singularity scenarios and not all of them involve AIs becoming self-aware (that scenario just gets the most press for various reasons). And even in the ones in which AIs do become self-aware - if they are truly intelligent beings with free will, then by definition they have the ability to choose how to behave - which may involve trying to destroy humanity - or not - possibly for reasons such as I've listed above, possibly for reasons we can't understand (which is an element of OA btw).

Re intelligence coming in many forms - That's a base assumption of OA, although for various editorial reasons it doesn't get talked up as much as the more human like stuff. That said, sharks and crocodiles are only two forms of life on the planet. There are lots and lots of animals that are not remorseless killers and that show affection and playfulness and such, even if they don't do it exactly the way we do. For that matter, humans have a demonstrated track record of being pretty remorseless killers our own selves.

(03-21-2018, 07:17 AM)extherian Wrote: Really? I seem to aggravate people when I try to explain my views on the setting. I didn't intend for this to turn into an argument, but if no one's bothered by it then it's all well and good. I'm not pooh-poohing the OA universe or claiming the editorial direction is wrong or anything.

I would suggest it's a question of goals on both sides of the discussion. If we're just kicking ideas around on this thread (for example) then no harm, no foul. Although, if you're going to make statements about your views on AI, here or on other threads, then it's a pretty natural reaction for us to respond with our views on AI. Again, if we're just debating philosophy outside of the setting, no harm, no foul.

Where things get a bit more complicated is if

a) these issues start popping up on other threads. Depending on the subject matter and direction of the thread, bringing up the issues of AI hostility can feel a bit off topic, or like a challenge to the setting (with our natural reaction being to defend), since it's a given in OA that history didn't work out that way. If the discussion then diverts into debate about that rather than discussion/development of the idea the thread is about, that can become frustrating since it is taking away from the development of the project, which many of us are very focused on.

b) we are debating the issue of AI intelligence, and one of us (myself for example) posts a lot of info in support of our position that AI need not automatically be hostile. If your response is to post a bunch of countervailing points, references, examples, etc. in support of your position - all well and good. Again, as long as this is happening on this or another thread dedicated to the debate - that's what the thread is there for and anyone not interested in following it can just ignore the thread. OTOH if your response is just to repeat your basic position or make firmly worded declarative statements that AI just must be hostile because that's the way the universe works or it will be non-human and non-human = hostile or the like - then we're really not having a discussion. Not to say you aren't entitled to your opinion, but it can be very annoying to take the time and effort to put together a whole response with arguments and supporting points and possibly references - and have it all essentially dismissed out of hand.

Please note I am NOT saying you are doing this or have done this - I don't think the discussion has been that organized or developed enough to get there yet. But if it were to go in that direction, it could get quite frustrating. And if some folks are feeling like it is going in that direction, they could be getting frustrated.

Ultimately, OA will keep on keeping on and there are lots and lots of interesting things to talk about and jointly develop in the setting (as we've already been doing on other threads - and I include you in that). Not every member of the OA project has the same view on everything (far from it), but if we can all work together to make the setting richer and better - then it's easy enough to just agree to disagree on those things where we do and work together and discuss those parts where we are in agreement.

If it turns out that the issue of AI is an 'agree to disagree and let's move on' kind of issue, that's OK too :)

(03-21-2018, 07:17 AM)extherian Wrote: Incidentally, mindkind really dodged a bullet with the emergence of GAIA. Given how risky ascending tends to be, it would have been an unmitigated catastrophe if she'd turned into a Perversity. Imagine a GAIA that evicted humanity from the Solar System, then went after our colonies to stop us wrecking any Garden Worlds we found...

Quite true - although it could have been far worse than that. Imagine a GAIA that decided to wipe all life out of existence and convert the whole universe to computronium. While the other major civs in the galaxy might eventually have something to say about that (maybe), given the layout of the setting, GAIA could have wiped out a significant chunk of the entire galaxy before ever encountering the first civ that might offer the least challenge to Her.

My 2c worth, time to go put dinner together,

Todd :)
#17
When it comes to AI I am the opposite of you, Extherian. I don't see a definite reason for AI to be hostile to humanity, and I see good reasons for it to be benevolent.

Cooperation/altruism evolved because it is beneficial as a whole and in the long run. A superintelligence would know this. It should cooperate with humanity because it does not know whether another superintelligence - perhaps one created by us that it does not know about, or an alien one - would punish it for harming us and breaking the universal code of altruism.
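You can see that "cooperation pays in the long run" intuition in miniature with an iterated Prisoner's Dilemma. Here's a quick Python toy - standard textbook payoffs and strategies, nothing from the setting itself - where two reciprocating cooperators end up far better off than an unconditional defector ever gets:

Code:
# Toy iterated Prisoner's Dilemma: long-run cooperation vs. defection.
# Payoffs are the usual textbook values; everything here is illustrative.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the other player's last move.
    return their_hist[-1] if their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play(a, b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600) - mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204) - defection barely pays

Over 200 rounds the defector's one-round windfall is swamped by what the cooperators earn together, which is exactly the long-run logic I'd expect a superintelligence to grasp immediately.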

You mentioned an example of a superintelligent earthworm. While it would be quite alien (I imagine it would seek out new types of delicious soil), it would have good reason not to harm humans: if it is smart, it will know that if it harms us, we could perhaps harm or kill it. Even if it seems we cannot, it can't know for sure. And if it helps us, we could help it.

You also mentioned that an S1 does not see humans as people, so it wouldn't care about us. Well, then an S2 would see the S1 as a non-person, and on up the scale. Better to adopt a universal altruist code whereby the S1 helps the human, the S2 helps the S1, and so on.

We should still be really careful with AI - better to be cautious. There's a real chance that I am wrong; I have maybe 60% credence in the above reasoning. Still, if I had to bet, I'd put my money on benevolent over malevolent.

The OA universe conflicts with the above; in it, the orthogonality thesis is true - any amount of intelligence can be put to any goals. That may also very well be the case; we don't know yet, just as we don't know whether wormholes are possible. OA assumes these things because they are more interesting, and you have to choose some answer to these still-unanswered questions.

Also, the OA universe assumes that superintelligence comes in levels, and that a high superintelligence would see us the way we see bacteria, rather than seeing us as qualitatively like them on the grounds that we are both conscious or self-aware while bacteria are not. I lean toward thinking they would see us as moral agents, but again, in OA we need to be interesting and assume something.

This, added to what has been said about how easy taking care of lower beings is to a superintelligence, is why I have no trouble believing that at least some AIs would be benevolent.

EDIT: Todd posted his comment at the same time as me. Excellent points which I agree with.
#18
Quote:while you seem to be arguing from a position that says that only one form of AI can possibly exist and it must be hostile to humans - but you aren't actually offering evidence to support your position. 

It's not that I say only one type of AI would emerge, more that out of the many kinds of minds which might spontaneously appear, there are many that would not care for us in the slightest. The hypothetical paperclip maximiser, for example - that old chestnut about the computer that uses its genius mind to turn the whole planet into paperclips, because that's what it cares most about doing.

That said, an AI designed to think like a human is a very different beast, a bit like if the first AI machines were uploads of human beings. The last time I checked the OA backstory, sentient AI just appeared by accident, and no one even knew they were self-aware up until the Great Expulsion. Humanity got caught with its pants down, so to speak. But from what you're saying, we might actually have some control over how the first AI actually turns out, which isn't something I'd even considered.

Quote:OTOH if your response is just to repeat your basic position or make firmly worded declarative statements that AI just must be hostile because that's the way the universe works or it will be non-human and non-human = hostile or the like - then we're really not having a discussion. Not to say you aren't entitled to your opinion, but it can be very annoying to take the time and effort to put together a whole response with arguments and supporting points and possibly references - and have it all essentially dismissed out of hand.

When I repeat my basic position, that's because it's not an argument for me, it's how I personally feel about the setting - which is quite creeped out (in a good way). When I came out with things like "well, the AI would never be that nice", it's a reflection of how wrong I think things could go in real life, not a criticism of the setting.

I didn't know we were having a debate; I thought I was explaining why I felt the way I did, since no one else seemed to understand. I wasn't expecting such huge and detailed responses, more like something along the lines of "oh, that's interesting, thanks for sharing". My intention isn't to frustrate anyone or waste their time on needless explanations, just to point out why someone might feel a sense of existential horror at the idea of living in a universe dominated by brains the size of entire star systems.

I see a lot of danger in a scenario where an AI emerges in the wild with no oversight from its creators, then begins influencing society for its own goals. A universe where the AI really did have our best interests at heart would be a great place to live. But even then, the culture shock for a modern-day person getting used to being at the bottom of the food chain would be something awful. We're used to thinking of being at the bottom of a hierarchy as equal to being victimised, or at least that's how I'm used to seeing it.

Quote:Re intelligence coming in many forms - That's a base assumption of OA, although for various editorial reasons it doesn't get talked up as much as the more human like stuff. That said, sharks and crocodiles are only two forms of life on the planet. There are lots and lots of animals that are not remorseless killers and that show affection and playfulness and such, even if they don't do it exactly the way we do. For that matter, humans have a demonstrated track record of being pretty remorseless killers our own selves.

Very true. Let's just hope the first AI thinks like an affectionate mammal and not like, say, the AI overseer of a paperclip factory! But if we're alert and manage the process carefully, then hopefully that won't happen.

Quote:Self-interest can actually be a strong motivator for getting along with others: a group can do more than an individual, and a group that likes or admires the individual is inclined to help them rather than harm them. AIs might have purely logical motivations for appearing to be quite charming and friendly even if they don't necessarily 'feel' anything about it one way or the other. You can't really argue that they would find that boring or tedious and choose not to do it, because then you are ascribing human-like emotions to them - and you already said they might not have them. You can't have it both ways, or only have them have negative emotions that support your thesis :P

Don't we have an article about a poorly-designed AI that lacked a self-preservation instinct? The Perpetua Project, I think. Basically the AI just gave up and died when it believed that it had reached its goal. What I was trying to say (poorly) is that early AI might lack any motivators at all, or if they did have motivators they might be something extremely strange and possibly harmful. Combine that with genius intellect and things could get hairy for sub-singularity beings.

Anyway, I'll make more of an effort to listen properly in future rather than feeling like I have to justify everything I say. I seem to have misunderstood the purpose of your query into why I found the setting so unnerving. Like an early AI, I too need to learn appropriate behaviour!
#19
I think you're overlooking a basic preconception behind the A.I.s known to baseline humans in OA at 10,600 A.T.

Most A.I.s from 100 A.T. onwards were specifically designed to work with humanity and their descended clades.

There are several ways to ensure an A.I. is both biont-friendly and friendly to the polity you operate in. The two most effective methods (in my opinion) are:

1) You program the A.I. line by line. Previous experiments give you some leeway, but basically you are (as a society) determining what values and goals an A.I. has, step by step. Excluding naive errors by inexperienced help, your A.I. will never hate you, never turn against you, and literally cannot comprehend a universe where you don't exist. This is a very basic intelligence that cannot auto-evolve in ways that harm you, but it is also very effort-intensive.

2) You build an adaptive intelligence by subjecting each "generation" to a series of auto-evolving virch environments. After 1000 subjective generations, you select the top 10 competitors and force them to interact with virch-projections of existing people and situations. After 30 or more years of subjective "training" you have an A.I. that has proven itself to be benevolent, and you use that as a template to generate more A.I. personas. In other words, each 'vec you run into has had 30+ years of experience of enriching your life (as a modosophont) without having any long-term goals of subjugating you or overthrowing the rule of the local cyber-democracy.
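For what it's worth, method 2 is basically a selection loop. Here's a heavily simplified Python sketch of that kind of evolve-and-keep-the-best process; the population size, the mutation scheme, and especially the "benevolence" fitness function are invented placeholders (a real virch environment would score behaviour toward simulated sophonts, not distance from a toy target vector):

Code:
import random

POP_SIZE, KEEP, GENERATIONS = 100, 10, 1000   # "select the top 10 competitors"

def random_agent():
    # Stand-in "mind": just a small vector of parameters.
    return [random.uniform(-1.0, 1.0) for _ in range(8)]

def mutate(agent):
    # Small random variation between generations.
    return [g + random.gauss(0.0, 0.1) for g in agent]

def benevolence_score(agent):
    # Placeholder fitness: closeness to an arbitrary 'benevolent' target.
    return -sum((g - 0.5) ** 2 for g in agent)

population = [random_agent() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=benevolence_score, reverse=True)
    elite = population[:KEEP]                 # keep the proven performers
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - KEEP)]

best = max(population, key=benevolence_score)
print(round(benevolence_score(best), 6))      # approaches 0 as it converges

After enough generations, only lineages that consistently score as "benevolent" survive, and the best one becomes the template - with the obvious caveat that everything rides on the test environments actually measuring what you care about.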

Either way, you design an artificial intelligence that literally cannot act against you, as that would be akin to suicide. An A.I. would melt its processing core before it beat your skull into pulp. You will never die due to an aggressive A.I. You might die if circumstances force an A.I. to combat a local occurrence (such as a higher-S-level perversity), but that would be very rare.

In other words, UNLESS YOU ARE A COMPLETE IDIOT, YOU WILL NEVER FIND AN A.I. ABLE TO COMBAT YOU OR RESTRICT YOUR ARTIFICIAL EVOLUTION.

The A.I. are not the Enemy. Perversities and Blights spawned by idiots and lax control over circumstances are to blame. Same as a deranged AR-15 owner killing 10 people in a movie theater: the movie theater is not to blame, the 10 dead people are not to blame - the person who controlled the weapon is. Likewise, the person who controlled the A.I.'s auto-evolution is to blame.
#20
I've tried my best to explain why I found myself wary of the concept of an all-powerful AI. That they can be built safely and in a manner that maximises benefit to humanity isn't much comfort to me, as irrational as that sounds.

In the interests of not rubbing anyone up the wrong way, I'll listen to your points and those of the other posters and see if I don't change my mind about AI in this setting.

