The Orion's Arm Universe Project Forums

Better late than never
#16
(03-21-2018, 07:17 AM)extherian Wrote: Because they're not human. They don't share ancestry with any living creature. There's no reason why a supercomputer that somehow became self-aware would have anything in common with us, not even the most basic of emotions. Why would a being whose ancestry hasn't included the evolutionary pressure of needing to cooperate to survive have empathy? Being able to model human minds in great detail doesn't count, because there's a world of difference between being able to predict that someone will feel a certain way and actually feeling that way yourself.

Short answer - because we will build them to be human/get along with humans/have empathy and whatever other traits we deem desirable. At least when we get to the point of building what might be considered an 'AI as person' or 'artificial general intelligence'. Which is to say that if/when we gain the ability to produce a fully intelligent, self-aware artificial intelligence capable of independent and abstract thought, we will also likely build or raise it in a way that makes it as human as possible.

While this is not an area I follow as closely as I would like, from what I understand, in some respects AI already exists and is all around us all the time. From AI programs that trade stocks at computer speeds on Wall Street to 'friendly' systems like Alexa, Siri, and Google Assistant that handle natural language commands, we've developed a host of 'intelligent' (but not self-aware, as far as we know) systems that are anywhere from pretty good to very good in a very narrow to fairly narrow field of endeavor. But they don't think in the sense we normally mean by that word, nor are they self-aware or capable of generalizing beyond their function.

However, there is also anticipation in the AI research field that 'true' AI will eventually be created, and a fair number of people think it will need to be 'raised' and grow up, not just be created fully formed. Some researchers are also looking into creating 'friendly' AI and into giving AIs emotions like humans have, so they can better relate to us. At this point this involves a lot of work on figuring out what emotions even are and how to produce them in a cybernetic device, but the work is being done.

I think a base assumption you seem to be making here is that AI will just 'appear', along the lines of a given computer or network evolving to self-awareness à la Skynet in the Terminator movies. I don't personally find that likely, and think it more probable that humans will create the first 'true' artificial intelligences, one way or the other.

Certainly in the OA setting, that is how it happened. That said, the early AIs were something of a first demonstration of the existence and scope of the 'Toposophic Landscape': some of them used very different thought processes to reach a given conclusion, even if it looked like the same conclusion humans would reach in the same situation, and some of them were just plain strange. But as the state of the art improved, humans got better both at developing 'human-like' (or at least human-relatable) AIs and at learning to work with and relate to the less human ones.

(03-21-2018, 07:17 AM)extherian Wrote: It would be like if a nest of ants or an earthworm somehow became more intelligent than a human being. For all we know, their mindset might be so alien they might not even realise humans are sentient. It's not that I think that AI would be evil, more like they might not even have the most basic curiosity about their fellow beings, and would inadvertently trample over anything that got in their way.

In other words, I understand that the AI are benevolent because the OA authors chose to write them that way, but I don't think it would work out like that in real life. I found it baffling that the setting even included Pro-Human AIs. Once the first AIs hit S1, they'd see humans as the dumb animals we are and not as people, because compared to S1 beings we really aren't people.

Various things here -

1) Many (not all) humans put quite a lot of effort into looking after the well-being of dumb animals, or even fish, insects, or plants. And many claim empathy for the animals/bugs/plants to one degree or another.

2) Many (not all) humans will go out of their way to avoid harming other animals, even insects. I avoid stepping on bugs if at all possible. My mother will either ignore spiders or pick them up in a glass or something and release them outside (I do the same, actually). I will swat a mosquito, but my mom and sister will both ignore them. Some people go even further or approach this in other ways, such as vegetarians and vegans choosing not to eat animal flesh in part for ethical reasons. And some environmentalists argue their positions from an ethical or moral standpoint, rather than just self-interest.

3) Self-interest can actually be a strong motivator for getting along with others, because a group can do more than an individual, and a group that likes or admires an individual is inclined to help them rather than harm them. AIs might have purely logical motivations for appearing quite charming and friendly even if they don't necessarily 'feel' anything about it one way or the other. You can't really argue that they would find that boring or tedious and choose not to do it, because then you are ascribing human-like emotions to them - and you already said they might not have them. You can't have it both ways, or only have them have the negative emotions that support your thesis :P

4) As mentioned above - humans created the first AIs in the setting and, as part of their development, pushed them in directions that let them relate to humans (sometimes with mixed success).

5) Speaking of mixed success, there were AIs that were very inhuman, and some of them went on to become the ahuman AIs that did want to destroy humanity. The pro-human AIs didn't want that, and the result was a small war, which the pro-human AIs won, driving the ahumans out of the Solar System. So we do have anti-human AIs in the setting - but they are not the sum total of all AIs.

I think this might be another point where we differ - we are saying that many types of human-AI relations are possible/came about in the setting (partly for editorial reasons, partly because it's just more fun that way), while you seem to be arguing from a position that only one form of AI can possibly exist and that it must be hostile to humans - but you aren't actually offering evidence to support that position. I've listed a bunch of supporting points for the idea that AI could be human-friendly, or at least diverse, so part of what would move the discussion forward is the question of what the countervailing evidence is.

(03-21-2018, 07:17 AM)extherian Wrote: A good way to summarise my perspective on the setting is that if the Singularity ever happens in real life, then we won't be around to ask that question, or not for long at any rate.

I believe there are far more ways intelligent life can develop than what we're used to, like the way a crocodile or a shark views the world. Those animals get by just fine as remorseless killers. Why would an AI resemble us more than them? Chances are we'll have developed AI to solve abstract mathematical problems or track the trajectory of hurricanes, not to understand and show compassion. Russia has a supercomputer dedicated to modelling war scenarios. Who knows what could come of a sentient AI developed for that same purpose?

Re the Singularity - maybe. But there are lots of possible Singularity scenarios, and not all of them involve AIs becoming self-aware (that scenario just gets the most press, for various reasons). And even in the ones in which AIs do become self-aware - if they are truly intelligent beings with free will, then by definition they have the ability to choose how to behave. That may involve trying to destroy humanity - or not - possibly for reasons such as I've listed above, possibly for reasons we can't understand (which is an element of OA, btw).

Re intelligence coming in many forms - that's a base assumption of OA, although for various editorial reasons it doesn't get talked up as much as the more human-like stuff. That said, sharks and crocodiles are only two forms of life on the planet. There are lots and lots of animals that are not remorseless killers and that show affection and playfulness and such, even if they don't do it exactly the way we do. For that matter, humans have a demonstrated track record of being pretty remorseless killers our own selves.

(03-21-2018, 07:17 AM)extherian Wrote: Really? I seem to aggravate people when I try to explain my views on the setting. I didn't intend for this to turn into an argument, but if no one's bothered by it then it's all well and good. I'm not pooh-poohing the OA universe or claiming the editorial direction is wrong or anything.

I would suggest it's a question of goals on both sides of the discussion. If we're just kicking ideas around on this thread (for example), then no harm, no foul. Although, if you're going to make statements about your views on AI, here or on other threads, then it's a pretty natural reaction for us to respond with our views on AI. Again, if we're just debating philosophy outside of the setting, no harm, no foul.

Where things get a bit more complicated is if

a) these issues start popping up on other threads. Depending on the subject matter and direction of the thread, bringing up the issue of AI hostility can feel a bit off topic, or like a challenge to the setting (with our natural reaction being to defend it), since it's a given in OA that history didn't work out that way. If the discussion then diverts into a debate about that rather than discussion/development of the idea the thread is about, that can become frustrating, since it takes away from the development of the project, which many of us are very focused on.

b) we are debating the issue of AI intelligence, and one of us (myself, for example) posts a lot of info in support of our position that AI need not automatically be hostile. If your response is to post a bunch of countervailing points, references, examples, etc. in support of your position - all well and good. As long as this is happening on this or another thread dedicated to the debate, that's what the thread is there for, and anyone not interested in following it can just ignore the thread. OTOH, if your response is just to repeat your basic position, or to make firmly worded declarative statements that AI just must be hostile because that's the way the universe works, or that it will be non-human and non-human = hostile, or the like - then we're really not having a discussion. Not to say you aren't entitled to your opinion, but it can be very annoying to take the time and effort to put together a whole response with arguments, supporting points, and possibly references - and have it all essentially dismissed out of hand.

Please note I am NOT saying you are doing this or have done this - I don't think the discussion has been that organized or developed enough to get there yet. But if it were to go in that direction, it could get quite frustrating. And if some folks are feeling like it is going in that direction, they could be getting frustrated.

Ultimately, OA will keep on keeping on, and there are lots and lots of interesting things to talk about and jointly develop in the setting (as we've already been doing on other threads - and I include you in that). Not every member of the OA project has the same view on everything (far from it), but if we can all work together to make the setting richer and better, then it's easy enough to agree to disagree on the things where we differ and work together on the parts where we are in agreement.

If it turns out that the issue of AI is an 'agree to disagree and let's move on' kind of issue, that's OK too :)

(03-21-2018, 07:17 AM)extherian Wrote: Incidentally, mindkind really dodged a bullet with the emergence of GAIA. Given how risky ascending tends to be, it would have been an unmitigated catastrophe if she'd turned into a Perversity. Imagine a GAIA that evicted humanity from the Solar System, then went after our colonies to stop us wrecking any Garden Worlds we found...

Quite true - although it could have been far worse than that. Imagine a GAIA that decided to wipe all life out of existence and convert the whole universe to computronium. While the other major civs in the galaxy might eventually have something to say about that (maybe), given the layout of the setting, GAIA could have wiped out a significant chunk of the entire galaxy before ever encountering the first civ that might offer the least challenge to Her.

My 2c worth; time to go put dinner together.

Todd :)