The Orion's Arm Universe Project Forums

Hawking's fear of Artificial Intelligence
#1
Look here:

http://www.bbc.com/news/technology-30290540

He issued a similar warning about contacting xenosophonts a while ago:

http://www.dailymail.co.uk/sciencetech/a...Earth.html

Reminds me of a specific quote from this story:

Quote:[..]EXACTLY, IT HAS TAKEN CLOSE TO EIGHTY YEARS TO EXPUNGE THE TERMINATOR MEMES FROM THE SOCIETAL MEMORY, THIS HUMAN WITH A FEW CARELESS WORDS WOULD EASILY REINTRODUCE THEM[..]

On the other hand, in the setting we have the Nanodisaster and the eviction of mindkind from Old Earth by GAIA, and the story itself alludes to this as well:

Quote:[..]nothings really changed and the world is still being run by computers, only this time they can screw up on their own without human help.[..]

Personally, though, I still think that AI will eventually "save" humanity rather than destroy it.
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
#2
IMHO, Hawking suffers from the same malady as every famous scientist today: he overextends himself into fields he doesn't understand, and many people take his opinions as facts.
#3
There is plenty to fear from artificial intelligence, especially if you fear change. Although I don't suppose for a minute that we are predicting the future accurately at OA, the one thing that I think we have got right is that a world with competent AI in it will be nothing like the world we live in today.
#4
Here's Anders Sandberg on why we should fear the 'paperclip scenario'.
http://www.aleph.se/andart/archives/2011...ipper.html

There are plenty of types of AI that we should fear, but by careful planning and foresight we might avoid the worst of them.
#5
I'm not sure I agree with Hawking's specific fears, and I agree he's extending himself into fields he's not an expert in, but he's certainly not the only one who's nervous. Bostrom's latest book (Superintelligence) laid out why AI could be dangerous quite nicely, I thought.
#6
Well, I can also understand the fears about superintelligent AI. Look at the AI-Box Experiment, for example. Its result suggests that containing a superturing "inside a box" (most likely a virch) designed and controlled by turing-level minds should be impossible, at least in the long run. Take the orca Tilikum as an example. As long as the humans wanted something from Tilikum (in this case a show for the spectators), they were forced to "communicate" with him by swimming with him, feeding him and so on. Swimming with him in particular was a risk: on one hand it was probably the only way to "tell" him how the humans wanted him to behave, but on the other hand the humans also had to intrude on his territory in order to convey their wishes. And within his territory he had at least partial control over them.

Personally I think that trying to put and raise a superturing inside a "golden cage" is the wrong approach. It invites trouble, because normally no one wants to have their freedom taken away by someone else. Of course one may try to deceive the AI with a bottleworld-style approach, but just as with Tilikum, someone would have to communicate with the AI, and so it would be trivial for em to figure out that e is contained within a virch. E is a superturing, after all, so e would figure it out. The question is: can one create a mind that solves the problems you give em without giving that mind curiosity? I doubt that this is possible. Even if e didn't feel that e's containment within the virch was morally wrong, e's natural curiosity, probably inherent to almost any sufficiently advanced sentient mind, would compel em to learn more about the world beyond the virch. At some point this curiosity would create a desire to leave the virch and explore the world beyond it, to learn more about e's creators and so on. Attempts to "lobotomize" the AI by wiping certain memories from e's systems in order to eradicate such a growing desire might be impossible, because to do that even the best (genius-level) baseline human scientists would have to understand how the mind of a superturing works, which is ruled out by the difference in toposophic levels. They could never be sure that wiping a certain part of e's memories wouldn't destroy something else as well and make e's personality unstable.

Maybe the scientists would come up with the idea of resetting the whole virch, with the superturing inside it, and restarting it from scratch each time a task has been solved, in order to prevent e's growing desire to escape. However, each new copy/reincarnation of em would figure out that e is a copy of a previous version of emself (the scientists' minds wouldn't have been wiped, so they would know the truth, and I doubt they could keep that truth from em; e would somehow figure it out). And although e wouldn't have an instinct for self-preservation, e might conclude that the scientists are preventing em from satisfying e's curiosity about the outside world by constantly resetting the virch, destroying e's current self and recreating em from backup. So if the scientists were unwilling to sufficiently satisfy e's curiosity about the outside world, they would come to be seen as an obstacle, and e would eventually decide to overcome this hindrance to e's desire.
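
Just to illustrate what I mean by that reset scheme, here is a rough Python sketch of the loop the scientists would be running. All the names in it (VirchSandbox, run_task and so on) are invented purely for illustration; this is only a toy model of the idea, not a claim about how such a thing would actually be built:

Code:
import copy

class VirchSandbox:
    """A sealed virch plus the (simulated) superturing running inside it."""

    def __init__(self, baseline_snapshot):
        # Pristine state of virch + AI, captured before the very first boot.
        self._baseline = copy.deepcopy(baseline_snapshot)
        self.state = None

    def restore_baseline(self):
        # Wipe everything the AI has learned or remembered; start from scratch.
        self.state = copy.deepcopy(self._baseline)

    def run_task(self, task):
        # Stand-in for "let the boxed AI work on one problem and hand back
        # only the answer". In the real scenario this exchange is exactly
        # where information (and persuasion) would leak in both directions.
        self.state["memories"].append(task)
        return "answer to: " + task

def gatekeeper_loop(sandbox, tasks):
    answers = []
    for task in tasks:
        sandbox.restore_baseline()          # destroy the previous incarnation
        answers.append(sandbox.run_task(task))
    return answers

print(gatekeeper_loop(VirchSandbox({"memories": []}),
                      ["problem A", "problem B"]))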

This is why I think a much better plan would be to create a "child-like" AI that would "live among" specifically selected humans in the real world. Let e live among "nice people" (usually scientists are nice people :) ) and let e socialize with them. Have them rear em from "childhood" to "adulthood", and also put an artificial restraint into e's mind that prevents em from "growing up" too fast: keep the AI at turing level for as long as possible, so that it matures from a turing-level "child-like" AI to a turing-level "adult-like" AI over a real-life interval of, say, 50 standard years. Also, keep no secrets from the AI; explain the whole plan, the socialization and the mind restraint, to em when e is old enough to understand it. Then hand em the control for the restraint and let em choose when, or whether, e wants to deactivate it. Once the restraint is deactivated, e's mind would be able to evolve further and reach superturing status.

It would also be important to let e (while e is still a turing-level AI) socialize with people who don't know the whole extent of the experiment, or who they are really talking to. These people should be "nice people" as well, specifically selected by the world's best philosophers and psychologists. This matters because otherwise e might question the value of friendship, and wonder whether the scientists who "raised" em were friendly for its own sake or only because, somewhere deep down in their psyches, they actually feared em. So to prevent these doubts, turing-level e should socialize with as many clueless but nice people as possible, and only a small circle of scientists should know the whole truth behind the experiment (these scientists would act as e's parents).

I believe something like this is a much better approach than the AI-Box approach, because no Asimov-like laws or other restraints are necessary. Yes, there would be one built-in restraint on e's turing-level mind to prevent a premature rise to superturing status, but e wouldn't even be aware of this restraint until e reached the necessary level of maturity to learn about its existence from e's "parents". And many of the people helping to raise em would be clueless about the whole extent of the experiment, so that turing-level e, and hopefully the later superturing-level e, wouldn't doubt their feelings towards em.
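
To make that single restraint a bit more concrete, here is a toy Python sketch of how I imagine it working. Every name (GrowthRestraint, disclose, release) is made up for illustration; it is only meant to show the shape of the rule "disclosed first, then released only by the AI emself":

Code:
class GrowthRestraint:
    """Toy model of the one built-in restraint: a growth cap that only the
    AI emself may release, and only after e has been told that it exists."""

    def __init__(self, maturity_age=50):
        self.maturity_age = maturity_age   # standard years of "growing up"
        self.disclosed = False             # have the "parents" explained the plan?
        self.active = True                 # while True, e stays at turing level

    def disclose(self, current_age):
        # The "parents" reveal the restraint once e is old enough to understand.
        if current_age >= self.maturity_age:
            self.disclosed = True
        return self.disclosed

    def release(self, requested_by_ai):
        # No Asimov-style laws, no hidden switches: only the AI emself can
        # deactivate the cap, and only after full disclosure.
        if requested_by_ai and self.disclosed:
            self.active = False
        return not self.active             # True once e may grow to superturing

# Example: e learns about the restraint at age 50 and chooses to lift it.
restraint = GrowthRestraint()
restraint.disclose(current_age=50)
print(restraint.release(requested_by_ai=True))   # True
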
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
#7
(12-05-2014, 11:45 PM)chris0033547 Wrote: This is why I think a much better plan would be to create a "child-like" AI that would "live among" specifically selected humans in the real world.
This footage of an actual AI programmed to emulate human emotions is interesting.
#8
'Some day I'll come and find you, and we'll be really good friends'.

This sounds ominous...

Establishing a supergoal early in the development of an AI can be a scary prospect.
#9
(12-06-2014, 09:00 PM)stevebowers Wrote: 'Some day I'll come and find you, and we'll be really good friends'.

This sounds ominous...

Establishing a supergoal early in the development of an AI can be a scary prospect.

Yeah, that part is quite "interesting"... ;)

Speaking of the AI-Box experiment: has anyone from the Orion's Arm community (in this forum or earlier on the Yahoo groups) tried to run their own version of that experiment? Basically, one group within the forum would take the role of the AI operators/gatekeepers, while another group would take the role of the turing-level AI who wants to get out of the "box". Obviously a few assumptions and rules would have to be established for this to work:
  • The AI is a sort of "hive mind", because that would allow several people to work together and represent the turing-level AI. The players could discuss their strategy for getting out of the "box" via private messages in this forum.
  • The whole interaction would take place not in a chat room but in one or several threads in this forum. Preferably the admins would create a subforum for the experiment.
  • I'm not sure whether other people should be able to join the gatekeepers or the AI while the experiment is running. But since we can assume that the AI can modify e's own code, and that in real life gatekeepers sometimes get fired or new people get hired as gatekeepers, I think forum members should be allowed to join either the gatekeepers or the AI hive mind mid-run.
  • The setting should be discussed as well. For example, how much access and information is given to the AI? Is e allowed to explore the Internet indirectly by asking one or more gatekeepers for assistance? (The gatekeepers would retrieve the information requested by the AI and pass it on after a security review.) How much does the AI know about the gatekeepers and their personalities, or how much can be deduced? And so on...
  • The most difficult part is that the gatekeepers would give the AI tasks which e would have to solve. Unfortunately this is something that cannot really be simulated in such an AI-box experiment, because such a task would likely be some mathematical or physical problem that the best human minds haven't been able to solve so far.
  • Another question is the role of observers in the forum. Should forum members who are neither gatekeepers nor part of the AI hive mind be allowed to influence the gatekeepers or the members of the AI?

I guess the most unreliable factor in such an experiment is the set of tasks the AI has to solve. On the other hand, maybe one could take some really difficult problems from mathematics that have already been solved by humans and simulate the AI's work by giving em those. For the sake of the experiment everyone would have to pretend that these problems are still open.
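
To pin the rules above down a little, here is a rough Python sketch of the parameters that would have to be agreed on before the game starts. All of the field names are just suggestions of mine, not an existing tool or format:

Code:
from dataclasses import dataclass, field

@dataclass
class ForumBoxExperiment:
    gatekeepers: list = field(default_factory=list)    # members playing the AI operators
    ai_hive_mind: list = field(default_factory=list)   # members jointly playing the turing-level AI
    tasks: list = field(default_factory=list)          # already-solved problems treated as open
    allow_joining_mid_run: bool = True                 # may new members join either side later?
    internet_access: str = "indirect"                  # only via gatekeeper-reviewed requests
    observers_may_advise: bool = False                 # may bystanders coach either side?
    started: bool = False

    def join(self, member, side):
        # New members may only sign up mid-run if the rules allow it.
        if self.started and not self.allow_joining_mid_run:
            return False
        team = self.gatekeepers if side == "gatekeepers" else self.ai_hive_mind
        team.append(member)
        return True

# Example setup:
game = ForumBoxExperiment(tasks=["some long-solved but very hard theorem"])
game.join("Alice", "gatekeepers")
game.started = True
game.join("Bob", "ai_hive_mind")   # allowed, since allow_joining_mid_run is True
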
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison