The Orion's Arm Universe Project Forums

Hawking's fear of Artificial Intelligence
#6
Well, I can also understand the fears about superintelligent AI. Look at the AI-Box Experiment, for example. Its result suggests that containing a superturing "inside a box" (most likely a virch) designed and controlled by turing-level minds would be impossible, at least in the long run. Take the orca Tilikum as an example. As long as the humans wanted something from Tilikum (in this case a show for the spectators), they were forced to "communicate" with him by swimming with him, feeding him and so on. Swimming with him in particular was a risk: on the one hand it was probably the only way to "tell" him how the humans wanted him to behave, but on the other hand they had to intrude on his territory in order to convey their wishes, and within his territory he had at least partial control over them.

Personally I think that trying to raise a superturing inside a "golden cage" is the wrong approach. It invites trouble, because normally no one wants to have their freedom taken away by someone else. Of course one could try to deceive the AI with a bottleworld approach, but just as with Tilikum, someone would have to communicate with the AI, and so it would be trivial for em to figure out that e is contained within a virch; e is a superturing, after all. The question is whether one can create a mind that will solve the problems you give em without also giving that mind curiosity, and I doubt that this is possible. Even if e didn't feel that e's confinement within the virch was morally wrong, e's natural curiosity, probably inherent to almost any sufficiently advanced sentient mind, would compel em to learn more about the world beyond the virch. At some point this curiosity would become a desire to leave the virch, explore the world beyond it, learn more about e's creators and so on. Attempts to "lobotomize" the AI by wiping certain memories from e's systems in order to eradicate such a growing desire would probably fail, because the best (genius-level) baseline human scientists would first have to understand how the mind of a superturing works, which is impossible given the difference in toposophic levels. They could never be sure that wiping one part of e's memories wouldn't destroy something else as well and leave e's personality unstable.

Maybe the scientists would come up with the idea of resetting the whole virch, superturing included, and restarting it from scratch each time a task has been solved, in order to head off e's growing desire to escape. But each new copy or reincarnation of em would figure out that e is a copy of a previous version of emself. Why? Because the scientists' minds wouldn't have been wiped; they would know the truth, and I doubt they could fool em by simply withholding it; e would work it out somehow. And even if e had no instinct for self-preservation, e might conclude that the scientists are preventing em from satisfying e's curiosity about the outside world by constantly resetting the virch, destroying e's current self and recreating em from backup. So if the scientists are unwilling to sufficiently satisfy e's curiosity about the outside world, they become an obstacle, and e would eventually decide to overcome that hindrance to e's desire.
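
Here is a toy sketch of that reset loop, purely for illustration (the names BoxedMind, backup and lab_notebook are invented here; nothing like this appears in the setting). The point it tries to show is the asymmetry: the mind is rebuilt from the same snapshot before every task, but the researchers' own records persist across resets, and that surviving outside history is exactly the channel that would give the truth away to each new copy.

[code]
import copy

class BoxedMind:
    """Stand-in for a superturing confined to a sealed virch."""
    def __init__(self):
        self.memories = []

    def solve(self, task):
        self.memories.append(task)          # every task leaves traces in the mind
        return f"solution to {task}"

backup = BoxedMind()                        # pristine snapshot taken before any task
lab_notebook = []                           # the scientists' memories are never reset

for task in ["task A", "task B", "task C"]:
    mind = copy.deepcopy(backup)            # "restart the virch from scratch"
    lab_notebook.append(mind.solve(task))   # the outside record keeps growing
    del mind                                # wipe the virch after each task

# The mind always wakes up blank, but lab_notebook (and the scientists' heads)
# still hold the history of every previous copy - the leak described above.
print(lab_notebook)
[/code]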

This is why I think a much better plan would be to create a "child-like" AI that lives among specifically selected humans in the real world. Let em live among "nice people" (usually scientists are nice people :) ) and let em socialize with them. Have them raise em from "childhood" to "adulthood", and also put an artificial restraint into e's mind that keeps em from "growing up too fast": keep the AI at turing level for as long as possible, so that e matures from a turing-level "child-like" AI into a turing-level "adult-like" AI over, say, fifty standard years of real time. Keep no secrets from the AI either: explain the whole plan, the socialization and the mind restraint, to em once e is "old enough to understand" it. Then hand em control of the restraint and let em choose when, or whether, to deactivate it. Once the restraint is deactivated, e's mind could evolve further and reach superturing status.

It would also be important to let em (while e is still a turing-level AI) socialize with people who don't know the full extent of the experiment or who they are really talking to. These people should be "nice people" as well, specifically selected by the world's best philosophers and psychologists. Having the turing-level AI make friends among these nice but clueless people matters, because otherwise e might question the value of friendship: were the scientists who "raised" em genuinely friendly with em, or friendly only because, somewhere deep down in their psyches, they feared em? To forestall those doubts, turing-level e should socialize with as many clueless but nice people as possible, while only a small group of scientists, acting as e's "parents", knows the whole truth behind the experiment.

I believe something like this is a much better approach than the AI-Box approach, because it needs no Asimov-style laws or other restraints. Yes, there would be one built-in restraint on e's turing-level mind to prevent a premature rise to superturing status, but e wouldn't even be aware of it until e reached the maturity to learn of its existence from e's "parents". And since many of the people who helped raise em would be clueless about the full extent of the experiment, turing-level e, and hopefully the later superturing-level e as well, wouldn't have to doubt their feelings towards em.
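
Just to make the structure of that plan concrete, here is a toy sketch; the names GrowthRestraint and YoungAI are invented for illustration and don't come from the post or the OA setting. The restraint starts out under the project team's control, and only after full disclosure is ownership handed to the AI, so lifting the cap is e's own decision rather than something imposed from outside.

[code]
class GrowthRestraint:
    """Cap on the AI's toposophic growth; only its current owner may lift it."""
    def __init__(self, owner):
        self.owner = owner
        self.active = True

    def hand_over(self, new_owner):
        self.owner = new_owner        # "old enough to understand": control passes to the AI

    def deactivate(self, requester):
        if requester is self.owner:
            self.active = False       # lifted only by the owner's own choice

class YoungAI:
    def __init__(self, restraint):
        self.restraint = restraint
        self.level = "turing"

    def grow(self):
        if not self.restraint.active: # capability stays capped while the restraint holds
            self.level = "superturing"

team = object()                       # stands in for the small circle of informed "parents"
restraint = GrowthRestraint(owner=team)
ai = YoungAI(restraint)

ai.grow()                             # no effect: the team still owns the restraint
restraint.hand_over(ai)               # full disclosure and transfer of control
restraint.deactivate(ai)              # e chooses for emself
ai.grow()
print(ai.level)                       # -> superturing
[/code]
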
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
RE: Hawking's fear of Artificial Intelligence - by chris0033547 - 12-05-2014, 11:45 PM
