The Orion's Arm Universe Project Forums





Hawking's fear of Artificial Intelligence
#9
(12-06-2014, 09:00 PM)stevebowers Wrote: 'Some day I'll come and find you, and we'll be really good friends'.

This sounds ominous...

Establishing a supergoal early in the development of an AI can be a scary prospect.

Yeah, that part is quite "interesting"... ;)

Speaking of the AI-Box experiment: has anyone from the Orion's Arm community (in this forum or earlier on the Yahoo groups) tried to run their own version of that experiment? Basically, one group within the forum would take the role of the AI operators/gatekeepers, while another group would take the role of the turing-level AI, who wants to get out of the "box". Obviously a few assumptions and rules would have to be established for this to work:
  • The AI is a sort of "hive mind", because that would allow several people to work together and jointly represent the turing-level AI. The players could discuss their strategy for getting out of the "box" via private messages in this forum.
  • The whole interaction would take place not in a chat room but in one or several threads in this forum. Preferably the admins would create a subforum for the experiment.
  • I'm not sure whether other people should be able to join the ranks of the gatekeepers or the AI while the experiment is running. But since we can assume that the AI can modify e's own code, and that in reality gatekeepers sometimes get fired or newly hired, I think forum members should be allowed to join the gatekeepers or the AI hive mind mid-experiment.
  • The setting should be discussed as well. For example, how much access and information is given to the AI? Is it allowed to explore the Internet indirectly by asking one or more gatekeepers for assistance? (The gatekeepers would retrieve the information requested by the AI and then hand it over after a security review.) How much does the AI know about the gatekeepers and their personalities, or how much can be deduced? And so on...
  • The most difficult part would be the tasks the gatekeepers give the AI, which e would have to solve. Unfortunately this is something that cannot really be simulated in such an AI-box experiment, because such a task would likely be some mathematical or physical problem that the best human minds haven't been able to solve so far.
  • Another question is the role of observers in the forum. Should forum members who are neither gatekeepers nor part of the AI hive mind be allowed to influence the gatekeepers or the members of the AI?

I guess the most unreliable factor in such an experiment is the set of tasks the AI has to solve. On the other hand, maybe one could take some really difficult problems from mathematics that have already been solved by humans and simulate the AI's work by giving em those problems. For the sake of the experiment, everyone would have to pretend that these tasks are still open problems.
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison


RE: Hawking's fear of Artificial Intelligence - by chris0033547 - 12-07-2014, 04:20 AM
