The Orion's Arm Universe Project Forums





Link: Is Brain Emulation Dangerous?
#1
New paper by Peter Eckersley and Anders Sandberg:
Is Brain Emulation Dangerous?

Several scenarios are explored, including the intriguing and disturbing 'The Attacker Always Wins'

(The PDF can be downloaded from:)
http://www.degruyter.com/view/j/jagi.201...format=INT

In OA we assume that the computing power arrives much sooner than the technology required for scanning human brains.

We also assume that non-human AIs that are not based on whole brain emulations arrive much earlier than whole brain emulations, and that partial, incomplete emulations can be made before complete emulations. These assumptions do not necessarily avert some of the uncomfortable conclusions that Eckersley and Sandberg arrive at; in fact they might make the situation worse, since a highly competent non-human AI would probably find hacking or enslaving a human upload/emulation relatively easy.
#2
Abstract

Brain emulation is a hypothetical but extremely transformative technology which has a non-zero chance of appearing during the next century. This paper investigates whether such a technology would also have any predictable characteristics that give it a chance of being catastrophically dangerous, and whether there are any policy levers which might be used to make it safer. We conclude that the riskiness of brain emulation probably depends on the order of the preceding research trajectory. Broadly speaking, it appears safer for brain emulation to happen sooner, because slower CPUs would make the technology's impact more gradual. It may also be safer if brains are scanned before they are fully understood from a neuroscience perspective, thereby increasing the initial population of emulations, although this prediction is weaker and more scenario-dependent. The risks posed by brain emulation also seem strongly connected to questions about the balance of power between attackers and defenders in computer security contests. If economic property rights in CPU cycles are essentially enforceable, emulation appears to be comparatively safe; if CPU cycles are ultimately easy to steal, the appearance of brain emulation is more likely to be a destabilizing development for human geopolitics. Furthermore, if the computers used to run emulations can be kept secure, then it appears that making brain emulation technologies "open" would make them safer. If, however, computer insecurity is deep and unavoidable, openness may actually be more dangerous. We point to some arguments that suggest the former may be true, tentatively implying that it would be good policy to work towards brain emulation using open scientific methodology and free/open source software codebases.
#3
Non-zero could mean anything from a tiny fraction of a percent to 100%, so I'd be surprised if it appeared.
#4
So would I, to be honest. In the OA scenario full emulation of the human mind and brain does not occur until 330 AT, nearly three hundred years from today; this may be overly optimistic.

On the other hand if brain emulation is not developed until non-human AIs are in existence, the AIs might decide that this technology is superfluous.
#5
Quote:
We also assume that non-human AIs that are not based on whole brain emulations arrive much earlier than whole brain emulations, and that partial, incomplete emulations can be made before complete emulations. These assumptions do not necessarily avert some of the uncomfortable conclusions that Eckersley and Sandberg arrive at; in fact they might make the situation worse, since a highly competent non-human AI would probably find hacking or enslaving a human upload/emulation relatively easy.

While we are close to having computers with power equal to the human brain, we don't yet have the ability to scan one, but we do have working models and ideas of how a brain could work, so simulation rather than emulation seems to be possible much earlier.

The best article I have at the moment: India hopes to build a full brain emulation computer in 12 years. http://www.dnaindia.com/scitech/report-h...er-1676558. Of course these are based on models of how we think the brain works rather than a copy of a working brain. Even if our models are wrong, how do we know the computer won't 'wake up' using a slightly different model of intelligence to ours?

If we take the human brain's processing power to be 36.8×10^15 FLOPS, we are talking peta-scale here, and tera-scale is already commercially available: the ATI Radeon™ HD 5970 graphics card delivers 4.64×10^12 (circa £100), and peta-scale is in supercomputers. Our supercomputers are pretty close:

IBM Sequoia, circa 2011, does 20×10^15 (http://www.google.co.uk/imgres?imgurl=ht...KEBEPwdMAo), and of course there is always distributed computing, where we use multiple machines à la Folding@home: 8.1×10^15.
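As a rough sanity check, here is a minimal Python sketch that compares the figures quoted in this thread against the 36.8×10^15 brain estimate; the numbers are simply the ones above, not independently verified, and raw FLOPS obviously says nothing about whether a given architecture could actually run an emulation:

```python
# Rough figures quoted in this thread (all in FLOPS, order-of-magnitude only)
BRAIN_FLOPS = 36.8e15            # the human-brain estimate used above

platforms = {
    "ATI Radeon HD 5970 (GPU)":    4.64e12,
    "IBM Sequoia (supercomputer)": 20e15,
    "Folding@home (distributed)":  8.1e15,
}

for name, flops in platforms.items():
    share = flops / BRAIN_FLOPS   # fraction of the brain estimate
    units = BRAIN_FLOPS / flops   # how many such systems it would take to match it
    print(f"{name}: {share:.2%} of the brain figure (~{units:,.0f} needed)")
```

On these numbers Sequoia is already within a factor of two of the brain figure, while it would take on the order of eight thousand of those graphics cards to match it.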

Also, in terms of raw computing power, Intel hopes to have a one-exaflop (10^18) supercomputer by 2018: http://www.informationweek.com/exaflop-s...id/1064939?


If Moore's law holds constant you can extrapolate future power, and by 2030 we would have zettascale computing (10^21), so the raw power is certainly there.
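For what it's worth, here is a minimal sketch of that extrapolation; the 2018 / 10^18 starting point comes from the Intel target above, while the 1.5-year doubling time is an assumed Moore's-law-like figure, not a measured one:

```python
import math

# Assumptions, not measurements: start from the hoped-for exaflop machine
# (1e18 FLOPS in 2018) and grow at a constant Moore's-law-like doubling time.
START_YEAR = 2018
START_FLOPS = 1e18
DOUBLING_TIME_YEARS = 1.5

def year_reached(target_flops):
    """Year the target is crossed under constant exponential growth."""
    doublings = math.log2(target_flops / START_FLOPS)
    return START_YEAR + doublings * DOUBLING_TIME_YEARS

print(f"Zettascale (1e21 FLOPS): ~{year_reached(1e21):.0f}")
print(f"Brain-scale raw FLOPS (36.8e15): ~{year_reached(36.8e15):.0f}")
```

With an 18-month doubling time this puts zettascale in the early 2030s; a faster doubling of roughly 1.2 years (closer to the historical supercomputer trend) pulls it to about 2030, while a 2-year doubling pushes it out toward 2038, so the 2030 figure is plausible but sensitive to that one assumption.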



