The Orion's Arm Universe Project Forums





What scientific ideas should society get rid of?
#1
Edge.org's annual question

What scientific ideas are ready for retirement?
http://www.edge.org/responses/what-scien...retirement

Not much hope for the future of simple, easy answers...
Reply
#2
I'm really surprised by how many people are saying "The Universe."

On another note, am I the only one who finds Roger Schank's anti-AI arguments to be intellectually lazy?
Reply
#3
(01-21-2014, 08:14 AM)omega_tyrant Wrote: I'm really surprised by how many people are saying "The Universe."

On another note, am I the only one who finds Roger Schank's anti-AI arguments to be intellectually lazy?

I don't know anything about his arguments. But it doesn't really matter, anyway.

An argument to use on people who don't believe in AI, who are rather few on this board I imagine:

I can show you an existence proof for the feasibility of a replicating network of replicating nanoassemblers, the assembly being complex enough and with enough computing power and sensory capability to have attained sapience. (Granted, this particular design requires the cooperation of another such network of slightly different design for its replication.) Want to see it? Look in the mirror.

People are made of matter, and people are sapient. (At least I am - but let's not go into solipsism!) Ergo, a material object can be sapient.

The only possible counter-argument to that relies on the putative existence of the supernatural.
Reply
#4
(01-21-2014, 09:46 AM)iancampbell Wrote:
(01-21-2014, 08:14 AM)omega_tyrant Wrote: I'm really surprised by how many people are saying "The Universe."

On another note, am I the only one who finds Roger Schank's anti-AI arguments to be intellectually lazy?

I don't know anything about his arguments. But it doesn't really matter, anyway.

An argument to use on people who don't believe in AI, who are rather few on this board I imagine:

I can show you an existence proof for the feasibility of a replicating network of replicating nanoassemblers, the assembly being complex enough and with enough computing power and sensory capability to have attained sapience. (Granted, this particular design requires the cooperation of another such network of slightly different design for its replication.) Want to see it? Look in the mirror.

People are made of matter, and people are sapient. (At least I am - but let's not go into solipsism!) Ergo, a material object can be sapient.

The only possible counter-argument to that relies on the putative existence of the supernatural.

To me his argument seemed like a way to save face ahead of a new AI winter; otherwise I don't know why he would have said it.
Reply
#5
"Miraculous coincidences"? *groan* It amazes me how intelligent people can be so stupid.

Do oceans exist so that marine creatures have a place to live? Of course not. Why should the most fundamental properties of the universe be any different? If they weren't what they are, everything would be different. Maybe there's a universe out there somewhere where patterns in charged clouds of other-matter are thinking that if their universe were much different, they couldn't exist--its properties must have been tuned just-so by the Great Kra-Tob!
Reply
#6
(01-25-2014, 09:26 AM)JohnnyYesterday Wrote: "Miraculous coincidences"? *groan* It amazes me how intelligent people can be so stupid.

Do oceans exist so that marine creatures have a place to live? Of course not. Why should the most fundamental properties of the universe be any different? If they weren't what they are, everything would be different. Maybe there's a universe out there somewhere where patterns in charged clouds of other-matter are thinking that if their universe were much different, they couldn't exist--its properties must have been tuned just-so by the Great Kra-Tob!

Double *groan* for "Intelligent Design."
Evidence separates truth from fiction.
Reply
#7
If Artificial Intelligence is impossible, then the existence of intelligent life on this planet other than Homo sapiens, as well as the existence of extraterrestrial intelligence capable of building a complex civilisation, should be impossible as well. Homo sapiens would then be the only intelligent species in the whole universe. But that's only one counterargument to Schank.

The obvious major counterargument is of course Homo sapiens itself. Humans exist due to the laws of physics which govern this universe. Thus the creation of other forms of intelligence through the same laws of physics should be possible as well.

In any case, when I look at Karl Sims' virtually evolved creatures:

http://www.youtube.com/watch?v=JBgG_VSP7f8

I find the moment in virtual evolution at 3 minutes 16 seconds especially interesting. Both creatures are competing for possession of the "precious green cube". Unlike before, where each creature simply tried to reach the cube as fast as possible before the other could, the genetic algorithm governing the simulation eventually found a solution in which it is more advantageous to block or even "damage" the competing creature, because the "damaged" creature can then no longer reach the cube at all, and it belongs to the attacker alone. In other words, competition for resources always includes, among its possible solutions, what we call 'aggression'.
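The trade-off Sims' creatures stumbled onto can be illustrated with some toy arithmetic. All the numbers here are made-up assumptions for illustration, not measurements from the video:

```python
# Toy expected payoffs for the two strategies seen in the video,
# under invented assumptions: both creatures are equally fast, so a
# pure race is a coin flip, while blocking the rival succeeds 80%
# of the time (a purely illustrative figure).
cube_value = 1.0

p_win_race = 0.5
race_payoff = p_win_race * cube_value    # expected cubes from racing

p_block = 0.8
block_payoff = p_block * cube_value      # expected cubes from blocking

print(race_payoff, block_payoff)
```

As soon as the blocking move is even modestly reliable, selection favours it over an even race, which is presumably why the algorithm converged on it.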

I wonder if one way to achieve human-level AI would be to create much more detailed virtual ecosystems with lots of different creatures competing for resources. The more detailed the simulation, the more opportunities it would create for an evolutionary "arms race", in which the "winners" would become more and more intelligent. I think that in order to make an Artificial General Intelligence you would need a world with lots of competitors and lots of different resources at different locations in that world. The more complicated and "random" the distribution of resources becomes, and the more traps and pitfalls a ("sadistic" :) ) creator of such a world builds into it, the more intelligent the "winners" inside it would become. To raise the intelligence of the inhabitants of such a virtual ecosystem even further, one would also have to simulate major disasters, in which lots of resources disappear and lots of competing creatures die. Perhaps something on the scale of the Permian-Triassic extinction event or the Cretaceous-Paleogene extinction event. Another possibility might be to move the traps and pitfalls to different random locations at random points in time, in order to avoid "evolutionary overspecialization".
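A toy sketch of that kind of setup, in Python. Everything here (the genome encoding, the "environment" vector standing in for the resource layout, the survival fraction) is an invented stand-in for a real ecosystem simulator, just to show how moving resources and mass-extinction events fit into an evolutionary loop:

```python
import random

random.seed(42)

GENOME_LEN = 8   # toy "behaviour weights" per creature
POP_SIZE = 40
GENERATIONS = 30

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome, environment):
    # Stand-in for "resources gathered": how well the creature's
    # weights match the current (shifting) resource distribution.
    return sum(g * e for g, e in zip(genome, environment))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    environment = [random.uniform(-1, 1) for _ in range(GENOME_LEN)]
    for gen in range(GENERATIONS):
        # "Move the traps": periodically shuffle the resource layout
        # so no single over-specialized genome stays optimal.
        if gen % 10 == 0 and gen > 0:
            random.shuffle(environment)
        scored = sorted(population, key=lambda g: fitness(g, environment),
                        reverse=True)
        # Mass-extinction analogue: only the top quarter survives.
        survivors = scored[:POP_SIZE // 4]
        population = [mutate(random.choice(survivors))
                      for _ in range(POP_SIZE)]
        population[:len(survivors)] = survivors  # elitism
    best = max(population, key=lambda g: fitness(g, environment))
    return fitness(best, environment)
```

Even in this caricature, the population recovers and re-adapts after each shuffle; a real simulator would replace the dot-product fitness with an actual embodied world, which is where all the computational cost lives.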

I'm sure that at some point an intelligent "species" (perhaps on the level of a chimpanzee) would appear inside the simulation. I think the simulation could be stopped at that point, because then we would be able to study the code of these creatures in detail, along with their evolutionary history. Then we would know how to create an AGI without evolving it first. Or maybe we will discover that the easiest way to create an AGI is always through "directed virtual evolution"?

Perhaps a quantum supercomputer would be enough to do this kind of simulation. However, scientists think that, unfortunately, such computers are still at least 30 years in the future.
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
Reply
#8
(01-25-2014, 07:22 PM)chris0033547 Wrote: I wonder if one way to achieve human-level AI would be to create much more detailed virtual ecosystems with lots of different creatures competing for resources.


You are right, it is one possible way, and you don't even need a quantum computer to do it.
It all comes down to the way you set up your virtual world: how detailed it is and how many resources it has.
Several virtual worlds for evolving agents are already available, some even for free. But the thing is: unless the simulation is extremely detailed and "real-life-like", you won't get a human-type intelligence.
For example, a two-dimensional world will give you different results than a 3D world, and the same goes for the chemistry and the neural network model the world employs.
For example, this game has evolving creatures with a primitive neural network and a genetic code.

Their genetic code, however, is extremely limited, and while it can be edited and evolved to give the creatures some measure of ability, it will never be enough for anything more than basic problem solving (getting food, mating, defending against predators, etc.).

The more complex the creature is, the more computational resources it requires.
But if the environment is sufficiently abstract (a cellular automaton, for example), then the resources needed for some form of sentience are greatly reduced.
For example, you don't need to simulate a complete digestive tract; all you need is an equation for nutrient processing.
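For instance, a whole digestive tract could collapse into a single hypothetical saturating function. The constants here are arbitrary illustration values, not anything from a real simulator:

```python
def nutrients_absorbed(food_mass, efficiency=0.3, gut_capacity=5.0):
    """Abstract replacement for a simulated digestive tract:
    absorption is linear in food mass but saturates once the
    (hypothetical) gut capacity is reached."""
    return min(food_mass, gut_capacity) * efficiency
```

A creature then calls this once per meal instead of simulating organs, which is exactly the kind of resource saving that abstraction buys.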

I believe that even today it is possible to evolve a virtual creature with simple intelligence; it is just a question of time and resources, not technology.
Reply
#9
I think there will be many ways of achieving AI, including self-evolving a-life of this sort, as well as partial emulation of human and other animal minds, and perhaps simply designing minds from scratch (although this approach isn't getting very far at the moment). The end result will be a zoo of very different kinds of AI, some as self-aware as humans, some less so, and some with even more acute awareness of their internal states.

The one option that is specifically ruled out in OA is that Whole Brain Emulation and Uploading comes first; this seems to rule out Robin Hanson's scenario of an upload-driven economy, at least on the pre-nanoswarm Earth, although it has probably happened many times elsewhere in the Terragen Sphere.
Reply
#10
(01-26-2014, 02:33 AM)stevebowers Wrote: I think there will be many ways of achieving AI, including self-evolving a-life of this sort, as well as partial emulation of human and other animal minds, and perhaps simply designing minds from scratch (although this approach isn't getting very far at the moment). The end result will be a zoo of very different kinds of AI, some as self-aware as humans, some less so, and some with even more acute awareness of their internal states.

The one option that is specifically ruled out in OA is that Whole Brain Emulation and Uploading comes first; this seems to rule out Robin Hanson's scenario of an upload-driven economy, at least on the pre-nanoswarm Earth, although it has probably happened many times elsewhere in the Terragen Sphere.

Designing a mind from scratch seems almost impossible (at least for a baseline human) unless the mind in question is very simple.
At the moment, emulation or evolution seems to provide better results when it comes to realism.
Under emulation and evolution I count neural networks and evolutionary algorithms.
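As a minimal illustration of combining the two, here is a toy sketch (my own example, not taken from any of the simulators mentioned) that evolves the weights of a tiny 2-2-1 neural network to solve XOR, with no backpropagation at all:

```python
import math
import random

random.seed(0)

# XOR: a small problem a 2-2-1 network can solve, used here only to
# show evolving network weights instead of training them by backprop.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(w, x):
    # w holds 9 weights: two hidden neurons (2 inputs + bias each)
    # and one output neuron (2 inputs + bias).
    h0 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

def error(w):
    # Sum of squared errors over all four XOR cases.
    return sum((forward(w, x) - y) ** 2 for x, y in CASES)

def evolve(pop_size=60, generations=300):
    pop = [[random.uniform(-3, 3) for _ in range(9)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)                 # lowest error first
        parents = pop[:pop_size // 5]       # elitist selection
        pop = parents + [
            [g + random.gauss(0, 0.3) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return min(pop, key=error)
```

It is a caricature of neuroevolution, of course, but it shows why the two techniques pair so naturally: evolution only ever needs a fitness score, never a gradient.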
Reply

