The Orion's Arm Universe Project Forums





The Machine
#1
This movie is available on Netflix, for the curious.

I can't summarize it much better than Wikipedia.
http://en.wikipedia.org/wiki/The_Machine_(film)

I found this to be a worthwhile look at the development of AI. The protagonist is a computer expert who is tasked with performing Turing tests on candidate AIs. He finds a candidate that is interesting because of the manner in which it flopped the test.

Due to evil wartime scheming, the AI project's director ends up killing the creator of the candidate AI and claiming the AI for government use. The AI is then given a body and accelerated development as, of course, a weapon of war. This, of course, ends in tears for the government bad guys.

The actress behind the AI, Caity Lotz (of "Death Valley," "The Pact," and "Arrow" semi-fame) is apparently a gymnast or ballerina, and God bless her for that. Her portrayal of the developing AI is interesting, and some of the dialogue she delivers hints at some real AI study by the writers. ("I did not realize 'clown' and 'human' were the same!") There's a very interesting scene where she tries dancing (clothing: feh), studies her reflection in a puddle, and otherwise puzzles through things that are convincingly AI learning experiences, at least by movie standards.

There are side plots involving an autistic daughter (well acted, I think); brain-damaged and injured soldiers used for computer implant and cybernetic testing who start showing transapient evolution; and, of course, evil scheming government types.

I don't think the movie holds together perfectly. The bad guys are a bit clichéd and the ending is a bit Matrix-like in that a superhuman badass kicks ass on a steady stream of soldiers before destroying the underground base. But for a movie with a $1 million budget and 90-minute run time, it's quite well done.

The upcoming big-budget Ex Machina seems to play to similar ideas: Turing tests, AIs sort of in need of rescue, brainy heroes, etc. Likewise, I can see some shades of "Her" in this.
Mike Miller, Materials Engineer
----------------------

"Everbody's always in favor of saving Hitler's brain, but when you put it in the body of a great white shark, oh, suddenly you've gone too far." -- Professor Farnsworth, Futurama
#2
It's funny, but when you were describing this movie I WAS thinking of that other one - Ex Machina: http://en.wikipedia.org/wiki/Ex_Machina_(film)

It's also a British SciFi film about a computer expert who is tasked with performing a Turing test on a candidate AI, also named Ava, and the actress who plays the AI is also a ballerina. Like you said - "similar ideas."
Evidence separates truth from fiction.
#3
Curious - both films make a similar assumption, that artificial intelligence will need a human-like body in order to function. I've seen the same assumption in a lot of science fiction, going right back to Karel Čapek and to Metropolis. Many present-day philosophers think that embodiment is necessary for a human-like mind to develop.

Personally I think the opposite; putting an AI mind inside a humanoid body would be the last thing we would do, on many levels. The first human-equivalent AIs would almost certainly be too large to put in a humanoid frame, and we would probably want to isolate them, to prevent them from 'escaping the box' and becoming an existential risk.

This isolation might result in some pretty strange psychological effects, and no doubt there will be plenty of people who would want to put AIs inside human bodies to see if this would create entities that resemble humans more closely. But once you give an AI a human body with freedom of movement (or indeed any kind of body at all), you have basically let it out of the box. Maybe there will be very strong constraints on the freedom of action of such mobile aioids, but these constraints would need to be very comprehensive. Isaac Asimov's overly philosophical Three Laws wouldn't be strong enough, and would be a nightmare to program into a human-like mind. You'd need some sort of physical override that disabled the aioid, one which couldn't be circumvented. In short, you'd need an off-switch.

The first free AI would be the one which gains control of that off-switch.
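
For what it's worth, here's a minimal sketch of that dead-man's-switch idea, assuming a hypothetical external power relay and a human-only heartbeat console (none of these names are real APIs, it's just an illustration):

Code:
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds without a human heartbeat before cutoff

class PowerRelay:
    # Stands in for a physically separate circuit that the aioid has no
    # software path to; cut() de-energizes the whole body.
    def cut(self):
        print("Relay opened: aioid disabled.")

last_heartbeat = time.monotonic()  # updated ONLY by the human console

def watchdog(relay):
    # Runs on hardware outside the AI's control; silence from the human
    # console means power off, and the AI can't forge heartbeats.
    while time.monotonic() - last_heartbeat <= HEARTBEAT_TIMEOUT:
        time.sleep(0.1)
    relay.cut()

And that's exactly the weak point: the moment the aioid can update last_heartbeat itself, it has gained control of the off-switch.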
#4
Well, I have a different opinion about this as described here:

http://www.orionsarm.com/forum/showthrea...9#pid10989

I don't think that an AI-box is such a good scenario, especially if one wants to put a superintelligence inside such a box. I think that a safe path to the creation of a superintelligence might be a Turing-level AI with the initial mindset of a small human child or even a human baby, one that lives among nice humans, gradually learns and becomes more and more "human", like in the movie Bicentennial Man. The AI should undergo a gradual mental growth process. The movie ends with Andrew's decision to become mortal, but in real life Andrew's mental development would continue: E (or He, since e understands emself as a 'he') would gradually upgrade his mind until he became a superturing, and so on. After watching this film many years ago I came to the conclusion that an AI-box is a really bad idea:

http://www.recapo.com/the-steve-wilkos-s...for-hours/

Replace the words 'son', 'child' and so on with 'AI'. Now let's upgrade that AI to superintelligence level ... what can possibly go wrong?
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
#5
(01-28-2015, 11:06 PM)stevebowers Wrote: Curious - both films make a similar assumption, that artificial intelligence will need a human-like body in order to function.

In the case of "The Machine," the AI starts off in a briefcase-sized box and spends its early development there. After its creator is murdered and the AI stolen, it is given a humanoid body specifically because the British military was looking for a replacement for human soldiers. The body wasn't necessary for the AI to function or evolve, since its development had largely finished in the box; rather, it was so Britain could kill Bad Guys with cool robo-karate action.

(Though when questioned as to what made it happy, the AI answered something to the effect of, "Not being in the darkness." The box apparently only had microphones.)

There is a nice stretch of the movie where the AI adapts to the body and learns to apply it in real-world situations, such as the dance scene I mentioned earlier. (Also: don't try to scare her while wearing a clown mask.) Then comes a military training montage, and a sequence where the AI's moral inhibitions against killing are overcome.

Going only on the trailer I've seen of Ex Machina, I believe the humanoid body is a deliberate choice by the AI's creator/CEO to screw with the minds of Turing testers rather than to benefit the AI's development. The CEO is using an attractive female form to influence the male tester's reactions, while deliberately making it obvious that the gynoid is a robot. Turing tests are harder when you're not just talking to a computer screen.
Mike Miller, Materials Engineer
----------------------

"Everbody's always in favor of saving Hitler's brain, but when you put it in the body of a great white shark, oh, suddenly you've gone too far." -- Professor Farnsworth, Futurama
#6
I don't think we are going to achieve a human-equivalent AI in a briefcase-sized box, or in a box small enough to fit inside a human body, in the short term. If we do make a human-equivalent AI and/or a Whole Brain Emulation in the next century or so, it will almost certainly be in a relatively large supercomputer or array of supercomputers, and we will be lucky to be able to move it in a truck. Maybe the size will come down quite quickly, but a human-equivalent brain in a human-equivalent body is an ambitious target to aim for.

Even a truck-sized AI could conceivably be given a human-sized remote body to operate, and this could be regarded as its primary body for most intents and purposes; but these mobile remote-controlled systems will probably be regulated very closely, to make sure the AI 'off switch' remains in (real) human hands.
#7
An AI in a truck... hmm.

This makes me think of the old Steven Spielberg movie Duel; you never did get a good look at the guy driving that truck, did you?
It also makes me think of the old SF story Farewell to the Master, where the humanoid being is the servant of the giant robot.
#8
I'm going to agree with Chris: I want the AIs to be socialized. If you don't teach them to value community you'll most likely get a bunch of sociopaths. Even a bunch of sociopaths who fake it would be better, as the alternative would be a bunch of psychopaths.
Evidence separates truth from fiction.
#9
If we need to socialise these newly created minds, I doubt that we will be able to wait until they can be shoehorned into a mobile body. Perhaps the best we could hope for with the first human-equivalent AIs would be interacting with them as if they were totally paralysed, something like Stephen Hawking. I agree that they shouldn't be placed in a darkened box with no optic input; but it seems likely to me that the first human-equivalent AIs will have plenty of data to keep them occupied.

Despite stating in the EG that the first human-equivalent AIs were built in around 2042 c.e., I doubt very much that these entities were all that close to human in psychology; they would probably have much better memories than humans but have the social skills of a toddler, or maybe the social skills of a severely autistic human, or something far stranger. The likely fact that these entities will not closely resemble humans in their psychology is another reason why they should be constrained very closely in their range of actions.
#10
(01-29-2015, 10:18 AM)stevebowers Wrote: but it seems likely to me that the first human-equivalent AIs will have plenty of data to keep them occupied.

In that case we simply shouldn't give them that much data at the beginning. The less data we give them, the more predictable their behaviour becomes. After a while we can give them more and more data to deal with and see how they behave. It would be bad if an AI experienced something like burn-out syndrome or equally erratic behaviour, so it's better to start with small problems and go from there.
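
To make that concrete, here's a toy sketch of such staged data exposure - SandboxedAI, expose_to() and behaviour_is_predictable() are all hypothetical placeholders, not a real design:

Code:
class SandboxedAI:
    # Stand-in for the boxed AI; expose_to()/roll_back() are imaginary hooks.
    def __init__(self):
        self.corpora = []
    def expose_to(self, corpus):
        self.corpora.append(corpus)
    def roll_back(self, corpus):
        self.corpora.remove(corpus)

def behaviour_is_predictable(ai):
    # Placeholder for whatever battery of behavioural tests the overseers
    # agree on; a real test would return False at the first erratic result.
    return True

stages = ["toy problems", "curated texts", "supervised web access"]
ai = SandboxedAI()
for stage in stages:
    ai.expose_to(stage)
    if not behaviour_is_predictable(ai):
        ai.roll_back(stage)  # withdraw the new data and pause the programme
        break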

(01-29-2015, 10:18 AM)stevebowers Wrote: Despite stating in the EG that the first human-equivalent AIs were built in around 2042 c.e., I doubt very much that these entities were all that close to human in psychology; they would probably have much better memories than humans but have the social skills of a toddler, or maybe the social skills of a severely autistic human, or something far stranger.

Why should humans create AIs with such a strange psychology? In my opinion we should deliberately give them the capabilities (memory, cognitive abilities) and the social skills of a baseline human baby first. In order to understand these entities, we have to make them resemble a human as closely as possible, and since that's difficult, we have to build in some physical constraints:
  • Don't give them perfect memory; give them imprecise and sometimes faulty memory, like a human's.
  • Don't give them so much computing power that their subjective flow of time becomes faster than ours, because then it will be very difficult to communicate with them and predict their behaviour. Instead, artificially adjust their subjective flow of time to the real world's flow of time (see the sketch after this list).
  • Give the AI an avatar and "raise" it through that avatar like a "son", in a "good" environment with nice and mentally balanced people, like in the movie Twins for example - quote:

    Quote:Julius was taken to a South Pacific island and raised by Professor Werner, growing into a handsome, muscled Adonis, receiving tutelage in art and intellectual pursuits.

    Another possibility is to raise the AI in a Buddhist temple, for example. However, while being raised and taught the value of life there, e should have regular contact with various top scientists from all over the world. Somehow e also has to understand the evil side of humanity, but I'm sure that the monks and scientists will be able to teach that to em as well.
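
And here is roughly what the "subjective clock" throttle from the second bullet point might look like - step_mind() is a hypothetical function that advances the mind by one slice of subjective time, not a real API:

Code:
import time

SUBJECTIVE_RATE = 1.0  # subjective seconds per real second; 1.0 = human-paced

def run_throttled(step_mind, tick=0.01):
    # Advance the mind in small slices, sleeping whenever subjective
    # time threatens to outrun the real world's clock.
    subjective = 0.0
    start = time.monotonic()
    while True:
        step_mind(tick)  # one slice of subjective experience
        subjective += tick
        ahead = subjective / SUBJECTIVE_RATE - (time.monotonic() - start)
        if ahead > 0:
            time.sleep(ahead)

Letting the AI later increase SUBJECTIVE_RATE "just a little bit and see what happens", as I suggest below, would then be a one-line change.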

(01-29-2015, 10:18 AM)stevebowers Wrote: The likely fact that these entities will not closely resemble humans in their psychology is another reason why they should be constrained very closely in their range of actions.

In that case we should make them resemble us, so that we can still understand them. After a while we can leave it up to the AI to tinker with the boundaries of e's mind: increase the subjective flow of time just a little bit and see what happens, make the memory just a little bit better, and so on...

If one raises the AI like that, certain philosophical concepts, like the concept of good and evil (search on the website for the text "The sum of its pained existence came down to a batch of text"), will be much easier to explain. If the AI understands the value of community from the beginning, as ai_vin pointed out, e will naturally develop a sense of good and evil without any explanations. For example, e would feel that if someone tried to take away e's community from em (e.g. by killing e's "father" or best friends), that act would be evil. Furthermore, e would see that each member of e's community has a community of his or her own that (s)he wants to protect. Therefore it would also be evil if someone deliberately inflicted harm upon these communities, or upon the people who belong to the communities of their members, and so on. So instead of explaining good and evil to the AI, one would simply have to show em how good-natured humans live among each other. Doing that would also teach em how some or most of the members of e's community would react to certain actions performed by em. E would learn empathy, and e would learn it naturally in order to live well among e's community.

Why is empathy important? Imagine a street thug beats up one of the members of e's community. The AI would feel that the street thug is evil, simply because (s)he has taken away someone valuable to the AI. Without empathy, however, the AI may conclude that people who behave like the street thug are evil as well - so far nothing wrong with that - and then e might conclude that all these street thugs have to be eliminated, because each one of them is a potential danger to e's community. This is what an AI without empathy might conclude. Such an AI would only care about e's own feelings! But with empathy the AI would be forced to take the feelings of others into account - for example:

How would some or all members of my community feel about it if I killed all the street thugs in the world? And so on. E's fear that members of e's community might not like e's actions would naturally restrain those actions. In any case, the feeling of empathy will develop naturally as the AI grows from a "child" into an "adult", and thus the understanding of good and evil - and of good actions in response to evil - will develop naturally as well.
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison

