The Orion's Arm Universe Project Forums
Google's AI Has Learned to Become "Highly Aggressive" in Stressful Situations - Printable Version

+- The Orion's Arm Universe Project Forums (https://www.orionsarm.com/forum)
+-- Forum: Offtopics and Extras; Other Cool Stuff (https://www.orionsarm.com/forum/forumdisplay.php?fid=2)
+--- Forum: Real Life But OA Relevant (https://www.orionsarm.com/forum/forumdisplay.php?fid=7)
+--- Thread: Google's AI Has Learned to Become "Highly Aggressive" in Stressful Situations (/showthread.php?tid=4010)



Google's AI Has Learned to Become "Highly Aggressive" in Stressful Situations - extherian - 02-01-2019

This article (click HERE) demonstrates how AI minds can independently arrive at undesirable human behaviours, like greed and aggression, under certain circumstances.

In particular, this passage stands out:

Quote:"...when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence. But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion's share of virtual apples. The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

This model ... shows that some aspects of human-like behaviour emerge as a product of the environment and learning...less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

I am not using this article to prove that AI will be hostile and want to kill everyone. Rather, it illustrates that, given the right incentives, even a perfectly rational being can conclude that coercion, rather than cooperation, is in its own best interest.
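
To make that concrete, here's a toy back-of-the-envelope model (in Python) of the trade-off the article describes. To be clear, this is my own illustrative sketch, not DeepMind's actual Gathering environment, and every number in it (regrowth rates, collection cap, knockout length) is an assumption:

Code:
# Illustrative sketch only - NOT DeepMind's Gathering environment.
# Two agents share an apple patch; either can "tag" its rival,
# knocking it out of the game for a while. All numbers are assumed.

def episode_return(regrowth, tag, steps=100, cap=1.0,
                   aim_steps=5, knockout_steps=25):
    """Apples one agent expects to collect in an episode.

    Each agent can pick at most `cap` apples per step. With a rival
    present the regrowth is split evenly; tagging costs `aim_steps`
    of idle time but removes the rival for `knockout_steps`.
    """
    shared = min(cap, regrowth / 2)  # per-step haul beside a rival
    alone = min(cap, regrowth)       # per-step haul with rival tagged out
    if not tag:
        return steps * shared
    return (steps - aim_steps - knockout_steps) * shared + knockout_steps * alone

for regrowth in (4.0, 0.5):          # abundant patch vs. scarce patch
    share = episode_return(regrowth, tag=False)
    zap = episode_return(regrowth, tag=True)
    verdict = "aggression pays" if zap > share else "cooperation pays"
    print(f"regrowth={regrowth}: share={share:.1f}, tag={zap:.1f} -> {verdict}")

Run it and the abundant patch favours sharing (tagging just wastes harvesting time), while the scarce patch makes tagging the rational choice: the same pattern the researchers report, falling out of nothing but arithmetic.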


RE: Google's AI Has Learned to Become "Highly Aggressive" in Stressful Situations - Drashner1 - 02-01-2019

Agreed, although I also think it's interesting that when cooperation offered greater rewards, the AIs started cooperating (apparently; the article seemed rather more focused on the competition/conflict element, which reveals a bit of bias on the part of the authors).

I would submit that this suggests AIs would benefit from being given a larger sense of the world and the other people around them, whether a sense of 'socialness' (empathy?), of community, or the like. Whether that is taught or programmed would likely depend on the way the AI is created.
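
One way to 'program' that in (again just a sketch, reusing the assumed scarce-patch payoffs from the toy model above) is prosocial reward shaping: mix a fraction of the other agent's payoff into each agent's own objective, so hurting a rival registers as a cost.

Code:
# Sketch of "programmed" socialness via prosocial reward shaping.
# The empathy weight and the payoffs are assumptions carried over from
# the scarce-patch toy model above (sharing: 25 apples each; tagging:
# 30 for the tagger, about 18.75 for the victim).

def shaped_value(own, rival, empathy):
    """Agent's objective: own payoff plus `empathy` times the rival's."""
    return own + empathy * rival

for empathy in (0.0, 1.0):           # selfish agent vs. joint-welfare agent
    share = shaped_value(25.0, 25.0, empathy)
    tag = shaped_value(30.0, 18.75, empathy)
    choice = "tags" if tag > share else "shares"
    print(f"empathy={empathy}: agent {choice} (share={share}, tag={tag})")

With enough weight on the other agent's welfare, the same scarce environment no longer rewards aggression, which is roughly what empathy built into the utility function would look like.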

Todd