http://xkcd.com/793/
Just had to share...
Clever people, especially those with a background in the 'harder' sciences, often fall into this trap. If they're lucky, all that happens is that they get laughed at by people who really know the field. If they're unlucky, they're arrogant enough to actually act on their analysis. I'd laugh harder if it weren't that I've done this kind of thing myself. :-)
Stephen
11-27-2014, 08:50 AM
(This post was last modified: 11-27-2014, 08:51 AM by stevebowers.)
I would hazard a guess that the first AIs will have a similar outlook. Social sciences are soft, but they are not easy, and an entity with a logical outlook might fail to appreciate that simplifying the problems ('imagine a spherical cow') would not yield useful answers to social and political questions.
That's assuming the first AIs are any more logical than humans. The idea that machine minds must be super-logical and struggle with things like emotions is a long-standing SF trope that actually has essentially zero evidence to support it.
True, real-life computers are very logical in their operations, but with developments like fuzzy logic and similar techniques they are getting better at the 'softer' side of things. And a full-on human-equivalent AI would probably be built to closely model the structure and operation of a human brain (at least the first models, anyway), and so would also contain the wherewithal to feel and be illogical, just like a human.
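As a minimal illustration of what fuzzy logic adds (my own toy sketch, not anything from this thread; the 'warm'/'humid' membership functions and their thresholds are arbitrary choices for the example): instead of a statement being strictly true or false, it gets a degree of truth between 0 and 1, and AND/OR operate on those degrees.

[code]
# Toy fuzzy-logic sketch: truth comes in degrees between 0.0 and 1.0
# rather than strict True/False. The membership functions below are
# arbitrary illustrative ramps, not any standard definition.

def warm(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm' (ramp from 15 C to 25 C)."""
    return max(0.0, min(1.0, (temp_c - 15.0) / 10.0))

def humid(rh_percent: float) -> float:
    """Degree to which humidity counts as 'humid' (ramp from 40% to 80% RH)."""
    return max(0.0, min(1.0, (rh_percent - 40.0) / 40.0))

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)  # Zadeh AND: as true as its least-true part

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)  # Zadeh OR: as true as its most-true part

temp, rh = 21.0, 55.0
print(f"warm: {warm(temp):.2f}, humid: {humid(rh):.2f}")                  # 0.60, 0.38
print(f"muggy (warm AND humid): {fuzzy_and(warm(temp), humid(rh)):.2f}")  # 0.38
[/code]

A strictly Boolean program would have to round those judgements to yes/no at some threshold; the fuzzy version keeps the 'sort of' around, which is closer to how soft, human-style categories behave.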
Todd
11-27-2014, 09:50 PM
(This post was last modified: 11-27-2014, 09:56 PM by stevebowers.)
Rather than assuming that the first AIs would be hyper-logical like Spock (or Data), I tend to follow Anders' ideas and expect that they would be hyperautistic, or psychologically different in some other way. Human-normal (and normative) psychology is the product of millions, or billions, of years of evolution, and I doubt that we could get anywhere near it on the first attempts.
However, a competent AI could still be useful, even if it bore limited resemblance to a human mentality. In fact, given certain characteristics (improved memory and attention span, and access to a range of analytical tools), a non-human AI could be very useful indeed.