The Orion's Arm Universe Project Forums

Kevin Kelly: No Singularity
#2
Limited time atm, but to provide a quick-n-dirty response to the points raised in the article (which I only had time to skim - I'll try to do a deeper read later), starting from the list of counter-points the author provides:

Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

Fair enough, but also rather irrelevant. Call it an intelligence that is more successful at surviving in the universe and manipulating it to its purposes than we are. An intelligence that outperforms us in many/most/all dimensions that we operate in might also be considered 'smarter than human'. Calling what it's doing something other than 'smarter than human' may be a valid distinction up to a point, but care must be taken to avoid sinking into semantic irrelevancy.


Humans do not have general purpose minds, and neither will AIs.

See above - this is also somewhat playing games with semantics, or with non-specific or unclearly defined terminology, to try to make a point. More to the point, whether or not an AI has a 'general purpose mind' is less relevant than whether the mind it has can outperform the human mind at the tasks humans do, or (more generally) can operate in this universe better than we can.


Emulation of human thinking in other media will be constrained by cost.

The cost of technology drops constantly (admittedly there may eventually be a floor on that), and the author somewhat defeats his own point by again claiming that AI built on non-wetware materials will not result in a human-like mind - which, as mentioned above, is rather irrelevant both to his argument and to the whole question of the Singularity in general. In this case he's making an argument that is a close cousin to the claim that Hamlet wasn't written by Shakespeare, but by a different man of the same name. Put another way: if we create big AI minds that proceed to do the sorts of things predicted by various thinkers about the Singularity - wiping out/remaking humanity, taking the Earth apart to make a Matrioshka brain, going on to reshape the entire universe - then the fact that they don't think anything like we do will be utterly irrelevant.

To put this in OA terms - we routinely say that both early AIs and the Y11k transapients and archailects think very differently from humans to one degree or another - but that doesn't stop them from doing very big things, or from being able to kick our asses whenever they please.


Dimensions of intelligence are not infinite.

Irrelevant - the dimensions of intelligence don't need to be infinite to produce a singularity. They just need to be greater than what humans can manage, or better at manipulating the universe than we are, or both - from our perspective the end result may still be the same. Put another way: a car does not have infinite weight, but that doesn't really matter to the ant that it runs over.


Intelligences are only one factor in progress.

The author seems to be saying that AI won't result in instant and infinite progress - which is true, but irrelevant. If progress is a hundred or a thousand times faster than now, then things are going to seem to be moving at a dizzying pace by human standards. If the rate gets fast enough (or strange enough), then it may become something that people living before the development of that rate of progress can't even imagine, and which human intelligence can't keep up with or understand. Which is basically the classic definition of a singularity.

Finally, I would say that this author (as so many do) seems to be ignoring the definition of the Singularity as originally described by Vinge, and focusing more on the version described in some SF, or by some other thinkers inspired by Vinge.

Vinge never claims that 'infinite progress' is needed to create a singularity, or that you need minds of infinite power. You just need minds that are 'smarter' than humans. Feel free to substitute 'better at dealing with the universe' for 'smarter' if that term bothers you.

Finally finally - in reading the essay I noticed several spots where the author made unfounded/unsupported assertions or assumptions at least equal to the ones he claims to be refuting, and with no more evidence to support them. This may be an oversight on his part, or a classic example of how difficult it can be to discuss this kind of thing without first coming to a mutually agreed upon set of definitions and terminology - because the language itself can be tricky.

My 2c worth,

Todd