The Orion's Arm Universe Project Forums
Blindsight and Echopraxia by Peter Watts
#1
I just finished Echopraxia by Peter Watts and thought I'd write up a little review/recommendation here. If you like OA I don't see why you wouldn't love Echopraxia.

So what's it about? Echopraxia is the sequel (or perhaps sidequel, since the two stories take place at the same time in different locations) to Blindsight, a novel published in 2006 that was a Hugo, John W. Campbell and Locus nominee. The names of both books refer to bizarre neurological conditions: blindsight is the phenomenon where people with damage to the visual cortex insist they cannot see, yet can still correctly guess at things in their visual field; echopraxia is the involuntary mimicking of someone else's physical movements. The names matter because Watts uses both books to explore mentality in a way few authors attempt and even fewer do well.

First, the setting: it's the late 21st century and the world is going through what could be called a singularity. It's a very transhumanist book (with a unique spin): humans have begun to augment themselves with neural implants, AIs exist but most are so different from humans that it's debatable whether they even count as intelligent, and vampires have been resurrected by science to run the stock market. Yes, in this novel there are vampires, but they're nothing like the Twilight or Buffy varieties. These have an evolutionary basis: in the novel's history, hundreds of thousands of years ago there was a race of hominids that evolved to hunt humans. Along that path they gained superior strength, speed and night vision, plus an ability to hibernate for years on end (so as not to over-predate their human prey). The most interesting thing about them is that they are an intelligent species that evolved to hunt an intelligent species. That makes them very different from us mentally; in fact they are far beyond humans, because by necessity they had to model their prey incredibly well in their minds. They're up there with the AIs for being impossible to understand, and they make a great biological transapient, to use OA terms.

Blindsight centres on Siri, a young man who is a synthesist. This odd group of people fill a social role: they can explain, as best as possible, concepts that only AIs and vampires can understand to normal humans, without having to understand those concepts themselves. In Echopraxia one synthesist describes it as just intuitively knowing what to say without really understanding any of it. They're created by augmentation, and Siri makes a good protagonist for us. Siri and a group of very different beings (a pacifist soldier, a scientist whose lab is part of his sensorium, a linguist with multiple minds in one body, a vampire) are sent on a ship to investigate a possible alien installation light-months outside the solar system. The entire trip is an exploration of what it means to be human, and of whether consciousness (our most beloved trait) is a blessing or a hindrance.

Echopraxia takes place at the same time but back in the solar system. It focuses on a baseline human who lives alone in a tent in the desert; one day he is swept up in a series of events that see him heading to Icarus, a station orbiting the sun that produces antimatter and transmits it to Earth via quantum teleportation. Like Blindsight, it asks what it means to be human and whether humans are really cut out for life in the universe, but because the protagonist is a baseline, the take on those questions is different.

If that didn't get you interested, here's a grab-bag of cool things from both stories:

- Alien chatbots
- Superintelligent group mind of monks
- Really cool/realistic space ships
- Tornados used as power generators and weapons
- Zombie soldiers that switch off their conscious minds for better combat performance
- A virtual world where baselines go to live when they can't cope with reality

Finally, if I had to say one thing about this duology it would be that it does a fantastic job of showing, rather than telling, what a superintelligent entity would be like. You really get a sense that the vampires, the AIs and the group minds are so far beyond humans that it's terrifying. Their actions are bizarre, confusing, seemingly stupid, yet they get results in fantastic ways.

So yeah, if you want to read some great transhumanist fiction that will really make you think, buy these books.
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#2
I read Blindsight several years ago. It did have some fun ideas that I subsequently found portrayed in Orion's Arm. But I always meant to email Watts and gripe about his portrayal of the super-intelligent-but-non-sapient vampires and computers.

Watts makes the case that consciousness is a processing burden. Fine, I can buy that. The problem I had is that his super-intelligent entities supposedly weren't conscious / aware of self despite demonstrating every ability to be so. They were quite capable of thinking about complicated environments containing many other entities and objects, and capable of maneuvering themselves with respect to those others. They had to be aware that the "self object" differed from the other objects because, you know, they just couldn't think at and command other objects/people the way they could their own bodies. They evinced survival behaviors to keep the self-object safe in a way they didn't apply to other objects.

So they were clearly thinking about themselves differently than everything else around them, and were incredibly smart, but couldn't/wouldn't make a recursive step to think about themselves thinking about themselves? You'd need some active sapience suppression system.
Mike Miller, Materials Engineer
----------------------

"Everbody's always in favor of saving Hitler's brain, but when you put it in the body of a great white shark, oh, suddenly you've gone too far." -- Professor Farnsworth, Futurama
#3
The active suppression mechanism in this case is evolution. Watts takes the still-unconfirmed concept of epiphenomenalism (which posits that conscious thought is not the executive agent: decisions are made unconsciously, and milliseconds later the conscious mind is informed and made to think it made the decision) and further posits that, under different selective pressures, discrete unconscious processes can outperform conscious ones. The start of Echopraxia has a good analogy that also applies to toposophy in OA. It goes roughly like this: consciousness lets you climb a great hill, and the higher you climb the farther you can see. Then you reach the peak, only to see a far higher mountain on the other side of the plain; the problem is that to get there you have to descend from your current summit and lose much of what you are.

The end of Echopraxia contains a lot of notes that might show you more of Watts's thinking. Other than that, why not email him? I'd be interested in his reply.
#4
There's a big chunk of supplementary material available on his blog (and stuff linked from there) that is also quite interesting, if you've not already seen it.

http://www.tor.com/2014/07/29/the-colonel-peter-watts/
http://rifters.com/echopraxia/recruiter.htm
http://rifters.com/echopraxia/enemywithin.htm
http://www.rifters.com/real/progress.htm
http://www.rifters.com/crawl/?p=5875
http://www.rifters.com/real/shorts/Peter...dnotes.pdf

Probably more, too.
#5
Quote:The problem I had is that his super-intelligent entities supposedly weren't conscious / aware of self despite demonstrating every ability to be so. They were quite capable of thinking about complicated environments containing many other entities and objects, and capable of maneuvering themselves with respect to those others. They had to be aware that the "self object" differed from the other objects because, you know, they just couldn't think at and command other objects/people the way they could their own bodies. They evinced survival behaviors to keep the self-object safe in a way they didn't apply to other objects.
One way to break the bond between 'self object' and 'external objects' is to give the entity control of several, or numerous, active devices; if the entity controls a large crew of maintenance robots, for example, and suffers minimal hardship if one or more of these devices is damaged or destroyed, then there would be no sense of self associated with them. The central processing system that controls these devices would be just one element among many that is controlled and monitored by the entity, so is assigned no special significance.

To take this dissociation still further, the care and maintenance of the central processor could conceivably be assigned to another entity, even a human; the original entity need not be involved in self-preservation at all. Perhaps, if motivated to do so, the entity in question might somehow determine where its CPU is located, and the associated off-switch; but if the entity is not motivated in this way, then the question need not arise.
#6
(06-04-2015, 07:26 PM)stevebowers Wrote: One way to break the bond between 'self object' and 'external objects' is to give the entity control of several, or numerous, active devices; if the entity controls a large crew of maintenance robots, for example, and suffers minimal hardship if one or more of these devices is damaged or destroyed, then there would be no sense of self associated with them. The central processing system that controls these devices would be just one element among many that is controlled and monitored by the entity, so is assigned no special significance.

To take this dissociation still further, the care and maintenance of the central processor could conceivably be assigned to another entity, even a human; the original entity need not be involved in self-preservation at all. Perhaps, if motivated to do so, the entity in question might somehow determine where its CPU is located, and the associated off-switch; but if the entity is not motivated in this way, then the question need not arise.

However, these conditions can hardly occur naturally. The vampires from Blindsight still have just one body and have to take care of their own needs, so it seems strange that they would have no consciousness.

The ability to think about oneself and be aware of oneself is pretty much a must-have for any being that has to ensure its own survival.
In addition, thinking about thinking, so to speak, gives a further advantage: if you can describe, assign qualities to and judge your mental processes, you can also change them. I think pretty much every one of us has done it at some point.

You analyze one aspect of your mind and judge it, compare it to the analogous aspect of someone else's mind, and decide whether or not it is desirable. Then you either start working on changing it or you just move on.

For example: I always had trouble fitting in, so I thought about how I view new people I meet. I found that my mental model of a human being, a sort of basic template of characteristics, was very flawed. So I altered it, practiced in video games, then tried it out in real life, and it worked.
#7
Aside from Watts's setting, where it can occur naturally (and he hints at a possible pathway in which the lower latency of non-conscious minds is an advantage), I don't see why it couldn't be designed. In OA, the regular smart entities that run all the automation could just be collections of subsentient agents organised so that they assess and alter each other, with no global consciousness.
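For what it's worth, that kind of architecture is easy to sketch in code. This is a toy illustration of my own, nothing from the books or from OA canon: a pool of simple sub-agents where each one assesses its peers and tunes the worst performer's parameter, so the system as a whole self-corrects even though no agent ever inspects or modifies itself.

```python
import random

class SubAgent:
    """A subsentient process: it transforms an input signal and
    exposes a performance score, but has no model of itself."""
    def __init__(self, gain):
        self.gain = gain      # the only tunable parameter
        self.score = 0.0      # how well the last action tracked the signal

    def act(self, signal):
        output = self.gain * signal
        self.score = -abs(output - signal)  # closer to the signal = better
        return output

def peer_tune(agents, signal):
    """One round: every agent acts, then every agent assesses its
    peers and nudges the worst performer's parameter. No agent ever
    touches its own state, so nothing 'thinks about itself'."""
    for a in agents:
        a.act(signal)
    for assessor in agents:
        peers = [a for a in agents if a is not assessor]
        worst = min(peers, key=lambda a: a.score)
        if worst.score < -0.01:
            worst.gain += 0.5 * (1.0 - worst.gain)  # halve the peer's error

agents = [SubAgent(gain=random.uniform(0.0, 2.0)) for _ in range(8)]
for _ in range(20):
    peer_tune(agents, signal=1.0)
# By now every agent's gain has been pulled close to the target of 1.0
# purely by peer correction, with no global supervisor or self-model.
```

Obviously real subsentient automation would be vastly more complicated, but the point stands: error correction and adaptation don't require any component to model itself.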