The Orion's Arm Universe Project Forums

The minds of enhanced people
Modosophont

What might it be like to have some intelligence augmentation (IA) in the early days? Like in the 1st or 2nd century AT?

I know no one has ever experienced it in RL and I'm not saying we should add this as an OA article. I'm asking for some imaginative speculation.

We know what IA people can do from the outside, but I wonder what their minds would be like on the inside.

I know that there are articles about people seeing their own thinking processes (Superiors) and transaps seeing lower beings the same way.

I'm speculating on day-to-day thoughts. Like when I have a stream of consciousness and these random thoughts crop up. Or when I use a technique to make time pass more quickly by mentally "marking" it at longer intervals. Or when it gets very quiet and my mind becomes creative, and thoughts that have been hiding in the subconscious emerge into consciousness.
1st to 2nd century AT

Perhaps mild IA is like having already studied a subject, so that answers and solutions come quicker; in this case, for all subjects at once. You would still have to learn, as in school, but you would get it faster and have more powerful memories. Maybe it's like having OCD; maybe IA allows you to have OCD about more subjects. Now that I think about it, IA wouldn't be effective without enough education and mental training. Someone who has IA could learn faster and better than a baseline, but without education they are as clueless as a baseline without education. While IA individuals have more potential, without the right motivation they are no more capable than a baseline.
An experiment

A baseline, a nearbaseline (with moderate general IA) and a superior are each instructed to complete a complex task in which they have had no training. They have only one attempt to complete this task, very limited time, and no outside assistance or technology. They work on this task separately so there is no communication between them. The conditions and task are identical.

For the sake of argument, let's say they have had the exact same training and background, except for whatever is necessary for the nearbaseline and the superior to be born with or acquire their IA.

How much more successful are the nearbaseline and the superior respectively at this task?
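
To make the question concrete, here's a minimal toy model; every number in it is invented purely for illustration, not canon. It treats time-to-solve as exponentially distributed with a rate proportional to a rough "capability" multiplier, so the one-shot, limited-time success chance is 1 - exp(-c * base_rate):

Code:
import math

# Toy model: all numbers invented for illustration only.
# Time-to-solve is treated as exponential with rate proportional to a
# "capability" multiplier c, so P(success before the deadline) is
# 1 - exp(-c * BASE_RATE).

BASE_RATE = 0.223  # chosen so an unaugmented baseline succeeds ~20% of the time

def p_success(capability: float) -> float:
    """Chance of finishing the one-shot task before time runs out."""
    return 1.0 - math.exp(-capability * BASE_RATE)

for clade, c in [("baseline", 1.0), ("nearbaseline", 1.3), ("superior", 3.0)]:
    print(f"{clade:12s} capability x{c:.1f} -> {p_success(c):.1%} success")

Under those (made-up) multipliers the nearbaseline only climbs from about 20% to about 25%, while the superior nearly reaches 49%; the interesting question is what the multipliers should actually be.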


Also, sorry for the abstract terms. I'm trying to think of a task (or maybe a series of tasks) that would be a good example. Any ideas?
First off, 'IA', aka intelligence augmentation, is a very broad term that can encompass a lot of different things, and there is also the issue of how one defines the term itself. Some people would include genetically engineered intelligence or skill enhancements, while others would limit things to cybernetic implants. Still others would limit things to cybernetics, but not require that they be implants. An argument can be made that a smartphone with a fast wi-fi connection and Google can operate as a form of IA.

So, how are we defining the term and what kind of instrumentality(ies) are we considering?

Finally, given the time frame involved, I would expect early versions of the tech to have more to do with augmented reality and/or very advanced user interfaces than with any kind of additional or expanded mental abilities. Whether that would come before or during the time period mentioned, I'm not sure about yet.

Todd
I've got a lot to say on this topic but unfortunately no time today aside from five minutes at breakfast. Quickly though:

1) Agreed with Todd that in the very Early Timeline the types of intelligence augmentation available are going to be minor. On the genetic side there will be people getting modifications to boost their general intelligence, but this isn't going to be at fantastical Limitless (as in the film) levels. It likely won't even be well understood; more like an ongoing, painstaking analysis of the differences in genes between people of high IQ and average IQ (trying really hard to control for all other variables). The improvements are likely to be significant but generally unnoticeable from a day-to-day perspective. In tests, modded people may only perform 5% better on average (later 10%, 15%, etc.), which makes a huge difference at a population level but not really for an individual.
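
To put the population-level point in numbers, here is a minimal sketch, assuming purely for illustration that scores are IQ-style, normally distributed with mean 100 and SD 15, and reading a "5% boost" as a +5 point shift in the mean:

Code:
import math

def tail_fraction(threshold: float, mean: float, sd: float = 15.0) -> float:
    """Fraction of a normal population scoring above `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Assumed for illustration: IQ-style scores (mean 100, SD 15), with a
# "5% boost" modelled as shifting the population mean to 105.
for mean in (100.0, 105.0):
    print(f"mean {mean:.0f}: {tail_fraction(145, mean):.3%} of people score above 145")

That prints roughly 0.135% versus 0.383%: a shift an individual would barely notice nearly triples the fraction of people out past what used to be three standard deviations.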

In terms of DNIs, back then they would literally just be interfaces: a faster way of interacting with external software. That's still pretty powerful though; having a device that can slowly learn your thought patterns and turn vague, half-formed intent into a well-executed search term is very useful. Add apps and a net connection for things like encyclopaedias, Wolfram Alpha, etcetera, and we get a highly boosted version of someone today with a well-connected phone.

Later, by the Technocalypse, things would have gotten more sophisticated, with DNIs running programs to improve the functioning of the human brain (selectively boosting memory, aiding training in pattern recognition, inducing states of concentration, etc.), but still nowhere near later developments.

One interesting thing to consider, though, is that it probably isn't that late in the timeline that IA, from a (near)baseline human point of view, becomes complete. The best ways of making a sophont of that clade more intelligent will long since have been mapped, and the frontier of IA technology will have moved into the realm of superturings, and then hyperturings.

2) Generally a neb with an IA-running DNI will not be a match for an average superior. Maybe a really heavily augmented neb will have a chance, but almost by definition, once you start artificially pushing yourself to and beyond the edge of the bell curve, you're becoming a superior anyway.
One problem with early IA would be the easy availability of useful, but inappropriate, apps. You might install a popular and easy-to-use search engine in your exoself, but it might be unduly influenced by commercial, philosophical or political biases imposed by the designers of the app. Later in the timeline your exoself would be allowed to develop from your own mental architecture, and would reflect your own personal intellectual growth; but it is possible that the designers of any augmentation technology could have an inappropriate level of influence over the nature of the augmentation.
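
As a toy illustration of what that designer influence might look like (names and numbers entirely invented): a search app whose ranking quietly mixes in a sponsorship term the user never sees.

Code:
# Hypothetical exoself search app; everything here is invented for illustration.
def rank(results, sponsor_fee):
    # The user sees only "relevance"; the hidden fee term is the
    # designer-imposed commercial bias.
    return sorted(
        results,
        key=lambda r: r["relevance"] + 0.5 * sponsor_fee.get(r["source"], 0.0),
        reverse=True,
    )

results = [
    {"source": "independent-encyclopaedia", "relevance": 0.9},
    {"source": "megacorp-portal", "relevance": 0.6},
]
sponsor_fee = {"megacorp-portal": 0.8}  # hidden payment, arbitrary units

print([r["source"] for r in rank(results, sponsor_fee)])
# The less relevant megacorp-portal comes first: a bias the user cannot audit.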

I suppose that would be a problem throughout OA's history. In the early timeline, augmentation technology could be crude and limited in toposophic diversity, forcing augmented minds into a number of easy-to-achieve pigeonholes, or towards certain philosophical goals; later augmentation technology could be much less limited in scope, but the influence (or ainfluence) of the Transapients would always be present in the architecture of these devices.
I think one of the biggest effects would be negative - the sense of alienation when your brain doesn't work the same way as other people's can be intense.

If you want to have sane people with Augmented Intelligence, you can't have just one, and you can't have them scattered around with no contact with each other. To develop or maintain sanity, we need peers.

Of course, that brings fear into the equation. Putting a bunch of them in contact with each other makes them an existential risk to everyone else.
I'd suggest that there being many wouldn't make them an existential risk to the current population, possibly just the reverse. (e.g. many extremely intelligent people become doctors.) However, they might pose a risk to future generations since they could compete much more effectively in almost anything. Still, if current experience is any indication, that might not be a serious problem: more intelligent people tend to have fewer offspring.
(11-12-2016, 06:35 AM)Bear Wrote: I think one of the biggest effects would be negative - the sense of alienation when your brain doesn't work the same way as other people's can be intense.

If you want to have sane people with Augmented Intelligence, you can't have just one, and you can't have them scattered around with no contact with each other. To develop or maintain sanity, we need peers.

Of course, that brings fear into the equation. Putting a bunch of them in contact with each other makes them an existential risk to everyone else.

I think this would again come down to what sort of IA we are talking about here. A cybernetic augment that can be easily taken on and off (think 'magic glasses' with a very fast internet connection and an advanced interface of some kind) probably wouldn't cause issues with either alienation or being a threat at all.

Genetically engineering someone for increased intelligence might cause feelings of alienation or it might not - it would depend on just what we mean by enhanced intelligence as well as how the culture around the person/people treated them. Someone who is pretty much the same as everyone else but has amazing musical talent or the ability to do complex mathematics in their head might not be inclined to feel all that alienated at all.

Until we can get a fairly clearly agreed-upon definition of what we mean by 'enhanced intelligence' and what form of it we are considering, a lot of this becomes an exercise in everyone having a different thing in mind, and we spend a lot of time talking past one another.

As far as whether or not enhanced people would be a threat - why would they be? Or why would they be more of a threat than all the unenhanced people running around who can build bombs or whip up plagues in a college bio lab if they felt so inclined? I realize this is a common trope in SF (whether speaking of cybernetic or gengineered beings), but is it objectively a significant threat, or does it say more about our fears of 'the other'?

Todd
(11-12-2016, 10:57 PM)selden Wrote: I'd suggest that there being many wouldn't make them an existential risk to the current population, possibly just the reverse. (e.g. many extremely intelligent people become doctors.) However, they might pose a risk to future generations since they could compete much more effectively in almost anything. Still, if current experience is any indication, that might not be a serious problem: more intelligent people tend to have fewer offspring.

Having worked with doctors during a couple of past portions of my career, I'm not really of a mind that they are all that much smarter (or any smarter at all, really) than anyone else. At least if we define 'smarter' as 'general problem-solving ability'.

A doctor may be able to perform complex surgeries spectacularly well - and be totally helpless when it comes to changing the oil on their car. Or bad at relationships. Or barely able to balance their checkbook.

The same often applies for pretty much any other profession that we tend to culturally associate with 'superior intelligence'. Humans are often especially good in one or more areas but not so good in others.

There is also the impact of things like education, experience, and general interest or personality, all of which can impact how good someone is at something (and how bad they are at other things).

Todd