The Orion's Arm Universe Project Forums
Robot rights
#11
(06-12-2017, 09:45 AM)Rhea47 Wrote: Super AI is a bad idea. I say stay away from it unless you can transfer a human mind to a computer so it understands that it's immoral to turn humans into paperclips.

Morality is just a made up social control mechanism, like Santa Claus or the Tooth Fairy.

And what's wrong with turning humans into paperclips? Huh

Todd
#12
(06-12-2017, 09:45 AM)Rhea47 Wrote: Super AI is a bad idea. I say stay away from it unless you can transfer a human mind to a computer so it understands that it's immoral to turn humans into paperclips.
In OA the technology for transferring a human mind into a computer arrives much later than the advent of artificial general intelligence (AGI). I suspect this will happen in real life too, since the uploading of a human mind appears to be a very complex operation, while there are probably many routes to AGI.
https://en.wikipedia.org/wiki/Artificial...telligence
One promising route towards AGI is the low-level mapping of the human mind/brain system, so that the result is a kind of generalisation of a human mind; this may be the best option we have for introducing a human-like perspective into AGI, although I should point out that humans can be cruel and unreliable too, so we may not be the best models after all.
#13
(06-12-2017, 10:57 AM)Drashner1 Wrote: Morality is just a made up social control mechanism, like Santa Claus or the Tooth Fairy.

And what's wrong with turning humans into paperclips?  Huh

Specific moral systems can indeed be argued to be a social control mechanism (i.e. those imposed on populations by an oppressive class), but morality itself is an evolved trait. Various studies have shown that primate groups have an innate tendency to revert to fundamental game-theoretic behaviours such as tit-for-tat, suggesting that nature has selected for these traits as they increase population fitness. But given that morality isn't absolute, this opens us up to the scary notion that there could be evolved moral systems quite distinct from our own that also increase population survival chances, perhaps even in excess of our own. IMO this is a commonly overlooked part of Peter Watts's Blindsight. People hold up that book for its awesome description of aliens that are quite alien, but a scarier subtext is that the universe is filled with life that is better than us at surviving and prospering, and whose methods are brutal from our perspective.
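
(As a toy illustration of how tit-for-tat style reciprocity gets selected for, here's a quick Python sketch of the standard iterated prisoner's dilemma - my own illustration, not taken from the studies I mentioned, and the payoff numbers are just the usual textbook assumptions.)

```python
# Toy iterated prisoner's dilemma: payoffs and strategy names are
# illustrative assumptions, not taken from any particular study.
PAYOFF = {  # (my move, their move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy whatever the other player did last round.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    # Run repeated rounds and return each strategy's total score.
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): reciprocators prosper together
print(play(tit_for_tat, always_defect))  # (9, 14): defection only buys a one-round edge
```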

In the case of robot rights in OA, aside from the fact that it wasn't clear for a long time whether AIs were sentient, this issue could have played a role in the split between pro-human and ahuman groups. The former possibly had innate social instincts similar to our own, giving rise to relatable altruistic behaviours. The latter could be quite different, making living alongside them difficult or impossible. They may be sociopathic from a human perspective.

It's almost Lovecraftian, really. In a conflict with such entities one could only win by literally sacrificing one's humanity: re-engineering one's brain to remove or radically alter our innate moral instincts (which admittedly can manifest in a hugely diverse range of behaviours). It's also why I really like the importance of empai/empaths in OA, even if it's something that doesn't get a lot of attention. The idea that a group would take a key trait that increases population survival and enhance it. Really cool.
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#14
(06-12-2017, 11:39 PM)Rynn Wrote: Specific moral systems can indeed be argued to be a social control mechanism (i.e. those imposed on populations by an oppressive class), but morality itself is an evolved trait. Various studies have shown that primate groups have an innate tendency to revert to fundamental game-theoretic behaviours such as tit-for-tat, suggesting that nature has selected for these traits as they increase population fitness.

I think we're defining 'morality' differently. In my experience, people tend to speak of 'morality' in terms of the idea that there are objectively true ideas of 'right' and 'wrong' - like there is a giant invisible scoreboard in the sky that says (for example) that genocide is objectively 'wrong'. However, I would argue that there is no such thing as 'right' or 'wrong' in any objective sense in this context. Killing millions or billions of people may be something we find distasteful in the extreme, but the universe doesn't 'care' in any objective sense, nor do their deaths 'matter'. The Earth will still turn, the Sun will still burn, and the universe won't care.

Going back to the issue of morality as a social control mechanism - by deeply conditioning people from an early age to react emotionally to some things as 'right' or 'wrong' without really thinking about it (a common way that morality works, in my experience), it is possible to get them to engage in socially desired behaviors without constant reinforcement - a sort of 'autopilot' social control mechanism that can be made to operate regardless of any objective or experimental information for or against what it advocates, or of what future generations might think about the same subject.

Anyway.

(06-12-2017, 11:39 PM)Rynn Wrote: But given that morality isn't absolute, this opens us up to the scary notion that there could be evolved moral systems quite distinct from our own that also increase population survival chances, perhaps even in excess of our own. IMO this is a commonly overlooked part of Peter Watts's Blindsight. People hold up that book for its awesome description of aliens that are quite alien, but a scarier subtext is that the universe is filled with life that is better than us at surviving and prospering, and whose methods are brutal from our perspective.

In the case of robot rights in OA, aside from the fact that it wasn't clear for a long time whether AIs were sentient, this issue could have played a role in the split between pro-human and ahuman groups. The former possibly had innate social instincts similar to our own, giving rise to relatable altruistic behaviours. The latter could be quite different, making living alongside them difficult or impossible. They may be sociopathic from a human perspective.

It's almost Lovecraftian, really. In a conflict with such entities one could only win by literally sacrificing one's humanity: re-engineering one's brain to remove or radically alter our innate moral instincts (which admittedly can manifest in a hugely diverse range of behaviours). It's also why I really like the importance of empai/empaths in OA, even if it's something that doesn't get a lot of attention. The idea that a group would take a key trait that increases population survival and enhance it. Really cool.

It might be interesting to try to 'design' other systems of 'morality' that different races and clades might use in the setting. And give thought to how they all get along in the whole structure of Terragen civ. For that matter, we shouldn't assume that the various human civs in the setting share our ideas of morality, either in whole or in part.

Todd
#15
(06-13-2017, 10:53 AM)Drashner1 Wrote: I think we're defining 'morality' differently. In my experience, people tend to speak of 'morality' in terms of the idea that there are objectively true ideas of 'right' and 'wrong' - like there is a giant invisible scoreboard in the sky that says (for example) that genocide is objectively 'wrong'. However, I would argue that there is no such thing as 'right' or 'wrong' in any objective sense in this context. Killing millions or billions of people may be something we find distasteful in the extreme, but the universe doesn't 'care' in any objective sense, nor do their deaths 'matter'. The Earth will still turn, the Sun will still burn, and the universe won't care.

We definitely agree in spirit if not in letter. What you're describing here I'd call "absolute morality": the idea that morals are some sort of phenomenon external to humanity (often touted as being universal or spiritual truths). I don't see any evidence that that exists. But it could be argued that objective morals exist, given that an innate tendency towards specific social behaviours is an observed biological trait. Of course this in no way means they are "correct" or anything of the sort.

(06-13-2017, 10:53 AM)Drashner1 Wrote: Going back to the issue of morality as a social control mechanism - by deeply conditioning people from an early age to react emotionally to some things as 'right' or 'wrong' without really thinking about it (a common way that morality works, in my experience), it is possible to get them to engage in socially desired behaviors without constant reinforcement - a sort of 'autopilot' social control mechanism that can be made to operate regardless of any objective or experimental information for or against what it advocates, or of what future generations might think about the same subject.

Anyway.

Terragen understanding of developmental psychology and sociology is a lot more advanced than ours. It's possible they might be able to ingrain moral systems in a society so deeply and subtly that they persist over long periods of time. This would be very important for generation ships IMO. Aside from overlooking the economic and ecological aspects of such a craft, I think people often overlook the social engineering challenges. If your aim is for the craft to survive intact to the destination system, then you have to consider what sort of culture and social institutions would be likely to last and keep the inhabitants alive. Exactly what that would be I don't know, but I suspect it would be very different to the assumptions of the stereotypical libertarian space cadet. You really don't want a system where it's possible to wake up and find someone now has a monopoly on the oxygen supply! Or that a revolution has punched a hole in the hull and there aren't enough people left alive to sustain the ark.

(06-13-2017, 10:53 AM)Drashner1 Wrote: It might be interesting to try to 'design' other systems of 'morality' that different races and clades might use in the setting. And give thought to how they all get along in the whole structure of Terragen civ. For that matter, we shouldn't assume that the various human civs in the setting share our ideas of morality, either in whole or in part.

That would be interesting. A start would be to choose alien axioms and work outwards from there. For example: a clade that has no concept of individuality, for which things like "I think therefore I am" make no sense.
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#16
(06-13-2017, 10:50 PM)Rynn Wrote: We definitely agree in spirit if not in letter. What you're describing here I'd call "absolute morality": the idea that morals are some sort of phenomenon external to humanity (often touted as being universal or spiritual truths). I don't see any evidence that that exists. But it could be argued that objective morals exist, given that an innate tendency towards specific social behaviours is an observed biological trait. Of course this in no way means they are "correct" or anything of the sort.

If we're defining biological behaviors as a form of morality in this context, then agreed.

(06-13-2017, 10:50 PM)Rynn Wrote: Terragen understanding of developmental psychology and sociology is a lot more advanced than ours. It's possible they might be able to ingrain moral systems in a society so deeply and subtly that they persist over long periods of time.

True - in fact, I'd suggest that 'memetics' as it is described within the setting (and possibly 'ontology' to some degree) would involve this sort of ingrained moral system in some form. This might also explain the differences between the various empires and why 'defecting' from one empire to another might be one of the few things left that causes social and interpersonal stress, since doing so presumably involves or implies a rejection of one's 'home' morality in favor of another. It might also mean that moving from one empire to another involves some degree of conditioning (possibly quite subtle). This might also tie into the issues behind the Version War to some degree, although I'm a bit fuzzy on that aspect of things.

(06-13-2017, 10:50 PM)Rynn Wrote: This would be very important for generation ships IMO. Aside from overlooking the economic and ecological aspects of such a craft, I think people often overlook the social engineering challenges. If your aim is for the craft to survive intact to the destination system, then you have to consider what sort of culture and social institutions would be likely to last and keep the inhabitants alive. Exactly what that would be I don't know, but I suspect it would be very different to the assumptions of the stereotypical libertarian space cadet. You really don't want a system where it's possible to wake up and find someone now has a monopoly on the oxygen supply! Or that a revolution has punched a hole in the hull and there aren't enough people left alive to sustain the ark.

Agreed - note also that similar social engineering would have application for space habs below a certain size - think places like Haloist colonies and the like. Also, the various interstellar megacorps might have/have had some form of 'corporate religion' or at least 'corporate morality' to help make themselves work over interstellar distances.

A potentially very interesting aspect of this is what happened in the early days when this sort of thing was new, and presumably much less capable or durable, and a given social system went off the rails while the ship was in deep space or the like.

(06-13-2017, 10:50 PM)Rynn Wrote: That would be interesting. A start would be to choose alien axioms and work outwards from there. For example: a clade that has no concept of individuality, for which things like "I think therefore I am" make no sense.

Hm. We don't really do as much with group minds as we potentially could. Perhaps the 'morality of a hive mind' would be an interesting place to start with this sort of thing? Although the exact form of group mind might need to be defined a bit first.

Todd
#17
(06-13-2017, 10:50 PM)Rynn Wrote: Terragen understanding of developmental psychology and sociology is a lot more advanced than ours. It's possible they might be able to ingrain moral systems in a society so deeply and subtly that they persist over long periods of time.

I know it's a lot more advanced, but if it's more advanced than we can understand, how can we talk about it in the EG?

(06-14-2017, 03:02 AM)Drashner1 Wrote: True - in fact, I'd suggest that 'memetics' as it is described within the setting (and possibly 'ontology' to some degree) would involve this sort of ingrained moral system in some form. This might also explain the differences between the various empires and why 'defecting' from one empire to another might be one of the few things left that causes social and interpersonal stress, since doing so presumably involves or implies a rejection of one's 'home' morality in favor of another. It might also mean that moving from one empire to another involves some degree of conditioning (possibly quite subtle). This might also tie into the issues behind the Version War to some degree, although I'm a bit fuzzy on that aspect of things.

So, what is the current canon about ontologies and memetics? Should I say in the ontology article that ontologies and memetics are very different things and that all sophonts in OA can tell the difference, or should I say that modosophonts can confuse one with the other because of their points in common?

(06-14-2017, 03:02 AM)Drashner1 Wrote: Agreed - note also that similar social engineering would have application for space habs below a certain size - think places like Haloist colonies and the like. Also, the various interstellar megacorps might have/have had some form of 'corporate religion' or at least 'corporate morality' to help make themselves work over interstellar distances.

Corporate religions are something I think could be very common.

(06-14-2017, 03:02 AM)Drashner1 Wrote: Hm. We don't really do as much with group minds as we potentially could. Perhaps the 'morality of a hive mind' would be an interesting place to start with this sort of thing? Although the exact form of group mind might need to be defined a bit first.

We still need to define a lot of things in the setting  Big Grin
#18
(06-14-2017, 04:19 AM)Avengium Wrote: I know it's a lot more advanced, but if it's more advanced than we can understand, how can we talk about it in the EG?

By speaking carefully in general terms. We don't know exactly how a fusion reactor or a starship would work, but we can still describe them. In this case we might say things such as:

Over the course of its long history, Terragen civilization has refined the disciplines of developmental psychology and sociology to a high art. Using techniques refined over thousands of years, Terragen mind-techs can reliably instill artificial social structures (including morality, group psychology, etc.) that are optimized for a particular set of circumstances or conditions. Applications include creating stable societies on worldships or in small deep space habs, as well as in the most diverse and cosmopolitan of communities, where hundreds of radically different sophont species may live and work together in peace and relative harmony.

Or something like that. This is just a rough bit of brainstorming at the moment.

(06-14-2017, 04:19 AM)Avengium Wrote: So, what is the current canon about ontologies and memetics? Should I say in the ontology article that ontologies and memetics are very different things and that all sophonts in OA can tell the difference, or should I say that modosophonts can confuse one with the other because of their points in common?

There really isn't a current canon about these things or how they relate - not beyond the current EG articles anyway.

My personal opinion is that an ontology would be something analogous to the constitution of a nation-state - a fairly stable structure of thinking laying out the foundations and core principles on which a society is based - while memetics are a tool for manipulating beliefs and attitudes in the shorter term. To extend the analogy - memetics might be somewhat equivalent to laws or the process of lawmaking - they can take many forms but are expected to still fall within the framework of the pre-existing constitution - or ontology in this case.

But that's just my opinion.
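
To make the constitution/law analogy a bit more concrete, here's a rough sketch in Python - purely illustrative brainstorming on my part, none of the names or values below are canon. The ontology is the slow-changing framework of core principles, and a memetic campaign only 'fits' if it doesn't attack those principles:

```python
from dataclasses import dataclass

@dataclass
class Ontology:
    # The 'constitution': slow-changing core principles a society is built on.
    core_principles: frozenset = frozenset()

@dataclass
class MemeticCampaign:
    # The 'laws': shorter-term manipulations of beliefs and attitudes.
    promotes: frozenset = frozenset()
    undermines: frozenset = frozenset()

def admissible(campaign: MemeticCampaign, ontology: Ontology) -> bool:
    # A campaign fits the framework only if it doesn't attack core principles.
    return not (campaign.undermines & ontology.core_principles)

# Hypothetical example values, not canon:
empire = Ontology(core_principles=frozenset({"sophont rights", "archai oversight"}))
ad_push = MemeticCampaign(promotes=frozenset({"visit the new hab"}))
coup = MemeticCampaign(undermines=frozenset({"archai oversight"}))
print(admissible(ad_push, empire))  # True
print(admissible(coup, empire))     # False
```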

(06-14-2017, 04:19 AM)Avengium Wrote: Corporate religions are something I think could be very common.

At some times and places within the setting, very much so.

(06-14-2017, 04:19 AM)Avengium Wrote: We still need to define a lot of things in the setting  Big Grin

Yup - fun ain't it? Big Grin

Todd