The Orion's Arm Universe Project Forums





Gizmodo on mind uploading, featuring Anders Sandberg
#23
(01-17-2019, 12:01 AM)extherian Wrote: Scientific research has tended towards the view that most humans are instinctively altruistic, and that even human babies have some vague sense of right and wrong, however poorly they understand it. I believe iancampbell is trying to appeal to this common sense in trying to persuade you that moral categories have real meaning beyond mere self-interest, but it doesn't make for a good argument when it can't be clearly and explicitly defined.

Human beings have evolved to be social creatures, so I don't find it surprising that they exhibit social traits, nor that we generally consider such traits to be a good thing. I've also recently seen some YouTube videos from atheists framing morality as a product of these traits. However, I don't find that argument persuasive in the context of 'morality' as it has usually been treated throughout human history. To me it feels more like an attempt to redefine morality (can't we all agree that THIS is what morality is now/has always been?) to fit newly learned information, and to move it away from the nebulous, subjective, and largely made up social construct that it is. I also feel that the concept of morality has been so thoroughly contaminated by centuries of metaphysical baggage that the effort may be more work than it's worth, and risks infecting the scientific information with the moral metaphysics.

Beyond that, I would suggest that altruism, kindness, and similar 'positive' social things are actually excellent tools from a self-interested perspective. I make a relatively small investment of time and energy doing nice things for others or promoting such 'positive' things in a general way. In return/response, others do the same back to me. But because I am greatly outnumbered by everyone else, I receive a 'return on my investment' that results in my life being more pleasant than it would be if I used my initial investment of time and energy trying to achieve that pleasantness directly. So self-interest wins again and morality is demonstrated to be utterly superfluous. Big Grin

(01-17-2019, 12:01 AM)extherian Wrote: Indeed, the first sophont AI may not have possessed anything like a human sense of morality, instead operating on a kind of rational psychopathy, cooperating when it made sense and exploiting others when they thought they wouldn't get caught. Humans trying to appeal to 'the common good' would have found the Superturings unpersuaded by such emotional arguments, if they understood them at all. These AI may even have believed that their human counterparts were actually trying to manipulate them for their own benefit, and were just telling self-serving lies when talking about concepts like good and evil.

One of the most important tasks for an early First Federation would be establishing basic communications standards so that beings with extremely different cognitive architectures and mental biases could communicate. It would have been very easy for an AI to misinterpret human behaviour as threatening, not to mention other AI whose minds followed alien templates. The malware plagues and other disasters that destroyed Solsys during the Technocalypse may well have resulted from these misunderstandings.

I like to think of the First Federation's protocols as a way of making it easier for these beings 'to put themselves in one another's shoes', as it were. This would go beyond just modelling the other person's mind; it would require a means by which subjective emotions and sensations could be encoded and processed by beings whose minds were not designed to comprehend them. Of course, the means by which this could best be accomplished would be a matter of great division, and it's not surprising that it eventually fell apart.

One advantage of the first AI being uploads, if such a thing were to come about, would be that the resulting minds would be far easier to understand, and the likelihood of disastrous miscommunication considerably reduced.

I don't recall if we've formally updated the relevant articles yet, but IIRC what you're describing is very close to our current take on the nature of the first AIs. Not that they were all what we would consider psychopaths, but that they were produced by a variety of methods, most of which involved some degree of 'evolution' and uncertainty rather than a top-down, fully planned and directed process. This often resulted in beings that were radically different from human minds. So some might have been rational psychopaths, while others were something so totally other that we don't even have a word for it now, and still others were some hybrid of two or more of the types already listed. And so on.

We've had a number of on and off discussions of what the First Federation, Megacorps, and Second Federation got up to. I don't think it's transferred into firm writeups yet (so much to do, so much to do) but IIRC the most recent consensus/collective notion that has so far emerged is quite similar to what you're describing here. More specifically:

The First Fed created an 'ontology' (the First Federation Ontology), a way of thinking and viewing reality that allowed a great many, often radically different beings (AIs, Uploads, Provolves, Near-baselines in various flavors, transapients) to live and work more or less peacefully together and maintain a loosely unified society over interstellar distances. As awesome an achievement as this was, the First Fed ontology proved to be less than stable in the long term and unable to adequately cope with the growing diversity and spatial distances as Terragen civilization expanded across space.

The Megacorps are described as being run by transapients (the CEO and top executives, although they may not have used those exact titles), and each developed its own 'mini-ontology' that allowed the megacorporation to operate across interstellar distances and time scales, often with a diversity of sophont 'employees'. Different megacorps operated under different ontologies and had different structures and operational processes, but overall they sacrificed some of the benefits of the First Fed ontology around sophont rights and such in favor of being able to work over greater distances.

The Second Federation Ontology was introduced by higher S beings and reintroduced the best features of the First Fed ontology while operating in a way that also accommodated the much greater distances and timescales that civilization was operating over by this point.

The archai-ruled empires that would eventually become the Sephirotic Empires started to appear, employing more advanced memetics that supplanted the ontologies of prior eras and could accommodate a vast range of sophont beings across multiple species, substrates, and S-levels, keeping them all working more or less in harmony across thousands of light-years of space and hundreds of millions of solar systems.

Or something like that. Basically, what you're suggesting is very much in line with the direction we've most recently been thinking of taking the setting in this area. Great minds thinking alike and all. Big Grin

Hope this helps,

Todd


RE: Gizmodo on mind uploading, featuring Anders Sandberg - by Drashner1 - 01-17-2019, 02:44 PM
