The Orion's Arm Universe Project Forums

Better late than never
#31
Yes, you could, but building dedicated hardware is the more efficient solution, and you'd have a huge budget for a project like that. The first AI would be built on dedicated hardware, and may later be run using emulation software on conventional computers.
Such hardware would involve some form of physical neural network, the only model experimentally tested to produce sophonce. :D
Its presapient predecessors, whose distant ancestors are being worked on now, could very well use emulators, though. Eventually they'd become too hard to work with as their complexity increased.

My lifelong goal: To add "near" to my "baseline" classification.

Lucid dreaming: Because who says baseline computronium can't run virches?
#32
Even if a 'purely software' AI could be created as a first attempt (rather than as a later generation/more advanced version), I would hope that the creators wouldn't just give it the means to operate in an unrestricted manner on the internet.

Of course, it isn't that hard to think of a scenario in which even a human-level intelligence might hide nasty tendencies for years (humans are known to do that now), and that could be even trickier with a superintelligence, which might or might not be able to manipulate its creators into setting it free. But this kind of risk already exists in variant forms with almost any of the techs that OA is built on, really.

Even if no AIs are ever created, humans augmenting their intelligence will figure out how to do nasty things. Humans controlling nanotech, or biotech, or advanced self-repping automation will figure out how to do nasty things, etc.

Life is not without risk, basically.

Todd
#33
Of course emulation is possible, but (except when emulating systems that natively run much more slowly than the host system) emulation is usually quite inefficient compared to running on optimized hardware. I suspect an emulation of IBM's Watson hardware would be painfully slow, for example.
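The overhead comes from the per-instruction fetch-decode-execute loop that any emulator has to run on the host. Here's a minimal sketch in Python just to illustrate the effect (the toy VM and its instruction set are made up for this example, not any real architecture):

```python
import time

# Toy "guest" program: sum the integers 0..N-1 on a tiny register machine.
# Every guest instruction costs the host a lookup, a branch and bookkeeping,
# which is where the emulation overhead comes from.
N = 1_000_000

PROGRAM = [
    ("SET", "i", 0),        # i = 0
    ("SET", "acc", 0),      # acc = 0
    ("ADD", "acc", "i"),    # acc += i   <- loop body starts here (pc = 2)
    ("INC", "i"),           # i += 1
    ("JLT", "i", N, 2),     # if i < N: jump back to pc 2
    ("HALT",),
]

def run_emulated(program):
    """Interpret the guest program one instruction at a time."""
    regs = {}
    pc = 0
    while True:
        op = program[pc]
        name = op[0]
        if name == "SET":
            regs[op[1]] = op[2]
        elif name == "ADD":
            regs[op[1]] += regs[op[2]]
        elif name == "INC":
            regs[op[1]] += 1
        elif name == "JLT":
            if regs[op[1]] < op[2]:
                pc = op[3]
                continue
        elif name == "HALT":
            return regs["acc"]
        pc += 1

def run_native():
    """The same computation done directly on the host."""
    return sum(range(N))

t0 = time.perf_counter(); emulated = run_emulated(PROGRAM); t1 = time.perf_counter()
t2 = time.perf_counter(); native = run_native(); t3 = time.perf_counter()

assert emulated == native
print(f"emulated: {t1 - t0:.3f}s, native: {t3 - t2:.3f}s")
```

Even in this tiny example the interpreted path is usually far slower than the direct computation on the same host, and the gap only widens as the guest system gets more complex.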
Selden
#34
Augmented human Transapients could well behave even worse than their AI counterparts without proper guidance. With AI, there is the possibility of developing an intelligent being that lacks the urges and impulses that lead humans to such bad behaviour.

It makes me wonder whether there are societies of virtual sophonts who edit their minds so that doing the right thing comes easier to them.
#35
(03-25-2018, 12:41 AM)extherian Wrote: Augmented human Transapients could well behave even worse than their AI counterparts without proper guidance. With AI, there is the possibility of developing an intelligent being that lacks the urges and impulses that lead humans to such bad behaviour.

It makes me wonder whether there are societies of virtual sophonts who edit their minds so that doing the right thing comes easier to them.

Of course, there are! And of physical sophonts, too. Sometimes voluntarily and sometimes not....
Selden
#36
Well, human transapients canonically lose all human urges, feelings, and indeed everything else human about them, as it's all childish to them now. What they become is pretty much entirely random, actually.

My lifelong goal: To add "near" to my "baseline" classification.

Lucid dreaming: Because who says baseline computronium can't run virches?
#37
(03-25-2018, 04:56 AM)Alphadon Wrote: Well, human transapients canonically lose all human urges, feelings, and indeed everything else human about them, as it's all childish to them now. What they become is pretty much entirely random, actually.

Yeah. It's like comparing a fertilised egg to a Nobel prize winner. Sure, they have a bunch of fundamental similarities and one came from the other, but expecting the Nobel laureate to act like the egg is wrong.
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#38
So ascending basically amounts to suicide for the sophont involved, then. They might as well create a Copy and have that being ascend instead.
#39
(03-25-2018, 05:50 AM)extherian Wrote: So ascending basically amounts to suicide for the sophont involved, then. They might as well create a Copy and have that being ascend instead.

Do you consider growing from a baby to an adult suicide? One weird part of the ascension process is that subjectively it may take far longer than it appears from the outside, so whilst a sophont witnessing a friend ascend might consider the resulting transapient that appears a few minutes later a stranger, for the transapient it's been a continuous, step-by-step experience.
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#40
No, I just took your "fertilised egg to Nobel prize winner" too literally. :P Growing from child to adult is a good metaphor. But just as childhood trauma can make for dysfunctional adults, couldn't an unprepared modosophont end up turning into a delusional transap? Especially if it happened early in the timeline, when mature transapient societies did not yet exist.

What I was trying to say was that a specially designed AI might make for a less risky ascension to S1 than an uploaded human, at least if it was a civilisation's first time venturing into this field.