Posts: 272
Threads: 28
Joined: Mar 2013
06-22-2015, 03:13 AM
(This post was last modified: 06-22-2015, 03:14 AM by chris0033547.)
Look at this:
http://googleresearch.blogspot.co.uk/201...eural.html
And here's the picture gallery:
https://goo.gl/photos/fFcivHZ2CDhqCkZdA
This could be a precursor for what will be known as Perfect Art millennia from now. ;-)
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
Posts: 7,345
Threads: 297
Joined: Jan 2013
I saw this story the other day; really neat. As I understand it, the images are created by getting image-recognition software to look for a specific pattern in a picture (either a picture of something, or simply static), then having the software alter the image to emphasise what it's found. That new image is then fed back into the software, so over time it keeps finding and drawing patterns. One of the useful things this has done is the dumbbell picture: the software added arms to the dumbbells, revealing that it had erroneously learnt that dumbbells have arms, probably because in most training pictures people are holding them.
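That find-amplify-feed-back loop can be sketched in miniature. The snippet below is purely illustrative (not Google's actual code): a single hand-written convolution kernel stands in for a trained network, and the image is repeatedly nudged to strengthen whatever the "network" already responds to, starting from pure static.

```python
import numpy as np

def toy_network_response(img, kernel):
    """Convolve the image with a kernel; the response map is a toy
    stand-in for what a real network 'sees' in the picture."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def amplify(img, kernel, steps=10, rate=0.1):
    """The feed-back loop: repeatedly nudge the image to increase the
    magnitude of the kernel's response, then look again."""
    img = img.copy()
    kh, kw = kernel.shape
    for _ in range(steps):
        resp = toy_network_response(img, kernel)
        # Stamp the kernel onto each patch, signed so that existing
        # responses get stronger -- 'emphasise what it's found'.
        grad = np.zeros_like(img)
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                grad[i:i + kh, j:j + kw] += kernel * np.sign(resp[i, j])
        img += rate * grad
    return img

rng = np.random.default_rng(0)
static = rng.normal(size=(16, 16))           # start from pure noise
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # 'pattern' the net looks for
dreamed = amplify(static, edge)
```

After a few iterations the noise image is visibly dominated by the kernel's pattern, which is the same mechanism (on a vastly smaller scale) that turns static into dogs and arches.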
Things like this are interesting in themselves, but what they reveal is even more fascinating. I can't help but wonder if this is an early indication that if we don't train neural networks correctly, they'll find patterns where none exist. The last thing we want, if we start using smarter and smarter software to run our lives, is for it to go off the deep end and find conspiracy theories everywhere! Less flippantly: a characteristic of schizophrenia is perceiving patterns in random data (it's called apophenia), and we don't want that in our software.
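The "patterns where none exist" failure has a well-known toy analogue in machine learning: give a model more parameters than data points and it will find a perfect "pattern" in pure coin flips, one that evaporates on fresh noise. A minimal sketch (all the numbers here are arbitrary, chosen only so that features outnumber samples):

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_features = 20, 40            # more knobs than observations
X = rng.normal(size=(n_train, n_features))
y = rng.choice([-1.0, 1.0], size=n_train)   # labels are coin flips

# Least-squares fit: with 40 features and 20 samples the model can
# interpolate the random labels exactly -- a 'pattern' is found.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
train_acc = np.mean(np.sign(X @ w) == y)     # perfect on training noise

# ...but on fresh noise the 'pattern' collapses back to chance level.
X_new = rng.normal(size=(1000, n_features))
y_new = rng.choice([-1.0, 1.0], size=1000)
test_acc = np.mean(np.sign(X_new @ w) == y_new)
```

Software-grade apophenia, more or less: 100% confidence on the data it has seen, coin-toss performance on anything new.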
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 272
Threads: 28
Joined: Mar 2013
(06-22-2015, 03:34 AM)Rynn Wrote: if we don't train neural networks correctly, they'll find patterns where none exist. The last thing we want, if we start using smarter and smarter software to run our lives, is for it to go off the deep end and find conspiracy theories everywhere! Less flippantly: a characteristic of schizophrenia is perceiving patterns in random data (it's called apophenia), and we don't want that in our software.
It's also interesting to look at the comments on that Google blog entry. Many people mentioned that some of the pictures look like what a human would see under the influence of LSD. If we knew which neural networks in the brain get enabled or disabled during an LSD trip, reducing the brain's ability to accurately process the images from the eyes, that knowledge could be used to improve the algorithms that train artificial neural nets, and perhaps to come up with new network designs. I wonder whether anyone has performed experiments like this with rats or mice.
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
Posts: 7,345
Threads: 297
Joined: Jan 2013
06-22-2015, 08:58 PM
(This post was last modified: 06-22-2015, 08:59 PM by Rynn.)
There have been plenty of studies of LSD in animals such as mice, but it's pretty difficult to determine what (if anything) they are hallucinating; it's not like you can get them to draw. Having said that, elephants can be taught to paint, and they have been given LSD in past experiments. The latter didn't turn out so well, though, and good luck getting ethical approval for that again, let alone any useful data. If you wanted to explore this phenomenon, the best way would be to get human volunteers, give them LSD, have them hallucinate while looking at a blank page, get them to draw what they see, and image their brains all at the same time. Again, though, you might not get any useful data, and having seen people on acid before, I wouldn't like to be the experimenter trying to keep them on task.
And after all that, I'm skeptical you'd learn enough to improve computer algorithms. Brain imaging isn't good enough to show the specific neural networks in play; it simply shows which areas of the brain are active. You'd have to record all of that, do some solid work to be sure your results are relevant (and not caused by other stimuli), then look for those pathways in serially sectioned brains, map them, and try to model them. The modelling itself would also be extremely hard, because sectioned brains are static data, and incomplete data at that (neural weightings, for example, are unknown).
In other words the level of effort is very high for an unknown amount of reward.
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 233
Threads: 15
Joined: Nov 2014
This is pretty spectacular... and in addition to being spectacular, could be quite inspiring for OA purposes! Some thoughts:
In commentaries, I've seen people considering expanding this into 3D and/or video, as well as an audio version. One person comparing it to drugs also said that something similar may go on with hallucinogens in terms of hypersensitivity to meaning and emotion. With the resources of OA, I could easily see beings having the ability to generate incredible multisensory virtualities out of random data, encompassing all the senses, not only external but internal perceptions as well. Which could be really interesting. And there'd be more control than, well, drugs, especially since they could prime them with whatever sort of data they want. Potential for entirely new genres of art, recreation, storytelling...
This could also, of course, go incredibly wrong. Hypersensitive AIs, or AIs sensitive to the wrong things (like the dumbbell example), could be major issues in the Interplanetary Age. I wonder if something like that could've been a problem in the Nanodisaster.
I also think this could provide an insight into the sort of thing that might go wrong during Bloatware. So much processing power... no idea what's significant... no idea how to integrate it... EVERYTHING IS FOUNTAINS AND ANIMAL PARTS! Not literally, but this could help look at the psychology of transapient madness...
(On a personal note, I may do something like that with Dragon Liver, overseer of Dilmun system, who's going to come down with a bad case of bloatware.)
Another area of particular relevance is the Panvirtuality. I could easily see denizens of the Panvirtuality deriving from these sorts of AI hallucinations. For that matter, entire Panvirtuality realms could come from them. Imagine a personality detector/emulator that gets hypersensitive and creates "uploads"... really new AI... out of random noise! Or just taking a bunch of pattern recognition or simulation data and turning it inwards, like if the realms of fountains and arches became full environments. Especially in consideration of the recursive possibilities, wherein zooming in on one environment reveals more inside. I could see these kinds of things coming into existence both intentionally and unintentionally... depending on their basis, they could range from quite recognizable (if surreal!) to completely alien.
For that matter, the Sephirotics are diverse enough that some stuff like that could exist there too, it just seems like one potential way of fleshing out (byte-ing out?) the Panvirtuality.
Posts: 7,345
Threads: 297
Joined: Jan 2013
(07-07-2015, 03:24 PM)TSSL Wrote: This could also, of course, go incredibly wrong. Hypersensitive AI, or just sensitive to the wrong things AI (like the dumbbell thing) could be major issues in the Interplanetary Age. I wonder if something like that could've been a problem in the Nanodisaster.
The potential for AI mental-health problems seems just as real as for humans, and this is a pretty good example of what it could look like. Tweak your pattern-recognition algorithms to try to get an intelligence boost and you may become oversensitive, seeing complex relationships where there are none. The last thing you want from an AI is for it to become a paranoid schizophrenic.
The Nanodisaster angle is a good one; you should suggest it to Todd in the Nanodisaster thread. There could be a variety of malware plagues (or just plain inappropriate AI engineering) that throw AI pattern recognition out of whack. The repeated failure of the AIs running the ecosystem and life-support infrastructure could perhaps be explained by this. Things like this might not be solved until the First Federation era, when stability programs ("Embedded Rationality"? "Internal Consistency Checker"?) smart enough to prevent these mistakes arrive on the scene.
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 11,671
Threads: 451
Joined: Apr 2013
This video seems to use the same technology, and is seriously trippy.
http://hplusmagazine.com/2015/07/03/vide...cial-mind/
Posts: 233
Threads: 15
Joined: Nov 2014
They've released the source code! http://googleresearch.blogspot.com/2015/...izing.html
Me, I know next to nothing about programming, so I'm not sure I can do much with this, but for those of you who do, here it is!
There are also, apparently, already a couple sites that'll process uploaded images for you. Sounds like those are restricted to the dataset that tends to result in random animal parts, especially dogs.
Personally, I like the MIT dataset the best, so I'd be more excited if there's a way to mess around more with that. I'll have to investigate.
Posts: 233
Threads: 15
Joined: Nov 2014
New thought!
I wonder if a similar process could be an early source of primitive personality emulations. Like, take a neural network and train it on a bunch of my forum posts and journal entries and text messages and emails and essays and stories and poems and any other written stuff we can find until the neural network is trained to recognize "yeah, this is something TSSL would say" and "no, TSSL would never say that." And then run it backwards so that it starts to write things that I never actually wrote. A TSSL-emulator. It could fit in with the simms and scions that the "Early History of Uploading Technology" article discusses as precursors.
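That "run it backwards" step is roughly how generative text models work: once a model has learned what someone plausibly writes, you can sample from it instead of classifying with it. A deliberately crude illustration, using a character-level Markov chain rather than a neural network (the two-line corpus is a placeholder for the forum posts, emails, and essays described above):

```python
import random
from collections import defaultdict

def train(texts, order=3):
    """Count which character follows each length-`order` context
    across everything the person has written."""
    model = defaultdict(list)
    for text in texts:
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=60):
    """Run the trained model as a generator: repeatedly sample a
    plausible next character given the recent context."""
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)
    return out

# Placeholder corpus standing in for a real person's writing
corpus = [
    "the neural network finds patterns in the noise",
    "the network learns what the person would say",
]
model = train(corpus)
random.seed(1)
print(generate(model, "the"))
```

The output recombines the source texts into sentences that were never actually written, which is the emulator idea in embryo; a real personality simm would of course need something enormously richer than letter statistics.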
Posts: 272
Threads: 28
Joined: Mar 2013
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison