The Orion's Arm Universe Project Forums

What the heck is consciousness anyway?
#1
Wall of text warning: I'm posting an essay here, and it's long. I would very much like to discuss it, though, and I think the OA forum might be interested in it. So without further ado...

In the business of empirically attempting to create consciousness, some of the basic issues I need to address are currently considered philosophical. But philosophy is the business of asking questions, and it doesn't seem particularly concerned with definitively answering them so much as with discussing which answers we prefer. The world has rarely turned out to be explained by what we preferred to be true before carrying out empirical experiments. It is not usually consistent with the concepts we start with, any more than chemical reactions can be adequately explained by the ancient natural philosophers' view that all matter was composed of the elements fire, water, earth, and air.

The world's symmetry, consistency, and indeed the concepts that support that consistency, require us to discover them. When we carry out empirical experiments, we discover in what ways the concepts we started with are wrong. Many things that were once matters of "natural philosophy" - light, magnetism, the movement of stars and planets, and so on - became sciences grounded in knowledge as our theories converged toward better approximations of ground truth. Philosophy has a very important role as the foundation or starting point of sciences, and as a source of the theories that scientific inquiry must support or disprove. But philosophy itself is pre-scientific.

The fundamental problem staring us in the face here is that we have no blinking clue what consciousness is. The thing I am trying to create here, even if I am successful, will be considered by many to be merely a simulation, and by many others to be merely an approximation. And depending on what philosophical definition we give to consciousness, those people can be considered correct. Some people consider anything that can pass the so-called 'Turing Test' to be conscious, but we've already done that experiment; I've worked on chatterbots before, and bluntly, they aren't. People unfamiliar with their workings can see them as conscious for the same reason that we see human-ish faces in electrical sockets and in automobile headlights and grilles. The Turing Test is not a good test because we are not good judges. Humans anthropomorphize the universe (it hates that!) and are predisposed to see creatures like ourselves where none exist.

And yet I confidently assert that chatterbots are not conscious, when we have no working understanding of what consciousness is. Clearly I'm using some definition of consciousness or I could not state that so flatly. That leads back to philosophy. At some point I have to decide what my working definition of consciousness actually is. I've said before that I consider intelligence to be something that exists in some measure in many different systems; I consider ELIZA, for example, to be about as smart as grass, and modern chatterbots to be about as intelligent as clams. Intelligence is relatively easy to define, at least for me: it has to do with the number of different states a system can be in, and how readily, correctly, or appropriately to its existential purpose it transitions among those states. Consciousness per se, my goal and grail, is something else.
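To make that working definition slightly more concrete, here's a toy sketch of how I might score it. This is purely my own illustrative framing, not any standard metric; the function name and the numbers are invented for the example.

```python
# Toy illustration: "intelligence" as bits of available state, weighted by how
# often the system's transitions are appropriate to its purpose. A sketch only.
import math

def intelligence_score(num_states, transitions_were_appropriate):
    """num_states: distinct states the system can occupy.
    transitions_were_appropriate: one boolean per observed transition,
    True when the transition served the system's existential purpose."""
    if num_states < 2 or not transitions_were_appropriate:
        return 0.0
    appropriateness = sum(transitions_were_appropriate) / len(transitions_were_appropriate)
    return math.log2(num_states) * appropriateness

# A thermostat: two states, transitions usually appropriate.
print(intelligence_score(2, [True, True, True, False]))     # 0.75
# A chatterbot: vastly more states, transitions apt less often.
print(intelligence_score(10**6, [True] * 6 + [False] * 4))  # ~11.96
```

The number says nothing about whether there is anyone home, which is exactly the distinction the rest of this essay is about.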

I think that consciousness means that there is an experiential subjective reality. It is the difference between electromagnetic radiation at a 700 nm wavelength and the color red. It is the difference between the knowledge that a physical instrumentality has sustained damage and the experience of pain. It is the difference between objective information and subjective experience. As Thomas Nagel famously put it, consciousness means that there is something that it is like to be the conscious thing. Unlike intelligence, I think that it exists in different kinds rather than in different amounts. Cats and bats and rats are clearly intelligent, but not as intelligent as people. They are also clearly conscious, but here the difference is one of kind rather than degree; they are not conscious like people.

What are the differences? Lacking a clear definition of subjective reality, we have no way of stating what they are. But different kinds of creature have different experiences of subjective reality, and it goes much deeper than the fact that dogs have no experience of red because they lack that kind of cone cell in their retinas. I see what I believe is evidence of deeper differences.

Dogs, for example, are pack animals; they cooperate in hunting, share territory cooperatively, and eat together from the same food sources at the same time. Therefore they have a subjective experience of cooperative concepts like pointing; wolves even point game for each other during hunts. When you point at something, a dog usually knows to look where you're pointing. Because they are intensely cooperative in the way they do things, a dog has a subjective experience of cooperation succeeding or failing; a dog is eager to please. Because they eat together at the same time, their experience of eating is governed by a simple principle: the faster you eat, the more you get. According to a dog's reality, the subjective experience of eating implies haste; the winner is the one who eats fastest. Finally, wolf packs share territory with each other, but not with other packs; a dog experiences a strange dog as a threat to territory, and responds accordingly. A stranger met on its own territory is not a threat, but a stranger on OUR territory must either be explicitly allowed by the pack leaders, or must be dealt with.

Cats, on the other hand, are pride animals. They hunt alone, share territory socially rather than cooperatively, and have the exclusive first opportunity to dine on their own kills. They share food when they choose to, usually with their mates or their own offspring, and if getting food is unreliable, they can often gain a survival advantage by saving some for later. Because they hunt alone, pointing is not part of their subjective reality. When you point at something, the cat simply sees you displaying your finger and interprets that as a social act, intended perhaps to express something; but the cat expects that whatever you're expressing will be social rather than information about the world. And when cats eat, only if they are hungry and very confident of tomorrow's food supply will they eat quickly; otherwise they'll be returning to their food at intervals throughout the day, or over several days. A stranger is a threat to a cat's own safety or to its social group, not a threat to the pride's exclusive control of territory, and strangers are dealt with in the same way no matter where they are encountered. And a kitten gets food when mama gives up on walking and lies down, so a kitten's instinct when it wants food is to get underfoot and make it hard for its food provider to walk.

Dogs and cats have a different subjective experience of the same objective reality, because down to the bottom of their brains, their subjective concept maps are different and support different sets of instinct and different modes of experiencing the world. Therefore I consider them to have a different kind of consciousness from each other, or for that matter from ourselves, although we can understand each other fairly well.

Human consciousness is very much about our intelligence; intelligence, after all, is our primary method for dealing with the world. We think of consciousness in terms of intelligence, but that isn't necessarily a characteristic of consciousness in general. Other creatures, if they thought of consciousness at all, would probably think of it in different terms.

But how far down the intelligence scale does consciousness, as a concept, go? Is there a subjective experience of being, for example, grass? I'd answer with an unqualified no. Of being a clam? I doubt it. Of being a cat, or a dog, or a rat? Well, certainly. Cats and dogs and rats make decisions about how to deal with the world, according to an understanding of both the world and their own abilities within it formed by experience, to meet a set of basic needs. They have real-time sensory feedback about the effects of their own actions on the world. They choose among both learned and instinctive strategies for meeting those needs. They evaluate options, and can reach different decisions when presented with equally valid choices. When they experience pain they map it onto some aspect or condition of the experienced world and decide what to do about that condition, in addition to responding by simple reflex to draw away from the direction the pain comes from.

In some ways this merely regresses the question to what I mean by 'an understanding of the world,' which is also subjective and resistant to definition. But there are a few hypotheses I'm willing to accept as working theories until I know enough to formulate experiments to test them. Consciousness, I suspect, must be rooted in:
  1. experience;
  2. knowledge of the world and of one's own abilities in the world;
  3. a set of needs that must be met;
  4. different potential strategies for meeting those needs;
  5. decisions about which strategies to pursue and how;
  6. stimulus or input that includes the effects of one's own actions on the world, and whether the state of the world meets one's needs.

So, whatever else consciousness is, consciousness as I understand it cannot exist in an agent powerless to affect its world, or in an agent without needs that must be met, or in an agent that does not get feedback about what its actions do in or to its world and whether it can bring about states of the world that cause its needs to be met.
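To pin down what I mean by those conditions, here is a minimal toy loop, assuming nothing beyond what I listed: a need, actions that change a world, and feedback about what those actions did. All of the names and numbers are my own invention for illustration; this is a picture of the necessary conditions, not a claim to implement consciousness.

```python
import random

class World:
    """A trivially simple world the agent can act on and observe."""
    def __init__(self):
        self.food_here = False

    def apply(self, action):
        if action == "forage":
            self.food_here = random.random() < 0.5
        elif action == "eat":
            self.food_here = False
        return {"food_here": self.food_here}  # feedback: the state the action produced


class Agent:
    """Has a need (hunger), chooses among strategies, and sees what its actions did."""
    def __init__(self):
        self.hunger = 5

    def choose(self, obs):
        # decision among strategies, driven by the need and the observed world
        if obs["food_here"]:
            return "eat"
        return "forage" if self.hunger > 0 else "rest"

    def run(self, world, steps=12):
        obs = {"food_here": world.food_here}
        for _ in range(steps):
            action = self.choose(obs)
            could_eat = obs["food_here"]
            obs = world.apply(action)            # act on the world, get feedback
            if action == "eat" and could_eat:
                self.hunger = max(0, self.hunger - 1)   # the need gets (partly) met
            print(f"{action:7s} food_here={obs['food_here']} hunger={self.hunger}")


Agent().run(World())
```

The point of the sketch is only that every element on the list above is present and cheap to build; whatever else consciousness needs is clearly not captured here.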

Of course this is philosophy, not science. I have only convinced myself of something by examining what I believe about it; I have not formed a theory that I have a definite means of testing against the world. And even if these conditions are necessary for consciousness, I have no real confidence that they are sufficient. I am convinced that an agent which lacks these things will never be conscious, but I cannot say with confidence that an agent which has them is, or can be, conscious; it may still lack other things that I don't yet have the concepts to express.

And I have not addressed the question of whether such a consciousness would be 'real', 'simulated', or 'approximated', which is a question I consider fatuous. It is my contention that there is nothing the human brain does that cannot be explained by the theory that it is made of atoms interacting according to the laws of physics. Humans, and cats and dogs and rats, are an existence proof that consciousness can be realized in a purely physical instrumentality. And if that is true, then there is no reason to believe that a physical instrumentality made of protein and water is somehow privileged over one made of silicon and metal.
#2
Have you looked into the current understanding of consciousness in neurosciences? It's obviously far from complete and there are many unanswered questions (from the hard problem to the exact relationship between intelligence and qualia) but it would be a useful starting point. I'd particularly recommend looking into the work of Daniel Dennett if you haven't already.
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#3
(12-16-2015, 08:44 AM)Rynn Wrote: Have you looked into the current understanding of consciousness in neurosciences?


I have. They've moved beyond philosophy to observation, but as yet they don't have sufficient means to formulate meaningful experiments, or even really to say what their observations mean in terms of qualia. I've been climbing all over the literature in neuroscience, the literature on theories of mind in philosophy, and the literature on artificial neural networks.

The current neuroscientific observations show clearly that when such and such happens, so-and-so part of the brain is in use. They identify a 'reward pathway' which appears to be associated with pleasure, for example. They observe patterns in neural excitation which are correlated with apparent expectation and disappointment. They observe which parts of the brain are responsible for interpretation of sensory information, and identify some very regular patterns in those parts of the brain. All of which is good work, but none of which explains how the subjective experience emerges. Indeed, we still lack the concepts to say how subjective experience arises at all, because we still don't have a real definition for subjective experience.

(12-16-2015, 08:44 AM)Rynn Wrote: It's obviously far from complete and there are many unanswered questions (from the hard problem to the exact relationship between intelligence and qualia) but it would be a useful starting point.

Agreed, it was very useful. But that 'hard problem of consciousness' is still there. Electromagnetic radiation at 700 nm is one concept, and 'red' is a completely different one. They may pick out the same physical phenomenon or prompt, but they are different concepts; one of them includes the idea that something exists which experiences it, and either can exist independently of the other.

I think consciousness doesn't exist until there is a context, purpose, or goal to which the information one processes can become relevant. A context provides a foundation, or organizing principle, for the map of concepts and the world model that I think are a fundamental part of consciousness.
#4
To be honest I got frustrated with consciousness theories in my undergrad. It is fascinating, and obviously it has informed a huge part of OA (AI rights and the vote, for instance) and has some profound real-life consequences. But there is no good idea I'm aware of that addresses the hard problem. All proposals just redefine consciousness, but don't account for qualia. Your context proposal, for example: what is it about that system that generates qualia, compared to other systems? And more importantly, how would one test for it?

From a more practical and empirical standpoint I think it is more useful to study the brain, in particular taking an empirical approach to comparing it during conscious and non-conscious periods. Given that that is ongoing work, I've resigned myself to a long wait for an answer.
#5
(12-16-2015, 09:29 AM)Rynn Wrote: To be honest I got frustrated with consciousness theories in my undergrad.

Me too. Most of those theories seemed to me very groundless, very much suppositions in the absence of knowledge.

(12-16-2015, 09:29 AM)Rynn Wrote: All proposals just redefine consciousness, but don't account for qualia. Your context proposal, for example: what is it about that system that generates qualia, compared to other systems? And more importantly, how would one test for it?

I think that the subjective experience of desiring things is a fundamental symptom of consciousness, and that qualia emerge as a means by which brains organize information about the world pursuant to meeting those desires. In the absence of a goal, information is meaningless. But when information is meaningful, it's meaningful because survival, or meeting one's needs, depends on having a way to interpret, relate, and integrate it.

As for how to test for it, or even how to distinguish qualia from mere information? That's difficult. I suppose that qualia are much more integrated into the way everything else works, while information is a set of symbols that an intelligence can manipulate to the degree that intelligence is present. So if I were testing for whether or not something were 'qualia', I'd probably be looking for the degree of its integration with the rest of the system.
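As a very rough picture of what 'degree of integration' could mean operationally, suppose we could log the activity of two subsystems as binary streams; their mutual information is one crude proxy for how integrated they are. Everything here is my own invention for illustration, and it is emphatically not a qualia detector.

```python
# Crude sketch: mutual information between two logged activity streams as a
# stand-in for "integration". Illustrative only.
from collections import Counter
import math

def mutual_information(xs, ys):
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi

coupled     = ([0, 1, 0, 1, 1, 0, 1, 0], [0, 1, 0, 1, 1, 0, 1, 0])
independent = ([0, 1, 0, 1, 1, 0, 1, 0], [1, 1, 0, 0, 1, 1, 0, 0])
print(mutual_information(*coupled))      # 1.0 bit: the streams share all their structure
print(mutual_information(*independent))  # 0.0 bits: no shared structure
```

A real test would have to look at far more than two streams, and at causal rather than merely statistical coupling, which is where my concepts currently run out.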

But to someone else, 'qualia' may not mean what I think it means; as a subjective concept, it resists definition.
#6
I'd like to offer an observation that may help this discussion, one related to the Turing test. (What some refer to as 'true intelligence', and consciousness, are IMHO at least very similar concepts, if not identical.)

It is impossible, from the outside, to determine for sure whether something you are interacting with is conscious. You know that you are conscious, but it is logically possible that everyone else is a soulless zombie that happens to be extremely cleverly programmed. And you have no way of knowing whether someone else's qualia are the same as yours, even if they refer to the same object.

However, interacting with other humans is much easier and more effective if one assumes that they are conscious. I presume that the same would be true if dealing with nonhuman sophonts; unfortunately, one can't be sure because it is far from certain that there are currently any such on Earth. (At the moment, possible candidates include dolphins, orcas, the "great apes" and possibly elephants.)
#7
(12-16-2015, 07:48 PM)iancampbell Wrote: However, interacting with other humans is much easier and more effective if one assumes that they are conscious. I presume that the same would be true if dealing with nonhuman sophonts

Sure, from a legal/ethical standpoint it makes sense that if you are reasonably unsure whether something claiming to be sophont really is sophont, you should err on the side of caution and treat it as such. Better to waste time treating an object as a sophont than to abuse a sophont by treating them as an object.

Having said that, in OA we assume that understanding of consciousness/sophonce (what it is, how it is generated, etc.) is far more advanced, which makes sense given their millennia of work on AI, neural engineering, uploading, and so on. So in principle an OA civ could "open up the head" of a thinking object and tell you whether it was sophont or just good at pretending to be.
#8
Rynn - I'd also like to put forward the notion that sapience, or the lack of it, isn't the whole issue when it comes to dealing with nonhuman sentients, either. Dogs are not normally thought to be even minimally sapient (AFAIK), but that doesn't mean that torturing one to death for no reason is acceptable behaviour. It may not be acceptable even if you have a good reason for doing it.
#9
Sure, most cultures have a sliding scale of rights when it comes to sentience. There's no reason why OA civs wouldn't be the same.
#10
Right. Dogs are sentient, not (as we think of it) sapient. That said, they're a heck of a lot closer to sapient than clams. I see intelligence as a question of degree. Sentients have qualia, at least for purposes of my working assumptions. They have emotions, feel pleasure and pain, desire things and desire to avoid things, and in general have reasons to care about the world. Any time you abuse a sentient creature, you accept some ethical responsibility for it, because you screwed up something that can feel bad about it, or feel pain.

Sapients? Not necessarily. Things that solve problems by manipulating symbols - "Intelligence" as we understand it - can exist without necessarily having a subjective experience, and such a thing could be abused, or shut down, without a qualm.