Free Will and Consciousness

April 4, 2012

TLDR:  I suspect consciousness is what it feels like for an algorithm to be self-aware.  Free will is the special case where the algorithm is a decision-making agent.

———-

Free will in a deterministic universe is nonsense.  When presented with choices A (to eat) and B (or not to eat), what does it mean to say you could have chosen either?  If, at the end of the day, in this universe, you only chose A (as I, in fact, did), in what sense could you have ever chosen B?  Even if there are splitting multiverses, if you don’t get to choose the splitting behavior and distribution, you had no free will.

You could imagine the universe is non-deterministic, though, and that you have some weird form of control over the input randomness, which we consider to be your will and your ability to make decisions.  Of course, the sense in which you have “control” is limited – the decision process you use necessarily contains randomness coming from a source outside the universe that you exist in.  So there would be random holes in the universe where randomness pours in, and they happen to come in at points of a human’s decision process.  Other decision agents, like monkeys, might get access to this randomness also.

But Occam’s razor (the principle that I should trust the universe to have good aesthetic sense) tells me that it would be simpler for the randomness to be part of the universe itself, rather than tied only to this extraneous concept of life/consciousness.  In fact, we should first attempt to see whether consciousness and free will might follow from what we already know about the universe.

I think it’s no stretch of imagination for most people to think of ants as self-replicating (with evolutionary errors) programs – complicated decision agents that interact with the environment via sensory inputs that trigger certain sections of code.  Since humans and ants evolved from the same process, and in fact, from a common ancestor, it would seem likely that humans are simply massive programs, with better problem-solving abilities, more complicated ways of interacting with the environment, a more sophisticated decision-making process, more “emotion” buttons to press which alter program behavior, etc.  I think many people reject this obvious line of thought because of the feeling that they have free will, which lets them make decisions, or consciousness, which gives them experiences.

But is it so strange for a computer-simulated human to truly feel like a human?  More generally, can any algorithm feel conscious, and feel like it has the will to make decisions?  Well, a multiplication algorithm probably doesn’t “feel” like it is doing multiplication.  And a shortest-path algorithm probably doesn’t “feel” like it is finding shortest paths.  Let’s look at a decision-making algorithm, where the code does some calculations, and uses the result to decide which branch of an if statement to take, outputting either decision A or decision B as a result.  Did this algorithm feel like it was making a decision?  Our intuition says no – it was just part of the program!  But similarly, when we kick our leg upon having our knee hit in that one funny spot by a doctor’s rubber hammer, it feels like a reaction, not a decision – it was also just part of the program.  A calculator probably experiences a bunch of knee-jerk reactions that amount to things like multiplication of ten-digit numbers.
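To make that concrete, here is a minimal sketch of the branching program described above (the function name and hunger threshold are my own inventions, purely for illustration).  It computes a value and takes a branch; nothing in it models itself, so there is nothing that could plausibly feel like choosing.

```python
def knee_jerk_decide(hunger_level: float) -> str:
    """A branching program with no model of itself.

    It does a calculation and takes one branch of an if
    statement -- a reflex, like a knee-jerk, not a choice.
    """
    if hunger_level > 0.5:  # made-up threshold
        return "A"  # decision A: eat
    return "B"      # decision B: don't eat

print(knee_jerk_decide(0.7))  # -> "A", reflexively
```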

So we should wonder – for humans, what is the difference between feeling and not feeling like you made a decision?  I suspect the answer has to do with self-awareness.  The decisions that don’t feel like part of our free will are precisely the ones we aren’t actively deciding about.  And intuitively, if I weren’t aware of myself as a whole, I wouldn’t feel conscious at all.  In fact, I think humans are perfectly capable of becoming less “conscious”.  For example, I suspect most people feel a lack of free will and consciousness while dreaming.  An exception is lucid dreamers, who are aware that they are dreaming.  Furthermore, when we are babies, we seem to be significantly less conscious and self-aware.

So the issue seems to be that the algorithms mentioned were unaware of what they were doing.  Let’s put ourselves in an algorithm’s shoes.  If you spent a year doing nothing but multiplying 10-digit numbers, and had no self-reflection, you indeed might feel exactly like a calculator, with no sense of free will or consciousness.  Now we get to try to fit an algorithm in our shoes.  Recall the program which was weighing options A and B.  Suppose it is also aware of its own interaction with the environment through its input/output behavior, and uses heuristics or even simulation to predict what would happen in the environment if it were to output A (it gets fat), and if it were to output B (it gets hungry).  As it thinks through counterfactuals about its own decisions, perhaps it feels somewhat like it has a choice!  More generally, it seems quite plausible that any program which is able to reason about its own existence might feel conscious.
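As a rough sketch of this (the world model, payoff values, and function names here are toy assumptions of mine, not anything from the argument itself), a self-aware version might carry a model of its own input/output behavior and simulate the counterfactual consequences of each possible output before committing to one:

```python
# Toy world model: the predicted consequence of the agent
# itself outputting each action (the values are invented).
WORLD_MODEL = {
    "A": -0.2,  # eat -> it gets fat (mildly bad)
    "B": -0.8,  # don't eat -> it gets hungry (worse)
}

def simulate_counterfactual(action: str) -> float:
    """Ask the agent's own world model: what would happen
    in the environment if *I* were to output this action?"""
    return WORLD_MODEL[action]

def self_aware_decide() -> str:
    """Weigh options A and B by simulating counterfactuals
    about the agent's own decisions, then pick the better one."""
    options = ["A", "B"]
    scores = {a: simulate_counterfactual(a) for a in options}
    return max(scores, key=scores.get)

print(self_aware_decide())  # -> "A": getting fat beats going hungry
```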

I think this is by far the simplest and most likely explanation for why we feel “consciousness” and “free will”.  I think there’s a 75% chance it’s correct (the rest being mainly structural uncertainty, and to some extent, the fact that I still find it amazing I have the experiences I do).  It also has some interesting implications, which I’ll talk about sometime.

Acknowledgements:  Ideas essentially come from this LessWrong post, and from conversations with friends.
