
Predictability of behavior
Empirical and theoretical issues in the predictability of human behavior

Introduction

No curious person still lingers

on the mystery of our bodies' origins. Where once our best explanation of the human form drew a line between the natural and the divine, we now see a continuum of evolved creatures. But few of us, even among those who accept our natural origins, are ready to abandon all forms of the élan vital and see a human being as nothing more than a bundle of mechanisms. Most still cling to a dualistic view in which, through means unspecified, we somehow remain metaphysically autonomous agents.

Galen Strawson put it this way: "Almost all human beings believe that they are free to choose what to do in such a way that they can be truly, genuinely responsible for their actions in the strongest possible sense; responsible period; responsible without any qualification; responsible sans phrase, responsible tout court, absolutely, radically, buck-stoppingly responsible; ultimately responsible, in a word, and so ultimately morally responsible when moral matters are at issue."

If human beings are not free in that way, if our brains are the purely biological machines they give every evidence of being, built by genes, sculpted by development and life history, then what are we doing when we assess a person's status as a rational agent, grade their competence to make important decisions, or hold them responsible for their actions? How does our understanding of the brain bear on how we treat people and deal with each other socially? Neuroscience is more than on its way to resolving the conditional: it has arrived. We are not free agents. In this article, we will review some of the evidence leading to that conclusion and begin to address some of the questions it raises.

Philosophical perspectives

The sort of freedom Strawson describes is called libertarian free will. Most people probably find it so intuitively obvious that they would flatly reject any alternative as preposterous. That confidence is hasty, because even without leaning on neuroscience, problems emerge rather quickly. For one thing, your apparent freedom is not absolute: do not think about zebras for the next fifteen seconds. That you cannot help doing so shows that environmental stimuli (or an author's perversity) can override what you take to be absolute control over your own thoughts.

Second, consider David Hume's (1739) Treatise of Human Nature, in which he argued that free choices are not uncaused. Such choices (free here meaning unforced, not coerced by other agents or circumstances) are not usually capricious, but rather careful deliberations based on desires, intentions, goals, prior experience, and so on. But this is equivalent to saying that if a third party knew your desires, intentions, etc., and also knew that you were acting according to them, then your actions would be, to the same degree as the perfection of the third party's knowledge, predictable. The alternative to predictability is randomness, but an overriding commitment to random acts is not the kind of free will we think we have. If, in order to be free, we must make only those choices that have absolutely no correlation with anything we want, know, or believe, then freedom, it seems, has been relegated to the basest form of chaos. The more schizophrenically one behaved, the more free one would be. Indeed, the mere fact of wanting something constrains our freedom, not only because it biases our behavior in predictable directions, but also because we do not seem to be free to choose what we find pleasing. As Schopenhauer said, you may be free to do what you want, but you are not free to want what you want.

As a corollary to Hume's argument, one can see quite clearly a first-cause problem: if we were ultimately responsible in this way, then we would stand, as it were, at the beginning of causality. We would be causeless causes (causa sui), origins unto ourselves, or, to philosophers, agents of originative responsibility. An axiom of methodological naturalism (of science itself) is that every phenomenon, every effect in the universe has

a cause. Every one, that is, except for the wayward decisions of a particular Earth-bound primate. Either the presumptuousness or the illogic of that position alone ought to constitute sufficient grounds for skepticism.

However reasonable these rebuttals may be, philosophers have a special talent for squeezing the maximum possible confusion from any given problem. Is there any scientific problem with free will? Are there any data?

Neurophysiology of motor intentions

Benjamin Libet (1983) conducted the seminal experiment. Human subjects outfitted with an EEG cap were asked to watch a rapidly moving clock hand and move a finger whenever they wished. Once they moved, they indicated where the clock hand had been when they first became aware of their intention to move. This moment turned out to average about 200 msec before motion onset. Libet's EEG recordings, however, showed that a readiness potential (a discernible electrical signal in the cortex) had begun about 1400 msec before the movement, thus more than a second before the subject reported a conscious intention to move. While the length of that duration depends on experimental particulars, the basic ordering, unconscious preparation first, followed by intention, has been replicated many times. This clearly suggests that conscious intention is not the proximal cause of behavior.
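The arithmetic behind that claim is simple but worth making explicit. A minimal sketch, using only the average timings quoted above (the exact values vary across studies and are used here purely for illustration):

```python
# Timeline of the average Libet-style timings quoted above, in milliseconds
# relative to movement onset (t = 0). The exact values vary with
# experimental particulars; these are the averages reported in the text.
events_ms = {
    "readiness potential onset": -1400,    # unconscious cortical preparation begins
    "reported conscious intention": -200,  # the subject's clock-hand judgment
    "movement onset": 0,
}

# Interval during which the act is being prepared without any reportable intention.
lead_ms = events_ms["reported conscious intention"] - events_ms["readiness potential onset"]
print(f"Unconscious preparation precedes reported intention by {lead_ms} ms")  # 1200 ms
```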

Lau et al. (2004) extended Libet's results by upgrading from EEG to fMRI. In the time leading up to movement, neurons in the presupplementary motor area (pre-SMA), dorsal prefrontal cortex (DPFC), and intraparietal sulcus became active. The precise functional role of these areas has not been firmly established (see Roskies, 2010 for a review), but it appears that the pre-SMA may be more closely linked to actual execution, while the other two areas collaborate in generating forecasts of motor plans. More on this below.

It is well known that brain injury can impair our ability to execute a wide variety of cognitive tasks. Motor and speech problems, for example, are usually evident to the patient; memory loss and cognitive impairment perhaps only to others. Discoveries of brain correlates of volitional movement, such as those described by Lau and others, raise the possibility that lesions to those areas could alter the phenomenology of volition and free will itself. To that end, Sirigu et al. (2004) found that patients with parietal lobe lesions could accurately assess the time at which a movement had begun, but had impaired awareness of their own intention to move. Rather than the 200 msec warning, their intention to move entered consciousness only about 50 msec prior to movement.

Desmurget et al. (2009) showed an even subtler functional dissociation. Human patients undergoing awake brain surgery had electrodes inserted into their inferior parietal lobes and frontal premotor cortex. Weak stimulation of the parietal lobe actually induced feelings of intending to move; stronger stimulation enhanced this feeling to the point that patients claimed that they actually had moved, when in fact they had not. Stimulation of the premotor cortex did the reverse: it induced actual movement, but without the patient's awareness. In related studies of healthy subjects (no surgical intervention), Ammon & Gandevia (1990) and Brasil-Neto et al. (1992) successfully altered free choice of which hand to move by interfering with the supplementary motor area via transcranial magnetic stimulation (TMS). Subjects' choices were strongly influenced, but the subjects themselves disavowed any such influence.

Roskies (2010) reviews a number of primate studies in an experimental paradigm first described by Newsome et al. (1989). A monkey views a field of moving dots and moves a joystick in the direction the majority of the dots are moving. Correct answers are rewarded. Neurons in the lateral intraparietal area (LIP) play a key role in this task. When active, they signal the subject's choice of an action plan (e.g. move the joystick to the right) and the expected hedonic value of that action (e.g. how much reward the subject anticipates as a consequence of the behavior). By stimulating LIP, experimenters can bias the response and cause subjects to (for example) select a rightward joystick movement when in fact the dots were not coherently moving to the right. Roskies notes that LIP is probably not unique in this regard and may be but one example of a whole class of modality-specific (in this case, visuomotor) decision-making circuits.
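To make the structure of such a circuit concrete, here is a minimal toy sketch of a modality-specific decision process of the sort just described: noisy evidence from dot-motion coherence accumulates toward one of two actions, and an added stimulation term can bias the outcome. The function, parameters, and numbers are illustrative assumptions, not a model of actual LIP physiology.

```python
import random

def choose_direction(coherence_right, stim_bias_right=0.0, n_samples=200, seed=None):
    """Toy evidence-accumulation choice between 'left' and 'right'.

    coherence_right: net rightward dot coherence in [-1, 1]; negative values
        mean the dots are drifting leftward (illustrative units).
    stim_bias_right: additive term standing in for stimulation of the ensemble
        favoring rightward plans (a hypothetical knob, not real units).
    """
    rng = random.Random(seed)
    evidence = 0.0
    for _ in range(n_samples):
        # Each noisy sample nudges the decision variable toward the true motion.
        evidence += coherence_right + rng.gauss(0.0, 1.0)
    evidence += stim_bias_right * n_samples
    return "right" if evidence > 0 else "left"

# Weakly leftward stimulus, but the stimulation term biases the circuit rightward.
print(choose_direction(coherence_right=-0.05, stim_bias_right=0.2, seed=1))
```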

There may be modality-independent circuits as well: neurons in dorsolateral prefrontal cortex (DLPFC) can also be used to predict behavior, and also have firing rates that vary with expected reward value.

A general, though admittedly not universal, pattern begins to emerge from these and related studies. With respect to bodily movement, neural ensembles in the dorsal prefrontal cortex (and perhaps parts of the parietal cortex) compete with each other to determine the action plan that will maximize expected hedonic return. During this phase, subjects are not aware of having formed an actual intention to take action. Once this competition is complete, the winning motor program is sent both to the supplementary motor area and to the inferior parietal cortex. The motor area activation produces the movement but no awareness of intention. The parietal activation generates a prediction about what the body position (and perhaps, too, a broader representation of the spatial arrangement of the world around us) will be once the motor program is complete. This prediction of the way the world will be, once the action is complete, is identical with the phenomenology of intention to move.

This seems, in hindsight, not altogether unsatisfying. The parietal lobes in general provide a variety of spatial processing services. Inasmuch as an intention to move is a forward-looking model, an envisioning of a goal state, and inasmuch as we are talking about physical movement, it seems reasonable that a clear spatial picture of that goal state could be closer to intention than, say, the means by which we will achieve it. I intend to exit the room. I don't intend to lift my left knee, flex my quadriceps, and so on. The intention is the goal state, and the goal state is an updated spatial arrangement of body- and world-state. Since such representations are the business of the parietal lobes, the model seems at least plausible.
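As a way of fixing ideas, here is a minimal sketch of the provisional model just described, under the simplifying assumption that candidate motor programs can be scored by a single expected-hedonic-value number. The plans, values, and predicted outcomes are illustrative placeholders, not claims about actual cortical computation.

```python
# Toy rendering of the provisional model sketched above: prefrontal ensembles
# compete on expected hedonic value; the winner drives both the motor system
# (movement without felt intention) and a parietal forward model whose
# predicted end state is what reaches awareness as "intention".
# All plans, values, and predicted outcomes below are illustrative assumptions.

candidate_plans = {
    "stay seated": 0.1,        # expected hedonic value, arbitrary units
    "walk to the door": 0.8,
    "stretch arms": 0.3,
}

def compete(plans):
    """Return the plan with the highest expected value (the winning ensemble)."""
    return max(plans, key=plans.get)

def forward_model(plan):
    """Parietal-style prediction of the body/world state once the plan completes."""
    predicted_end_states = {
        "stay seated": "body unchanged, still in the room",
        "walk to the door": "standing at the doorway, room behind me",
        "stretch arms": "arms extended, posture otherwise unchanged",
    }
    return predicted_end_states[plan]

winning_plan = compete(candidate_plans)       # the unconscious competition
felt_intention = forward_model(winning_plan)  # what enters awareness
print(winning_plan, "->", felt_intention)
```

On this sketch, the reported "intention" is the forward model's predicted end state, not the competition that actually selected the plan.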

Psychological mechanisms: Why do we feel free?

As disconcerting as it may be to confront the possibility that we may not be as in charge of ourselves as we thought, there seems to be no getting around the fact that even if we are not in charge, we feel as if we are. Under normal conditions, we do not feel like passive observers of our own behavior (as we do with, for instance, the patellar reflex), coerced by external forces, or compelled to move on pain of internal discomfort. Rather, we feel that we are in the driver's seat of our bodies. At each moment we can turn or accelerate in whichever direction we like. Having chosen, we feel in retrospect that we could just as easily have done otherwise.

As the results above reveal, however, at least some of these impressions are mistaken. Perhaps we might have done otherwise, but if so, it wouldn't have been conscious choice that made the difference. Our intention to move is not the proximal cause of our movement. So why do we feel that it is?

One answer may follow from the competing neural ensembles model described above. In that model, discrete populations of neurons in the frontal cortex simulate alternative behaviors. The ensembles then compete; one motor program wins and is acted on. Haggard (2005) suggests that the brain, in the interests of computational efficiency, allows the representations of all losing motor programs to dissipate. From an attentional perspective, it would be overwhelming to be aware of all those things that we might have done but didn't. While the neural ensembles representing unused programs are shut down, and therefore lost to conscious awareness, the winning program can be, and possibly must be, kept active so as to compare the predicted motion with the actual motion and thereby generate an error signal used in learning and online behavioral correction. The outcome of this programmatic pruning is a reliable sequence: a single motor program followed by movement. We interpret such consistent temporal correspondences as causal chains.

In a related model, Wegner (2005) asks us to imagine a magical process by which we could always know when a particular tree branch was going to move, and in which direction. Further assume that, by the same magic, we would always happen to be thinking about the tree branch's motion just before the event. Observing the reliable sequence of events (our thought, followed soon after by the real motion of the branch), we could scarcely fail to conclude that our thinking of it was the cause of the motion. And yet the setup can stipulate from the beginning that no such causal connection exists. Wegner proposes that attending to one's

model of oneself before a behavior gives rise to the sense of causal agency. To unpack the metaphor, we need a source of magic, which could spring from the brain's representation of time. We know from other examples that the brain plays tricks in this way. Given neuronal conduction velocity, for example, we should see our foot touch the ground before we feel it, but we don't. The events seem simultaneous. This could be because our brain takes the sensory signal and, as it were, spoofs the timestamp.

Inferences of causal agency can be affected through interventions much simpler than the neural-level ones mentioned above. Gibbons (1990) asked subjects to imagine rushing down a narrow hotel hallway and bumping into a housekeeper who is backing out of a room. If subjects were asked this question while facing a mirror or hearing their own voices on tape, they became more likely to say that they were the cause of the collision. This is probably not trivial, because judgments of agency imply a world model and chains of causes and effects. What Gibbons showed was that models of one's own agency could be changed via seemingly minor cognitive manipulations.

The moments in which we are attending to ourselves as agents in the world are of special importance in evolutionary terms, because they are moments when the process of learning about our ability to influence our surroundings is in full force. Our attention is more engaged in these moments than it is when we execute a well-worn routine. This attentional asymmetry produces a textbook example of confirmation bias: the moments when we are most alert, and therefore most likely to remember, are exactly those moments in which we are most likely to experience the illusion of causing our own behavior. In hindsight, our own causal agency stands out as a key feature of conscious existence.

Implications and applications

Surely most audiences outside the particular psychological and neurophysiological subfields discussed here are unaware of these findings, so it should not surprise us that they continue to see themselves as buck-stoppingly free. Suppose, however, that they

were aware of these findings. What would be the result? One might expect lukewarm reactions, not least because weakening the causal role of intention seems to have disturbing implications for the law, criminal law in particular, and other areas of applied ethics. As we will see, however, some of these concerns are dispatched by reference to de facto current practice. Moreover, the intuition that libertarian free will is necessary for attribution of responsibility stands, on deeper reflection, in direct conflict with reasonable definitions of a functioning system of justice. But let us start with a summary of current practice.

Philosophers of criminal justice make a distinction between consequentialist and retributivist motivations for punishment. Consequentialist motivations include deterrence (dissuading would-be criminals), incapacitation (preventing future bad acts by a criminal already caught), rehabilitation (turning a criminal into a productive citizen), and restoration (restoring something to the victim(s) of a crime). These are all practical concerns motivated by the desire to create a functioning society, and it does not take a great imagination to conceive of such efforts as finding harmony with even strongly mechanistic views of the individual. Deterrence, for example, requires nothing more or less than a careful study of behavioral conditioning. Understanding the principles involved and being able to apply them to a desired effect is, to be sure, an exceedingly complex problem, not least because the governments of free societies cannot control most of the influences on a given person. But this is a practical concern. The conceptualization of a person as a bundle of mechanisms is well suited, as far as it goes, to a science and technology of deterrence. The other three consequentialist motivations can be similarly grounded.

Retributivist motivations for punishment, in contrast, are based on the idea of moral redress. In that sense, they look backward in time and dispense punishments proportional to the infraction. Greene & Cohen (2004) summarize the retributive philosophy as one under which we legitimately punish to give people what they deserve based on their past actions, in proportion to their "internal wickedness" (to use Kant's (2002) phrase), and not, primarily, to promote social welfare in the future. Despite its

seemingly metaphysical basis, retributive justice is the dominant philosophy in many, if not most, countries, including the United States, partially because rehabilitation in practice has proven so difficult. Unlike consequentialism, retributivism seems less easily reconciled with the picture of man as a bundle of mechanisms, lacking in originative responsibility. This appearance, however, loses some significance when put into context with the way criminal prosecution actually operates. As summarized by Morse (2007): The law does not treat people as non-intentional creatures or mechanical forces of nature. The law treats persons, including people with mental disorders, as intentional creatures, agents who form intentions based on their desires and beliefs. Mental health laws treat crazy people specially not because the behaviors of crazy people are mechanisms, but because people with mental disorder may lack sufficient rational capacity in the context at issue. In other words, they were or are not responsible for their legally relevant conduct.

The central question, then, is not whether a person is buck-stoppingly free, but whether they have a general rational capacity sufficient to determine the likely consequences of their actions. So although Morse explicitly disavows the mechanistic view of a person, the role he assigns to intentionality entails no commitment to originative responsibility. Indeed, the description of intentions as things based on desires and beliefs suggests a view in which intentions constitute plans that will plausibly bring about the realization of whatever goals the person may have. If a person lacks the capacity to accurately forecast the likely results of their actions, or to understand that those results are illegal, they become candidates for exclusion from legal responsibility. In the United States, many insanity defenses, including the widely used M'Naghten Rules, express logic of this form. The same argument can be applied to cases in which the defendant is too young to have developed the requisite cognitive capacities (Roper v. Simmons).

In this light, the neurophysiological understanding of behavioral flow becomes not only not inimical to the assignment of legal responsibility, but downright friendly. One can imagine, for instance, using our understanding of the neurophysiological processes described above to probe a defendant's actual ability to forecast consequences, rather than relying on measures as indirect as holistic psychological assessments obtained via interview.

One final point. Suppose you endorse retributive justice because you believe a criminal possesses originative responsibility for his actions. You cannot admit into that picture the notion that he did what he did because he lived in oppressive poverty, was abused as a child, and so on. The word "because" indicates that one is about to supply a set of causes that sufficiently explains an effect. Under originative responsibility, there are no antecedent causes, or at least no sufficient ones: choice stands as the ultimate go/no-go arbiter of behavior. Presumably you, like the criminal, possess originative responsibility, in the sense that external events do not deterministically cause you to think in any particular way. What, then, is your responsibility in wishing that punishment be meted out to the criminal? It seems obvious that your desire to punish follows from the commission of the crime. But beware: said crime cannot be a sufficient cause for your desire for justice. If it were, then your desire would be psychologically predetermined, and as a retributivist, you deny just that kind of determinism. The crime can push you only so far; ultimately, your decision to punish or not punish has no sufficient cause. Its ultimate origin is your own free-floating will. For your punishment to be justified, it must not be arbitrary. But if it is not arbitrary, it is determined. Retributivists are caught in a Humean trap. They must choose either arbitrariness or causal determination as the master of human behavior.

Consider a hypothetical: An active, healthy 35-year-old man is injured in a car accident, paralyzed at C1, and dependent on a feeding tube. He becomes severely depressed and expresses his wish that the feeding tube be removed and he be allowed to die.

One assessment of this patient's condition might regard such feelings as acute and understandable, perhaps even predictable, consequences of the injury. On that view, the patient's suicidal ideation is not under his volitional control but rather is the result of psychological and neurological causes which we understand at least in part. Being able to explain things in this way seems to strip from the patient some sort of metaphysical ownership of his own thoughts. These are not his feelings, but organic consequences of external events. When he says he wants to die, therefore, our instinct may be to hear the depression talking, judge the man as temporarily not in possession of sufficient rational capacity, and on that basis deny his request to be left to die.

In medicine, both competence and informed consent rest upon the same sort of rational capacity as that required by law when determining responsibility. One widely used standard is Gillick competence: a person is judged competent when they have sufficient comprehension and intelligence to understand fully what is proposed. This test may be applied to minors, adults with mental illness or brain injury, or those making decisions under conditions (e.g. extreme stress) that compromise their mental capacities. It is no coincidence that these are the same populations and conditions treated as special cases in law more generally.

Under the provisional model of decision-making described earlier, neural populations in the dorsolateral prefrontal cortex (and elsewhere; anatomical details are not essential here) were identified as likely candidates for computing the consequences of a behavioral plan. In the hypothetical case we are now considering, the man's DLPFC is still computing something: it is not as if he cannot make any forecasts whatever. Indeed he can make them, has made them, and has chosen a plan that to him seems maximally rewarding. Our problem with his decision seems at least partially due to the fact that we find it so predictable: he has revealed himself not only as a bundle of mechanisms, but as one whose structure is disappointingly transparent. This offends our intuitions about ourselves and the amount of computational complexity we expect of normal rational processes. If we are to

be mechanisms (something barely tolerable anyway), we should at least be complicated mechanisms. Only if this man's motives were more obscure would we become comfortable that he was indeed himself. Whether this sort of thought process really does play, or ought to be allowed to play, a role in clinical determinations of competence is a question beyond the scope of this paper.

There is a curious connection here to a rather different body of literature, but one rich enough to warrant mention. The field of complexity science studies, among other things, the conditions under which systems can compute. A system composed of interconnected elements that remain utterly static, no matter what inputs are provided, is clearly not capable of computing, no matter how those elements may be wired together. Conversely, a system composed of elements with internal states so chaotic that they maintain no correlation with the past is also incapable of doing interesting calculations. Langton (1990) applied to such systems the language of statistical mechanics in general, and phase transitions in particular. The first system is seen as frozen, a solid. Such systems are perfect for storing memories, since their internal states are so non-volatile. The latter system is seen, metaphorically, as a hot gas. In between are liquids: thermodynamically loose enough to allow changes in one element's state to propagate to other elements, that is, information flow. In the hot gas, information flows so easily that small perturbations, even those caused by noise, propagate quickly and threaten the elemental stability required for long-term memory. The upshot of this metaphor is that interesting computational systems seem to exist at some kind of computational phase transition, balanced between the needs of stable memories on the one hand and information transmission on the other.
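Langton's framing can be made concrete with a small simulation. The sketch below is only a loose illustration, not a reproduction of Langton's 1990 analysis: it builds random rule tables for a one-dimensional, two-state cellular automaton in which a tunable parameter (standing in for Langton's lambda) sets the fraction of neighborhood patterns that map to the "alive" state, then prints a few runs so the frozen, gas-like, and intermediate regimes can be eyeballed.

```python
import random

def random_rule(lam, rng):
    """Random rule table for a one-dimensional, two-state, radius-1 cellular
    automaton. Each of the 8 neighborhood patterns maps to state 1 with
    probability lam, a rough stand-in for Langton's lambda parameter."""
    return {pattern: 1 if rng.random() < lam else 0 for pattern in range(8)}

def step(cells, rule):
    """Apply the rule to every cell, with periodic (wrap-around) boundaries."""
    n = len(cells)
    return [
        rule[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

def run(lam, width=60, steps=15, seed=2):
    rng = random.Random(seed)
    rule = random_rule(lam, rng)
    cells = [rng.randint(0, 1) for _ in range(width)]
    print(f"lambda = {lam}")
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)
    print()

# Low lambda tends to freeze (solid); lambda near 0.5 tends to look like
# noise (gas); intermediate values are where more structured, liquid-like
# dynamics are most likely to appear, though any single random rule may not
# show them.
for lam in (0.1, 0.3, 0.5):
    run(lam)
```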

Kauffman (1993) suggests that life itself exists at such a phase transition, and that evolution is tuned to maintain the balance. Living things too computationally frozen (e.g. those with no mutation) cannot evolve, while those too liquid cannot maintain stable identities or store the adaptive solutions they discover. It seems just possible that our intuitions about the defining characteristics of consciousness obey some of these same rules: solid enough to be lawful, rational engines, yet liquid enough to react to changing stimuli and defy predictability. Minds that stray too far to either side are treated specially.
