Bad philosophy can corrupt conclusions that are drawn seemingly straightforwardly out of scientific experiments. “Scientism” is one of those words that functions sort of like “cuck” or “RepubliKKKan” or “Christfag” in that using it often does more to signal allegiance to a group than it does to help progress conversations towards truth. In this essay, I want to give a very clear definition to the word “scientism”, followed by a very clear demonstration of a place where it does exist.
(Note: The theme of “scientism” was recently introduced in “Breaking (Down) Bad (Philosophy of Science)”).
“Scientism” is when someone: (1) conducts a scientific experiment, producing empirical results; (2) strains that empirical data through a lens of interpretation – a philosophical lens that requires philosophical defense or refutation – to produce what is ultimately more a philosophical claim than a purely scientific one; and then (3) pretends that this resulting claim involves no philosophy at all, and thus needs no philosophical defense, but has the full backing weight of the authority of Science Itself behind it, and is thus beyond any further argument. In this way, the fallacy of “scientism” allows philosophical premises to get smuggled in past security, and then pretends that the truth of those premises was thereby proven.
Nowhere do we find a clearer demonstration of this fallacy than in the experiments which are claimed to have “empirically” disproved the possibility of metaphysical free will.
Now, the supposedly ‘scientific’ debunker of free will may have valid philosophical reasons for rejecting the possibility of metaphysical free will’s existence. This possibility I will leave aside for now—the target of my argument is the false notion that his science, in and of itself, proves his philosophical conclusions true. The problem is that this only ever appears to him to be the case because he has interpreted his empirical findings through the filter of some philosophical assumptions, rather than others, in the first place—without owning up to it.
In other words, what makes someone who commits the fallacy of scientism a fraud is that he first claims to be able to convert water into wine, and then, when asked to demonstrate this magical ability, quietly pours wine instead of water into his water bottles in the first place (and is oblivious to the fact that he is doing so). When you pour wine into water bottles, it's no surprise that, after your demonstration of a magical chant, you end up with water bottles full of wine—and it's no surprise that, when you filter a scientific finding through philosophical assumptions, you end up at the close of your argument with something that "justifies" those assumptions' truth. In neither case has anything actually been "demonstrated."
In short, the only reason anyone can think that any “scientific” experiments so far have ever “scientifically” debunked the possibility of free will is because they actually have philosophical reasons for believing free will doesn’t exist which they aren’t owning up to honestly. These reasons may or may not be ultimately defensible, but if someone is trying to tell us that a scientific experiment has settled the question, they are simply smuggling their philosophy in past security illegitimately. In truth, the “scientific” experiments that have been conducted supposedly on the question of free will add nothing to the philosophical debate, and they have done more to distract us from the central questions than anything.
_______ ~.::[༒]::.~ _______
Before continuing, I need to establish what I mean when I talk about “free will.” Specifically, I need to make it clear that despite the protests that may come from some, I am going to talk about the sort of “free will” that says that right up until the moment in which I make a free conscious decision, nothing in the previous physical state of the Universe determines what my choice is going to be; and at the moment in which I make my choice, I determine what that decision will be.
Whether or not this is the sort of “free will” that most of us feel as if we experience is an empirical question. And the question of whether this is how our conscious experiences feel is separate from the question of whether we actually do have this type of freedom.
The term for views which hold that this kind of freedom is incompatible with determinism is "incompatibilism". "Compatibilists", by contrast, argue that the only kind of "freedom" that we either do want, or should want, is the kind of "freedom" involved when I choose to do what I want to do because I want to do it; and not, say, because someone is holding a gun to my head—even if my decision and my desire were absolutely set in stone and determined all the way back at the moment of the Big Bang, like so many falling dominoes.
While compatibilism basically names a single homogeneous position on the question of free will, "incompatibilists" are split into two enemy camps: those who believe that we do have this significant kind of freedom (called "libertarians"), and those who believe we do not (called "hard determinists").
In my view, hard determinists are at least honest about the fact that their claim has reason to be unsettling to many ordinary people (because we do feel as if we have the power to make determining choices that are not, themselves, determined, and something about how we see what it means to be human will in fact be disturbed if this is all just one big illusion), and they are willing to step up to the plate and argue that the consequences are worth it. "Compatibilists", on the other hand, are simply hard determinists who try to weasel out of owning up to and defending themselves in light of these consequences by denying—against the protests of anyone who claims otherwise—that anyone cares about the kind of freedom that would come from being able to make "metaphysically free" decisions at all.
The very fact that libertarians and hard determinists exist is all it actually takes to prove the compatibilists wrong: how can you claim that nobody really cares about the libertarian sort of free will when both people who agree with your underlying determinism and people who don’t are telling you that, as a matter of fact, they do care about it?
If that straightforward reasoning weren't enough, empirical investigations seem to have settled the question of whether this is how people feel once and for all. In the 2010 study Is Belief in Free Will a Cultural Universal?, Sarkissian and colleagues examined ordinary people's "intuitions about free will and moral responsibility in subjects from the United States, Hong Kong, India and Colombia." Their results proved conclusively that outside of the isolated halls of philosophy departments, the "compatibilist" take that no one cares whether their choices are determined or not is not the norm: "The results revealed a striking degree of cross–cultural convergence. In all four cultural groups, the majority of participants said that (a) our universe is indeterministic and (b) moral responsibility is not compatible with determinism…." Sarkissian and colleagues conclude that this research reveals "fundamental truth(s) about the way people think about human freedom."
Again, a hard determinist can describe the way that our conscious experience of decision–making feels just as clearly, accurately, and honestly as any libertarian, even as he turns around to deny that we actually have the kind of freedom we feel as if we have. In Free Will and Consciousness: A Determinist Account of the Illusion of Free Will, Gregg D. Caruso writes: "[C]ompatibilists cannot simply neglect or dismiss the nature of agentive experience. … [O]ur phenomenology is rather definitive. From a first–person point of view, we feel as though we are self–determining agents who are capable of acting counter–causally. … [W]e all experience, as Galen Strawson puts it, a sense of "radical, absolute, buckstopping up–to–me–ness in choice and actions". … When I perform a voluntary act, like reaching out to pick up my coffee mug, I feel as though it is I, myself, that causes the motion. We feel as though we are self–moving beings that are causally undetermined by antecedent events."
So why does Caruso conclude that things cannot be as they seem? Quoting from a review, the problem with belief in free will is that it is “committed to a dualist picture of the self. … [And it, therefore,] involves a violation of physical causal closure (pp. 29-42).”
In other words, the argument that free will is impossible rests on the claim that defending a dualistic view of consciousness in general is impossible. Notice that this is ultimately a philosophical argument, and not one that is supposed to be proven as the direct conclusion of a scientific study. In fact, Caruso begins addressing these considerations as early as page 15, while he doesn't begin to mention the scientific studies which are supposed to have addressed the subject until somewhere past page 100. Caruso's account is one in which someone cannot believe in free will "without embarrassment" because believing in it would require "giving up … atomistic physicalism".
As usual, the advocates of "atomistic physicalism" make no attempt to shoulder the burden of demonstrating that the hypothesis that human conscious experience is composed of nothing other than blind atoms, which themselves lack conscious experience and act blindly only as a passive response to inert causes, could even conceivably be capable of allowing human conscious experience to be what it is. To put it in my terms, their claim is the equivalent of claiming one can draw a three–dimensional figure on a two–dimensional board. Instead, they are content to simply demand that one can't possibly deny that hypothesis "without embarrassment" and then chop off anything about the nature of our experiences which that hypothesis isn't capable of explaining—no matter how debased and absurd the resulting picture of what it means to be a human being becomes.
Yet, as we’ve seen, the things we would have to chop off to make that hypothesis work end up including everything—because conscious experience quite simply couldn’t exist in the way that it irrefutably does if the “atomistic physicalist” were correct that the Universe at its root is made out of blind particles and forces, and nothing else, in exactly the same way that three–dimensional objects couldn’t exist if the world were a two–dimensional sheet.
My contention is that the only sane position one can hold is that consciousness itself is one of the things that the Universe is composed of “at its root” as well, and that we are free to posit that consciousness simply possesses properties like experientiality and intentionality as basic elements of what consciousness is, in exactly the same way that we are free to posit that electrons simply possess properties like spin and charge as basic elements of what an electron is—with no need of further explanation. All supposed ‘explanations’, after all, must stop somewhere. On the contrary, it is the “atomistic physicalist” who should be embarrassed to put forward the claim that one could even conceivably get qualitative subjective experiences, or intentionality, out of blind building blocks wholly lacking in either quality.
The existence of free will, unlike these, can at least coherently be denied in theory. But the arguments for throwing out the possibility of free will are identical to the arguments for throwing out intentionality, or subjective experience—and the existence of these features of consciously experienced reality can't be denied without blatant incoherency. Thus, the arguments used to deny the very possibility of free will fail in general, even if their conclusion does not collapse into incoherency in the specific case of free will itself—and there remains no in–principle reason to deny the possibility that metaphysical free will could exist after all. The only remaining question, then, is whether further considerations happen to rule out the existence of human free will specifically.
_______ ~.::[༒]::.~ _______
The story of the supposedly "scientific" refutation of the possibility of free will begins in the 1980s with a series of studies conducted by Benjamin Libet. Though now more than three decades old, these experiments still constitute the bulk of the "scientific" case against the plausibility of free will.
In Sam Harris’ 2012 book Free Will, he writes:
“The physiologist Benjamin Libet famously used EEG to show that activity in the brain’s motor cortex can be detected some 300 milliseconds before a person feels that he has decided to move. Another lab extended this work using functional magnetic resonance imaging (fMRI): Subjects were asked to press one of two buttons while watching a “clock” composed of a random sequence of letters appearing on the screen. They reported which letter was visible at the moment they decided to press one button or the other. . . . One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You then become conscious of this “decision” and believe that you are in the process of making it.”
Daniel Wegner is among the social psychologists most prominently known for continuing the line of experiments that aim to prove this general sort of idea. In his discussion of Libet's experiments in the 2002 The Illusion of Conscious Will, he explains the picture of the conscious mind's role in reality that he still believes the Libet experiments are able to prove:
“Does the compass steer the ship? … [not] in any physical sense. The needle is just gliding around in the compass housing, doing no actual steering at all. It is thus tempting to relegate the little magnetic pointer to the class of epiphenomena — things that don’t really matter in determining where the ship will go. Conscious will is the mind’s compass.”
In other words, determinists who agree with Harris and Wegner believe that preceding unconscious brain events are the cause of both our future behaviors, and our later, illusory feeling of “choosing” those behaviors. It isn’t just that our experiences of choice are determined; it’s that they’re completely superfluous to the chain of events that even lead to the actual execution of action—to them, the brain activity that can be spotted 300ms before you “decide” to flick your wrist in Libet’s experiment would cause you to flick your wrist, even if it didn’t cause you to feel like you were “deciding” to flick your wrist as an incidental step along the path towards that destination. To them, it isn’t just that our will isn’t “free” when it causes our actions—it’s that our will doesn’t cause our actions at all.
_______ ~.::[༒]::.~ _______
If anyone should allow himself to really sink down into reinterpreting his moment–by–moment experiences in light of this idea, he will soon realize that it is an excellent recipe for producing the pathological state known as depersonalization. Indeed, according to these people, what a dysfunctional person in the depersonalized state experiences is actually a far closer reflection of reality than what the rest of us experience all the rest of the time. I think we should keep it very clearly in mind that what is at stake here is whether or not science has proven that a pathological state which tends to be comorbid with other pathologies like major depression and schizophrenia reveals fundamental truths about the reality of human consciousness that the rest of us live in illusory denial of.
To repeat the explanation in my words, the Libet–type experiments first have a subject sit down in front of a clock, while hooked up to an EEG (or fMRI). Then, they explicitly instruct that subject to perform some simple motor activity at random. Absolutely nothing is at stake in the decision; there is no goal to achieve, there are no values or variables to weigh or choose between, and no number of button presses or wrist–flicks is too high or too low. There is no way to “win,” there is no way to “fail,” and there are no alternative outcomes in the experiment for the subject to pick between. With absolutely no goals or constraints, subjects in these experiments are told to sit back and perform a perfectly purposeless motion at random for which they have absolutely no reason in principle to choose one moment over another.
Stop right there.
Keep this fact very clearly in mind: we’re using this study to evaluate free will.
Now, ask yourself: does this sort of scenario even seem relevant at all to free will?
Let’s get back into the first–person position on these experiments.
If you agree to join in Libet’s experiment, what are you going to feel?
Imagine I have just told you to repeat Libet’s experiment—that I’ve just said to you: “I want you to sit back, and whenever you feel like it, I want you to flip your wrist over. Then, I want you to do it again. And keep doing it until I tell you to stop.”
What is that going to feel like?
It is immediately obvious that this does not even feel like an exercise of free will.
To be sure, it may have felt like an exercise of free will to decide whether or not to join Libet's experiment at all, rather than spend the day doing something else instead. But once I've sat down and consented to follow Libet's instructions, what does my mental activity consist of?
It consists, primarily, of waiting. For what? An urge to move my hand.
To do what? To appear.
In other words, when I sit down and consent to follow Libet’s instructions, I have already made the conscious decision to place myself into a specific, and very peculiar, state of consciousness. I have cleared my mind. I am focusing all of my conscious attention onto my hand. And it is as if I’ve consciously chosen to initiate an automated “program” which orders my subconscious to generate the sensation of an urge to move—at random—while simultaneously holding the intention to act on that sensation, after it appears. I have made the decision to set myself into this state of consciousness, and I am actively holding myself in it for the purposes of this experiment.
Is it not precisely part of my very experience itself that in a case like this, a sensation that feels like a spontaneous “urge” does in fact appear before I make the decision to move?
Of course it is.
So is it any surprise at all to find that brain activity of some sort can be found flickering prior to the time at which I consciously register making the decision to flip my wrist? I don’t think it is. In fact, I think generalizing from a case like this to the conclusion that our decisions in general are determined by subconscious processes before we ever feel as if we’re deciding to make them is downright goddamn idiotic. Sheer introspection alone leads us to expect that we would see brain activity appear prior to our decision to flip our wrists over, because participating in Libet’s experiment would feel exactly like placing myself in the conscious state of waiting for a particular kind of sensation to surface into my conscious awareness before acting.
Libet’s experiment would feel like that. Ordinary exercises of what we feel to be our free will to decide do not. So the simplest conceptual analysis of what would happen in an experiment like the one Libet designed is already enough to establish that these experiments quite simply have no bearing on the matter of free will at all.
So here is the crux: when the Libet study's interpreters decide to label the preceding brain activity as "the subject's soon–to–be 'consciously willed' decision, deterministically turning into a 'decision' beneath the surface, outside of the subject's conscious mind" rather than "the urge the subject has consciously ordered his subconscious to randomly generate, appearing exactly as cued", that is not science. That is philosophy, in that it makes a call about how to bridge the subjective aspects of our first–person experience with the outward results of third–person observation, a gap that cannot be crossed by empirical investigation unaided.
And not only is it a philosophical call—it’s a bad one.
But the fallacy of scientism goes so unchallenged by the modern mind that, for the most part, few people commenting on the Libet experiments have noticed something this simple, basic, and rudimentary—something that should have been obvious a hell of a long time ago.
_______ ~.::[༒]::.~ _______
There are plenty of other disqualifying technical problems with Libet's experiment, besides. For example, Libet was able to determine that the "readiness potential" preceded the decision to act because he programmed a computer to record the preceding few seconds of brain activity in response to a subject's muscle activity. In other words, from the very first moment, he never had a damn clue how often "readiness potentials" appeared without triggering muscle movement, because what Libet did not do is keep a continuous record of the subject's brain activity to prove that a "readiness potential" always produced movement; rather, that activity was only recorded in retrospect, when the subject actually moved, and at no other time.
Further studies have made it clear that this was, in fact, a significant problem for Libet’s conclusions: in 2015, a team led by Prof. Dr. John-Dylan Haynes created a video game that would have a subject face off against a computer enemy which was programmed to react in advance to the intention to move as indicated by the human player’s “readiness potentials” (Point of no return in vetoing self-initiated movements). If “readiness potentials” were deterministic, the computer would always be able to predict the human player’s movements in advance and would therefore always win. If they weren’t, then the human player would be able to adapt to the computer’s pre–emptive response by changing his plan mid–course.
And, in fact, that was what the team found.
“A person’s decisions are not at the mercy of unconscious and early brain waves. They are able to actively intervene in the decision-making process and interrupt a movement,” says Prof. Haynes. “Previously people have used the preparatory brain signals to argue against free will. Our study now shows that the freedom is much less limited than previously thought.”
Here's another problem: in the Libet experiments, the "readiness potential" appeared 550ms (just over half a second) before muscle movement. But if you tell someone to perform a physical action in reaction to a sound, it takes only 230ms for them to decide to perform the action in response to the cue (per Haggard and Magno, 1999). We therefore know that conscious decisions can be made in less than a quarter of a second. And if conscious decisions can be made in less than a quarter of a second, what basis do we have for assuming that something happening a whole half second before a decision is made, in some other cases, is the neurological determinant of the decision itself?
But what's interesting about these problems is that it would have been entirely unnecessary to go to the trouble of exploring any of them in the first place if anyone had simply paid closer attention to analyzing Libet's study design conceptually—a simple moment of clarification of some of the most basic philosophical issues at play in an experiment designed like this could have saved us a lot of wasted time. It would have been clear from the outset what was probably going on.
_______ ~.::[༒]::.~ _______
In the decades since Libet's original work, has better evidence come along to support his conclusions? Sam Harris immediately followed his statement about Libet with a description of "another lab [that] extended this work using functional magnetic resonance imaging (fMRI)…." The lab he refers to is Chun Siong Soon's, whose 2008 study was published in Nature Neuroscience.
While the activity measured in this study was still, as before, purposeless, with no goals or constraints, it did change one substantial thing. According to the way Soon et al. summarized their own research—in the paper titled "Unconscious Determinants of Free Decisions in the Human Brain"—
“There has been a long controversy as to whether subjectively ‘free’ decisions are determined by brain activity ahead of time. We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 s before it enters awareness.”
The actual point this new study was supposed to add to the already–existent debate was that it was supposed to establish the capacity of these scientific measurements to predict not just the general timing of a single choice, but now in fact which of two—count them, two!—equally meaningless choices the subject would choose between. And the conclusions we are supposed to draw from this are, again, wide–reaching—returning to the summary from Harris:
“One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You (only) then become conscious of this “decision” and believe (falsely) that you (“you”) are in the process of making it.”
What do the particular new facts drawn by this study really add to the picture?
There is one thing that neither Harris's reference to this study, nor Soon et al.'s own summary of it in Nature Neuroscience, will clearly tell you—quoting Alfred Mele:
“ … the predictions are accurate only 60 percent of the time. Using a coin, I can predict with 50–percent accuracy which button a participant will press next. And if the person agrees not to press a button for a minute (or an hour), I can make my predictions a minute (or an hour) in advance. I come out 10 points worse in accuracy, but I win big in terms of time. So what is indicated by the neural activity that Soon and colleagues measured? My money is on a slight unconscious bias toward a particular button—a bias that may give the participant about a 60–percent chance of pressing that button next.”
Notably, this 60–percent figure is a drop from a predictive value of 80–90% in cases where what is being predicted is the moment chosen to commit a single predefined action, like Libet's wrist–rotating. Even with the increased understanding of neurophysiology developed over the past handful of decades, and even with refined neuroimaging techniques, the predictive power of the "readiness potential" in this study still immediately drops by at least 20 points—down to little over chance—with even a slight shift of the experiment's design towards something that comes just marginally closer to resembling the kinds of decisions in which we actually deliberate—and feel as if we deliberate freely—over a choice. (Remember, you'd have about 50% accuracy if you were just guessing, so 60% is even less impressive than it sounds at a glance: that 60% accuracy has to be compared against a baseline of 50%.)
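Mele's coin analogy can be made concrete with a toy simulation (my own illustration, not code or data from any of the studies): a "predictor" that simply always guesses whichever of the two buttons a participant is slightly biased toward will hit roughly 60% accuracy, with no deterministic foreknowledge of any individual choice involved.

```python
import random

random.seed(0)

def predictor_accuracy(bias, trials=100_000):
    """Simulate a participant who presses one of two buttons with a slight
    unconscious bias, and a predictor that always guesses the favored button.
    Returns the fraction of presses the predictor gets right."""
    hits = 0
    for _ in range(trials):
        pressed_favored = random.random() < bias  # participant's press
        if pressed_favored:  # predictor guessed the favored button
            hits += 1
    return hits / trials

print(f"no bias (fair coin): {predictor_accuracy(0.5):.2f}")  # ~0.50
print(f"slight 60/40 bias:   {predictor_accuracy(0.6):.2f}")  # ~0.60
```

On this picture, the brain activity Soon's team measured need only reflect a mild statistical tilt toward one button, not a decision that has already been settled.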
But yet again, even if the predictive value of the “readiness potential” in these expanded cases were 100%, why should even that have concerned me? When I go into Soon’s laboratory, I am walking in deliberately setting the conscious intention in advance to sit back and think about nothing other than letting myself push either one or the other button at random. Absolutely nothing weighs on the decision; I am by definition putting myself in the peculiar conscious state of waiting to act on a random urge which I have no reason for caring about. Even with this meaningless “choice” between two absolutely meaningless options added to the scenario, it doesn’t even feel like the kind of deliberation in which I feel as though I possess the power to do otherwise. In the case of Soon’s experiment, just like Libet’s, participating would feel exactly like waiting for some sensation to rise up into conscious awareness out of my subconscious, at which point I have already set the intention to act on it when—meaning after—it appears.
So even a study design like Soon’s would have nothing to say about free will even if it found that it could predict my decision 100% of the time (because perhaps all the brain scans are identifying is the appearance of the impulse–sensation that I’ve walked into Soon’s lab agreeing to sit and wait for). But the meager results of these studies turn out to be even less impressive than that. By far.
_______ ~.::[༒]::.~ _______
As I said in the opening chapter of this series,
At these stages of argument, it should not be mistaken that I am ever arguing that the reason we should reject a physicalist account is just because it dehumanizes us (in the sense of "making us feel dehumanized," or at least being something which arguably should). Rather, if a physicalist account should be rejected, it should be rejected first and foremost because it either explicitly denies, or else, by failing to be able to account for them, implicitly denies, some parts of what we really, truly, in fact and in reality, actually are. However, an intrinsically connected component piece of this picture is that if an account does explicitly or implicitly deny some aspect of what we really are, then believing an objectively impoverished account of the world may lend itself to a subjectively impoverished internal or relational life.
Believing in the claim of solipsism, for example (that my subjective experience is the only one that truly exists in the world, whereas everyone else is something like a figment of my imagination, lacking actual internal experiences completely, so that life is quite like a computer game in which everyone else is artificially computer generated while I am the only actual player), would—first and foremost—be a philosophical mistake. However, we would be justified to oppose that mistake both because of the objective, abstract errors that it commits as well as, simultaneously, the internal, emotional, and social consequences that would likely result from someone's believing it: the two are, in other words, not necessarily separable—solipsism would have these consequences because of its mistakes, and those mistakes are important because of the consequences. Where arguments for the socially or psychologically detrimental consequences of physicalist accounts are made, they should not be mistaken for emotional appeals to consequences which simply argue that we must believe these accounts are false because we shouldn't want them to be true; we have (so I will claim) all the demonstrable reasons for believing them false we should need. But if accounts of the world and the self are factually impoverished, they will arguably lead to an impoverished relationship to the world, the self, and others in consequence, and we can oppose them for both reasons at the same time.
The point extends into our present discussion of free will.
Not only is it the case, as previously noted, that the majority of respondents from the United States to India to Colombia believe that “moral responsibility is not compatible with determinism”; it actually has been recorded repeatedly that altering someone’s belief in free will impacts their moral behavior.
In 2008, Kathleen D. Vohs and Jonathan W. Schooler found that prompting participants with a passage from The Astonishing Hypothesis (in which the researcher Francis Crick writes, “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Who you are is nothing but a pack of neurons.”) made them significantly more likely to cheat on a math test.
In their first experiment, “cheating” involved failure to press the space bar on a keyboard at an appropriate time—so in order to rule out the possibility that disbelief in free will simply made participants more passive in general, they conducted a second experiment in which “cheating” would involve active behavior (namely, overpaying themselves for providing correct answers to a multiple choice test). Going even further, the second experiment also tested the impacts of increasing participants’ belief in free will. And once again, those whose belief in free will was strengthened cheated less, while those whose belief in free will was undermined cheated more.
In 2009, Roy F. Baumeister and colleagues extended this line of research. In a first experiment, participants were presented with hypothetical scenarios and asked how they felt about helping individuals described as being in need—and those who were prompted with disbelief in free will were significantly less likely to help. The second experiment offered participants a description of a fellow student whose parents had just been killed in a car accident, and then presented them with an actual opportunity to volunteer to help—those who were prompted with disbelief in free will remained significantly less likely to volunteer even when the situation became real.
Finally, participants in the third experiment were told they were helping the experimenter prepare a taste test to be consumed by an anonymous stranger while being given a list of foods the stranger liked and disliked. This list explained that the stranger hated hot foods most of all—and participants, after being sorted into groups prompted with various beliefs about free will, were judged according to how much hot sauce they poured onto the stranger’s crackers. Participants who, before the experiment, were told that free will doesn’t exist gave the taste-tester twice as much hot sauce as those who read passages supporting the ideas of free choice and moral responsibility.
Jonathan Schooler, writing in Free Will and Consciousness: How Might They Work?, explains:
“One possibility is that reflecting on the notion that free will does not exist is a depressing activity, and that the results are simply the consequence of increased negative affect. However, both Vohs and Schooler and Baumeister et al. assessed mood and found no impact of the anti–free will statements on mood, and no relationship between mood and prosocial behavior. … Baumeister et al. argue that the absence of an impact of anti–free will sentiments on participants’ reported accountability and personal agency argues against a role of either of these constructs in mediating the relationship between endorsing anti–free will statements and prosocial behavior. … [But] just as priming achievement–oriented goals can influence participants’ tacit sense of achievement without them explicitly realizing it (Bargh, 2005), so too might discouraging a belief in free will tacitly minimize individuals’ sense of accountability or agency, without people explicitly realizing this change.”
And so, as an empirical matter of fact, when you give people an ideological license to loosen their senses of accountability and agency, they find excuses to be assholes.
“ … We are always ready to take refuge in a belief in determinism if [our] freedom weighs upon us or if we need an excuse.” — Jean-Paul Sartre
_______ ~.::[༒]::.~ _______
A bizarre series of intellectual double standards underlies the equivalent attempt to defend the value of spreading belief in determinism. Determinists have long rested on the supposed immorality of retribution to stake their claim that spreading belief in determinism should help create a more “ethical” world. As the story goes, we only want to see someone who commits a moral offense suffer for the sake of suffering because we believe that they “freely chose” to act as they did. Suppose someone commits a public act of violent rape: if we assume that he was beyond all capacity to control his impulses, we’ll want to help him not do that again instead of punishing him. Thus, many liberals hope that spreading belief in determinism would help create a public consensus for shifting the motivations upon which the criminal justice system is centered away from retribution and towards rehabilitation instead.
But why should that follow? If the violent criminal is without any deep moral form of guilt for his act because he has no deep moral responsibility for anything at all, then I too am without any deep moral form of guilt when I desire to see him violently punished for it—I hold no deep moral responsibilities for my actions or desires either, after all, so why shouldn’t I “excuse” myself for wanting to see him severely punished in just exactly the way that I “excuse” him for his act of rape? The determinist can give no reason—or at least not one that actually requires belief in metaphysical determinism.
In The Atheist’s Guide to Reality, Alex Rosenberg argues that “the denial of free will is bound to make the consistent thinker sympathetic to a left–wing, egalitarian agenda about the treatment of criminals and of billionaires.” But why should it do that? Naively, Rosenberg thinks that if we conclude that criminals do not deserve to suffer and that billionaires do not deserve to reap the benefits of wealth (since there is no such thing as “deserving” in the moral sense, because there is no such thing as free will), then it follows that we will want to be nice to criminals and redistribute the wealth of billionaires.
What’s overlooked in this is that if there is no such thing as “deserving”, then criminals do not “deserve” to remain free in the society they’ve committed harms against any more than they “deserve” to be punished by it. It’s not as if the fact that they don’t “deserve” to be punished entails that they do “deserve” not to be, because when we eliminate the entire concept of “deserving” by eliminating free will, we aren’t objecting to one isolated claim that someone in a particular circumstance deserves a particular thing; we’re eliminating all such claims. Likewise, if determinism is true, then billionaires may not “deserve” their wealth; but they also do not “deserve” to have their wealth taken away from them, and the general public does not “deserve” to have the wealth that billionaires have created given to them either. Only if free will does exist, and there are some things that individuals hold more or less responsibility for, in differing degrees in different cases, can we reasonably talk about who “deserves” what at all.
Finally, Sam Harris makes the rather utopian claim that promoting belief in determinism should allow us to rid the world of hatred entirely. And in response to those who “say that if cutting through the illusion of free will undermines hatred, it must undermine love as well”, he responds:
“Seeing through the illusion of free will does not undercut the reality of love … loving other people is not a matter of fixating on the underlying causes of their behavior. Rather, it is a matter of caring about them as people and enjoying their company. We want those we love to be happy, and we want to feel the way we feel in their presence.”
But hatred, he says, in contrast,
“is powerfully governed by the illusion that those we hate could (and should) behave differently. We don’t hate storms, avalanches, mosquitoes, or flu. We might use the term “hatred” to describe our aversion to the suffering these things cause us—but we are prone to hate other human beings in a very different sense. True hatred requires that we view our enemy as the ultimate author of his thoughts and actions. Love demands only that we care about our friends and find happiness in their company.”
Wait a second.
Couldn’t everything Harris just said to justify his claim about hatred apply to love, too?
In fact, we could reverse everything that Harris just said about both love and hatred, and his statements would seem exactly as “rational” as they did before. Consider how it would sound:
“Hating other people is not a matter of fixating on the underlying causes of their behavior. Rather, it is a matter of not caring about them as people and not enjoying their company. We want those we hate to be unhappy if we can’t avoid their loathsome presence.
“But love? Love is powerfully governed by the feeling that those we love choose to be who they are. We don’t love ice cream, video games, mosquitoes, or getting over a flu. We might use the term “love” to describe our attraction to the pleasure these things cause us—but true personal love goes deeper in a very significant way. True love requires that we view those we love as the ultimate author of their thoughts and actions. Hatred demands only that we feel the fleeting desire to cause someone unhappiness.”
I think it is clear that the half of Harris’ argument that should be granted is that belief in free will is necessary in order to “truly hate.” However, just as Harris’ distinction between true hatred and hyperbolic ‘hatred’ holds, so does a distinction between true love and hyperbolic ‘love.’ And just as Harris’ determinism only allows room for hyperbolic ‘hatred’ but not the “real” kind, so it only allows room for hyperbolic ‘love’—where the sense in which I “love” my wife is no different in kind from the sense in which I “love” owning a new pair of pants or buying a new iPod. And as Dan Jones writes, the same necessarily goes for principles like forgiveness and gratitude:
“Harris believes that true hatred — the kind we direct towards evildoers, as opposed to mere dislike — implies an untenable view of human behaviour, in that it depends on an incoherent concept of free will. The same must go for forgiveness. It would be daft to talk of forgiving a mountain for an avalanche, but for Harris it must be equally daft to talk of true forgiveness among humans — for what is there to forgive in a deterministic system, whether a mountain or human?
“The same goes for gratitude. You might be thankful that a mountain provided good slopes for skiing one day, but that’s not the true gratitude you show to your friend for teaching you how to ski in the first place. This true gratitude must too fall beneath Harris’s deterministic sword: what is there to thank in a deterministic system, mountain or human?”
_______ ~.::[༒]::.~ _______
However, there is an even more fundamental issue left to discuss.
The physicalist’s claim that we should accept the social value of spreading belief in determinism is actually destroyed at an even deeper level by the fact that, if physicalism were true, it would be incoherent to say that our beliefs ever impact our behavior at all. The only paradigm that can even accommodate the notion that beliefs, as such, could possibly hold their own independent impact on our behavior is one that gives consciousness itself an independent causal role in behavior.
This is because, on physicalism, there are precisely three possible answers (or pseudo-answers) for explaining the relationship between my consciously held “belief” and whatever physical properties of my brain most closely correlate with changes in my consciously held “beliefs”: identity theory, epiphenomenalism, and eliminativism.
Eliminativism would say that there are, in fact, no such things as “beliefs” at all—there are only physical systems linked up in such a way that when this one part moves this way, it causes that part to move that way, in a sheer physical series of causative events. Recall the statement from Alex Rosenberg we explored the implications of in the entry on intentionality:
Suppose someone asks you, “What is the capital of France?” Into consciousness comes the thought that Paris is the capital of France. Consciousness tells you in no uncertain terms what the content of your thought is, what your thought is about. It’s about the statement that Paris is the capital of France. That’s the thought you are thinking. It just can’t be denied. You can’t be wrong about the content of your thought. You may be wrong about whether Paris is really the capital of France.
The French assembly could have moved the capital to Bordeaux this morning (they did it one morning in June 1940). You might even be wrong about whether you are thinking about Paris, confusing it momentarily with London. What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.
It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all. The brain can’t have thoughts about Paris, or about France, or about capitals, or about anything else for that matter. When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.
Don’t misunderstand, no one denies that the brain receives, stores, and transmits information. But it can’t do these things in anything remotely like the way introspection tells us it does—by having thoughts about things. The way the brain deals with information is totally different from the way introspection tells us it does. Seeing why and understanding how the brain does the work that consciousness gets so wrong is the key to answering all the rest of the questions that keep us awake at night worrying over the mind, the self, the soul, the person.
We believe that Paris is the capital of France. So, somewhere in our brain is stored the proposition, the statement, the sentence, idea, notion, thought, or whatever, that Paris is the capital of France. It has to be inscribed, represented, recorded, registered, somehow encoded in neural connections, right? Somewhere in my brain there have to be dozens or hundreds or thousands or millions of neurons wired together to store the thought that Paris is the capital of France. Let’s call this wired-up network of neurons inside my head the “Paris neurons,” since they are about Paris, among other things. They are also about France, about being a capital city, and about the fact that Paris is the capital of France. But for simplicity’s sake let’s just focus on the fact that the thought is about Paris.
Now, here is the question we’ll try to answer: What makes the Paris neurons a set of neurons that is about Paris; what makes them refer to Paris, to denote, name, point to, pick out Paris? To make it really clear what question is being asked here, let’s lay it out with mind-numbing explicitness: I am thinking about Paris right now, and I am in Sydney, Australia. So there are some neurons located at latitude 33.87 degrees south and longitude 151.21 degrees east (Sydney’s coordinates), and they are about a city on the other side of the globe, located at latitude 48.50 degrees north and 2.20 degrees east (Paris’s coordinates).
Let’s put it even more plainly: Here in Sydney there is a chunk or a clump of organic matter—a bit of wet stuff, gray porridge, brain cells, neurons wired together inside my skull. And there is another much bigger chunk of stuff 10,533 miles, or 16,951 kilometers, away from the first chunk of matter. This second chunk of stuff includes the Eiffel Tower, the Arc de Triomphe, Notre Dame, the Louvre Museum, and all the streets, parks, buildings, sewers, and metros around them. The first clump of matter, the bit of wet stuff in my brain, the Paris neurons, is about the second chunk of matter, the much greater quantity of diverse kinds of stuff that make up Paris. How can the first clump—the Paris neurons in my brain—be about, denote, refer to, name, represent, or otherwise point to the second clump—the agglomeration of Paris? A more general version of this question is this: How can one clump of stuff anywhere in the universe be “about” some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?
But whether Rosenberg can incorporate it into his theory or not, that our thoughts are “about” concepts and ideas is the one thing we can’t deny. If the notion that the world is nothing but “chunks of matter” is a notion that can’t account for the fact that this is so, then it is that notion, and not our belief that we have thoughts “about” things, that must go. (Again, I elaborate on this further in entry V).
The next approach a physicalist might attempt is identity theory. For us to be able to differentiate an identity theory about beliefs from an eliminativist perspective, this perspective would have to grant that our thoughts and mental states are “about” things—but that they are, nonetheless, also identical to certain chunks of matter.
The first problem with that style of approach is this: everything Rosenberg just said is true—he has reasoned correctly from his opening premises. If everything is just “chunks of matter”, then it is incoherent that one “chunk of matter” could be “about” some other “chunk of matter” in some other part of the universe. And as we also saw in the entry on intentionality, the project of “building” the intentionality of the conscious human mind out of any sort of proto-intentionality just fails; there’s no way, even in principle, to do it. You can’t cross that bridge by steps any more than you can cross the bridge from drawing on a two-dimensional canvas to creating a three-dimensional figure through any series of lines drawn on that canvas—and you don’t have to spend eternity testing every possible pattern of lines to figure this out; if you pay attention closely enough, you should be able to see that it is impossible in principle. But it can help to draw a few case studies of what some of the attempts have looked like in order to gain a closer intuitive grasp on where the bridge is that can’t be crossed—as, again, we saw in entry V.
The second problem, which is ultimately just the first approached from the opposite side of the same gap, is one we can see with a thought experiment originally presented by Laurence BonJour. As he wrote:
Suppose then that on a particular occasion I am thinking about a certain species of animal, say dogs — not some specific dog, just dogs in general (but I mean domestic dogs, specifically, not dogs in the generic sense that includes wolves and coyotes). The Martian scientist is present and has his usual complete knowledge of my neurophysiological state. Can he tell on that basis alone what I am thinking about? Can he tell that I am thinking about dogs rather than about cats or radishes or typewriters or free will or nothing at all? It is surely far from obvious how he might do this. My suggestion is that he cannot, that no knowledge of the complexities of my neurophysiological state will enable him to pick out that specific content in the logically tight way required, and hence that physicalism is once again clearly shown to be false.
[. . .]
Suppose then, as seems undeniable, that when I am thinking about dogs, my state of mind has a definite internal or intrinsic albeit somewhat indeterminate content, perhaps roughly the idea of a medium-sized hairy animal of a distinctive shape, behaving in characteristic ways. Is there any plausible way in which, contrary to my earlier suggestion, the Martian scientist might come to know this content on the basis of his neurophysiological knowledge of me? As with the earlier instance of the argument, we may set aside issues that are here irrelevant (though they may well have an independent significance of their own) by supposing that the Martian scientist has an independent grasp of a conception of dogs that is essentially the same as mine, so that he is able to formulate to himself, as one possibility among many, that I am thinking about dogs, thus conceived. We may also suppose that he has isolated the particular neurophysiological state that either is or is correlated with my thought about dogs. Is there any way that he can get further than this?
The problem is essentially the same as before. The Martian will know a lot of structural facts about the state in question, together with causal and structural facts about its relations to other such states. But it is clear that the various ingredients of my conception of dogs (such as the ideas of hairiness, of barking, and so on) will not be explicitly present in the neurophysiological account, and extremely implausible to think that they will be definable on the basis of neurophysiological concepts. Thus, it would seem, there is no way that the neurophysiological account can logically compel the conclusion that I am thinking about dogs to the exclusion of other alternatives.
[. . .]
Thus the idea that the Martian scientist would be able to determine the intrinsic or internal contents of my thought on the basis of the structural relations between my neurophysiological states is extremely implausible, and I can think of no other approach to this issue that does any better. The indicated conclusion, once again, is that the physical account leaves out a fundamental aspect of our mental lives, and hence that physicalism is false.
As Bill Vallicella summarizes the argument,
BonJour is thinking about dogs. He needn’t be thinking about any particular dog; he might just be thinking about getting a dog, which of course does not entail that there is some particular dog, Kramer say, that he is thinking about getting. Indeed, one can think about getting a dog that is distinct from every dog presently in existence! How? By thinking about having a dog breeder do his thing. If a woman tells her husband that she wants a baby, more likely than not, she is not telling him that she wants to kidnap or adopt some existing baby, but that she wants the two of them to engage in the sorts of conjugal activities that can be expected to cause a baby to exist.
BonJour’s thinking has intentional content. It exhibits that aboutness or of-ness that recent posts have been hammering away at. The question is whether the Martian scientist can determine what that content is by monitoring BonJour’s neural states during the period of time he is thinking about dogs. The content before BonJour’s mind has various subcontents: hairy critter, mammal, barking animal, man’s best friend . . . . But none of this content will be discernible to the neuroscientist on the basis of complete knowledge of the neural states, their relations to each other and to sensory input and behavioral output. Therefore, there is more to the mind than what can be known by even a completed neuroscience.
So whatever the relationship between ‘beliefs’ as I consciously experience them and the physical state of my brain might be—however close that relationship might be—it is just flatly incoherent to claim that the two things are “identical” (for even more on that, see here). We can see this whether we conceptually analyze what it means for something to be a belief and then reason backwards to see whether something with those attributes could be built out of something possessing only the kinds of attributes that blind physical forces do (this is how Rosenberg arrives at the, er, belief that beliefs do not exist), or we approach the divide from the opposite direction and imagine ourselves peering into the physical dimensions of the brain’s activity in the attempt to find an ‘idea’.
And that leaves just one final option remaining for the physicalist: epiphenomenalism. But epiphenomenalism about beliefs fails for exactly the same reasons that epiphenomenalism about qualia does: namely, that if it were true, we would necessarily be utterly incapable in principle of forming the concept of epiphenomenalism in the first place. Recall our earlier description of why epiphenomenalism about qualia fails:
One of the easiest ways to explain an epiphenomenalist relationship is by example. If you stand in front of a mirror and jump up and down, your reflection is an epiphenomenon of your actual body. What this means is that your body’s jump is what causes your reflection to appear to jump; your body’s jump is what causes your real body to fall; and your body’s fall is what causes your reflection to appear to fall. It may seem to be the case that your reflection’s apparent jump is what causes your reflection to apparently fall, but this is purely an illusion: your reflection doesn’t cause anything in this story; not even its own future states. If we represent physical states with capital letters, states of experience with lower-case letters, and causality with arrows, then a diagram would look something like this:

… P1 ⇢ P2 ⇢ P3 …
   ⇣      ⇣      ⇣
  e1    e2    e3
Thomas Huxley, not the first to espouse the view but the first to give it a name, described it by saying that consciousness is like the steam-whistle sound blowing off of a train, contributing nothing to the continued motion of the train itself. We shouldn’t fail to realize how extreme the dehumanization of this view is, even though it acknowledges conscious experiences as real: if it is true, then nobody ever chooses a partner because they are experiencing love; nobody ever fights someone because they are experiencing anger; nobody ever even winces because they are experiencing pain. Rather, a blind, inert physical state moves by causal necessity from one state to the next; and it is the meaningless motion of these blind, inert forces by causal necessity that explains everything. Conscious experiences just happen to squirt out incidentally over the top of these motions as a byproduct, and you are, in effect, a prisoner locked inside the movie in your head, with your arms and legs removed and absolutely no influence or control whatsoever over what does or does not happen inside of it. In the words of Charles Bonnet, writing in 1755, “the soul is a mere spectator of the movements of its body.”
I would ask you to contemplate the severity of what might result if someone were actually to take this proposal seriously and honestly begin to look at life and their own conscious existence in this horrific and dehumanized way, but according to the claim of epiphenomenalism, believing that epiphenomenalism is true never has any causal effect on anyone’s physical behavior, nor on any of their future mental states, in the first place either. A series of blind, inert physical events leads to their brain responding physically to the input of symbols and lines (it is only a mere epiphenomenon of this that they have any experience of “understanding their meaning,” and any “ideas” contained therein, as such, would in principle have no ability to play any further causal role in anything whatsoever, whether in the individual’s future conscious beliefs or their future physical behavior); and from here a purely physical sequence of physical causation leads to further physical states (which then happen to give off more epiphenomena in turn). On this view, the fact that pain even feels “painful” is a mere coincidence; for it is not because we feel pain and dislike it that we ever recoil away from a painful stimulus: one physical brain event produces another, and it is only an unexplained coincidence that what the first physical brain event happens to give off, like so much irrelevant steam, is a feeling that just so happens to be painful in particular.
It literally could just as well have been the case that slicing into our skin with a knife would produce the sensation that we currently know as “the taste of strawberries,” and the physical world (according to epiphenomenalism) would proceed in just exactly the same way as it does now. This would be true because: (1) epiphenomenalism admits that conscious experiences are something over and above physical events, and we do not know why particular conscious experiences are linked with particular physical events (since the former are not logically predictable from the latter; claims that they simply “emerge” are, by epiphenomenalism’s own definition, acknowledged to fail), and (2) none of them play any causal role in anything anyway. Our conscious lives could have consisted of one long feeling of orgasm, or one long miserable experience of pain, or one long sounding “C” note combined with the taste of blueberries and a feeling of slight melancholy, and again, everything in the physical universe would have proceeded in exactly the same way it does now. It is only thanks to whatever extra rule specifies which particular conscious experiences superfluously ‘squirt out’ and dissipate into the cosmic aether like steam that our world happens to be the way it is rather than otherwise.
Unfortunately, while most people, including philosophers, are content to stop here and reject the view for sheer counter-intuitiveness alone, philosophy of mind has been somewhat lazy at producing actual logical objections to it. Actual refutations of epiphenomenalism aren’t very well known, but there is one that is absolute and undeniable, and it refutes once and for all even the possibility that anything like epiphenomenalism could be true. That is: if epiphenomenalism were true, no one would ever be able to write about it. In fact, no one would ever be able to write, or think, about consciousness in general. No one would ever once in the history of the universe have had a single thought about a single one of the questions posed by philosophy of mind. Not a single philosophical position on the nature of consciousness, epiphenomenalist or otherwise, would ever have been defined, believed, or defended by anyone. No one would even be able to think about the fact that conscious experiences exist.
And the reason for that, in retrospect, is quite plain to see: on epiphenomenalism, our thoughts are produced by our physical brains. But our physical brains, in and of themselves, are just machines; our conscious experiences exist, in effect, within another realm, where they are blocked off from having any causal influence on anything whatsoever (including even the other mental states existing within their realm, because it is some physical state which determines every single one of those). But this means that our conscious experiences can never make any sort of causal contact with the brains which produce all our conscious thoughts in the first place. And thus, our brains would have absolutely no capacity to formulate any conception whatsoever of the existence of conscious experiences; and since all conscious thoughts are created by brains, we would never experience any conscious thoughts about consciousness. For another diagram, if we represent causality with arrows, causal closure with parentheses, physical events with the letter P, and experiences with the letter e, the world would look something like this:
… e1 ⇠ (((P⇆P))) ⇢ e2 …
Everything that happens within the physical world—illustrated by (((P⇆P)))—would be wholly contained within the physical world, where conscious experiences as such do not reside; the physical world is Thomas Huxley’s train, which moves whether the whistle on top blows steam or not. And e1 and e2 float off of the physical world, for whatever reason, and then merely dissipate into nothingness like steam, with no capacity in principle for making any causal inroads back into the physical dimension of reality. This follows straightforwardly as an inescapable conclusion of the very premises by which epiphenomenalism defines itself. But the very brains which produce all our experienced thoughts are contained within (((P⇆P))); so in order for us to have any experienced thought about conscious experience itself, that thought (per epiphenomenalism) would have to be the epiphenomenal byproduct of a brain state that is somehow reflective or indicative of conscious experience. And brain states, again because per epiphenomenalism they belong to the self-contained world inside (((P⇆P))) where no experiences as such exist, are absolutely incapable in principle of doing this.
To refer back to our original analogy whereby epiphenomenalism was described by the illustration of a person jumping up and down in front of a mirror, then: it would be as if the mirror our brains were jumping up and down in front of were shielded inside of a black hole in a hidden dimension we couldn’t see. Our real bodies [by analogy, our physical brains] would never be able to see anything happening inside that mirror. And therefore, they would never be able to think about it or talk about it. And therefore, we would never see our reflections [by analogy, our consciously experienced minds] thinking or talking about the existence of reflections, because our reflections could only do that if our real bodies were doing that, and there would be absolutely no way in principle that our real bodies ever could.
The fact that we do this, then—the fact that we do think about consciousness as such, the fact that we write volumes upon volumes philosophizing about it, and the very fact that we produce theories (including epiphenomenalism itself) about its relation to the physical world in the first place—proves that, whatever the mechanism may be, conscious experiences most certainly do have causal influence over the world. What we have here is a rare example of a refutation that proceeds solely from the premises of the position itself, and demonstrates an internal inconsistency.
But Jaegwon Kim has already identified all the possible options for us! Either experiences and physical events are just literally identical (which even Kim himself rejects, for good reasons we have outlined here); or epiphenomenalism is true (which Kim accepts, but which the simple argument outlined just now renders completely inadmissible); or else the postulate of the causal closure of the physical domain is false—in which case conscious experience is both irreducible to and incapable of being explained in terms of blind physical mechanisms, and possesses unique causal efficacy over reality in its own right.
What goes for the failure of epiphenomenalism about qualia goes just the same for epiphenomenalism about beliefs. It’s not just that epiphenomenalism would necessarily have to remove the belief as such from any causal role in the picture; it’s that in any world that worked that way, it would be impossible in principle for any of its inhabitants ever to form the very belief that their consciously held beliefs are outside of the causal nexus of the physical world. For all of the causally potent material brain events that squirt out these causally impotent consciously experienced “beliefs” would be happening inside of a causal nexus that consciously held beliefs, per se, can never in principle interact with, because they are locked in principle outside of it. Thus, we could never have any consciously experienced beliefs about our consciously experienced beliefs (or about their relationship to the rest of reality) at all. But the very concept of epiphenomenalism is exactly just such a belief—which proves that our beliefs do have causal impacts on reality.
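For what it’s worth, the bare logical skeleton of this self-refutation can be captured in a few lines of formal notation. The following is only an illustrative sketch (here written in Lean); the predicates `Event`, `conscious`, and `causes` are hypothetical names introduced for this sketch, not anything from the argument’s original sources:

```lean
-- Hypothetical predicates, introduced purely for illustration.
variable {Event : Type}
variable (conscious : Event → Prop)
variable (causes : Event → Event → Prop)

/-- Epiphenomenalism's core premise: conscious events cause nothing. -/
def Epiphenomenalism : Prop :=
  ∀ e : Event, conscious e → ∀ e', ¬ causes e e'

/-- But even one consciously experienced belief that causes a physical
    event (e.g. writing a book about epiphenomenalism) contradicts
    that premise. -/
theorem self_refutation
    (epi : Epiphenomenalism conscious causes)
    (belief writing : Event)
    (hb : conscious belief)
    (hc : causes belief writing) : False :=
  epi belief hb writing hc
```

The point of the sketch is simply that the contradiction requires no premises beyond epiphenomenalism’s own defining claim plus the observed fact that beliefs about consciousness do issue in physical acts.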
The only way the physicalist can give causal efficacy to our consciously experienced beliefs is to say that they literally just are a certain set of brain events. But, as physicalists themselves (like Rosenberg) acknowledge, this would mean eliminating from the picture everything that makes our thoughts and experiences what they actually are. So since the physicalist approach of denying their existence utterly fails, and since the physicalist approach of calling them “identical to” the blind causal dispositions of some assembly of neurons also fails, there is no option left which is (1) internally consistent, (2) accounts for all of the facts that any valid theory must account for, and (3) remains “physicalist” in any meaningful sense. And that is why some physicalists end up desperate enough to turn to a theory as blind and idiotic as eliminativism: eliminativism is, in fact, the logical end point of the physicalist premises.
But it is also blatantly absurd. And not absurd like “Hey, did you know the ground beneath you is actually spinning through space really fast even though it feels solid and motionless and stable?”
Absurd like “Hey, did you know that colorless green ideas sleep furiously? This is not a sentence. You are not reading this. In fact, nobody ever reads anything at all.”
Hence, the very fact that our beliefs about free will and determinism—no matter what they are—have the capacity to impact our behavior actually turns out to be an inescapable refutation of the very physicalism which underlies the claim that determinism is the only option because free will isn’t possible within a physicalist universe (as, indeed, it wouldn’t be, if physicalism were true). And that leaves us with all the weight of direct subjective experience itself in favor of human possession of free will on the one side, and nothing on the other.
_______ ~.::[༒]::.~ _______
My concluding comments will require a little more allowance of liberty from the reader than usual, as I will turn now from making logical arguments to explaining something about my own personal view—and so the standard to which my reasoning should be held from here is no longer “can I prove it?” but “does this internally hold together?”
I have argued elsewhere on this blog for the relevance of biological factors in predicting human behavior (for example, near the end of this essay on the relationship between poverty, race, out-of-wedlock birth, and crime). Doesn’t that leave me with some explaining to do? How can there be both free will and proof of genetic influence?
Actually, my view is the only one that can account for the meaningfulness of an idea like the insanity defense. Why is it that “insanity” should reduce a person’s punishment for a crime? What possible rationale is there for that?
In his own attempt to defend this principle, Sam Harris writes:
What does it really mean to take responsibility for an action? For instance, yesterday I went to the market; as it turns out, I was fully clothed, did not steal anything, and did not buy anchovies. To say that I was responsible for my behavior is simply to say that what I did was sufficiently in keeping with my thoughts, intentions, beliefs, and desires to be considered an extension of them. If, on the other hand, I had found myself standing in the market naked, intent upon stealing as many tins of anchovies as I could carry, this behavior would be totally out of character; I would feel that I was not in my right mind, or that I was otherwise not responsible for my actions.
I think most people would say that Harris is just plain wrong about whether the mere fact that behavior is “out of character” means that we do, or even should, judge that a person is therefore “not responsible for (their) actions.” The first time anyone commits a violent act of rape or murder, for example, their behavior is by definition “out of character”. Yet, this fact alone most certainly does not cause us to morally excuse all first-time offenders—nor should it.
The implicit idea behind the insanity defense is that there are some conditions in which a person has less control over their impulses than others, and is therefore less morally culpable for their actions. But if determinism were true, the insanity defense would make no sense, because none of us would ever have any “control” over any of our impulses. All of us would then qualify in the relevant sense as “insane”, all of the time—and the concept would never add any new meaning to any particular case; nothing could make it especially true in some peculiar circumstance, because it would already be as true as it can ever be, for everyone, all of the time. Hence, only if free will does exist can we contemplate situations in which it is overridden, or reduced by varying degrees. “My brain made me do it” cannot be an exculpatory claim for the determinist—but it can be for the believer in free will (if and when other facts support it).
In any case, my own view of free will in the relationship between the mind and the brain—simplified—goes something like this:
• (A) The conscious mind has the metaphysical capacity to choose between, and to inhibit, brain-based impulses (but exercising this capacity requires the expenditure of a certain kind of probably limited “energy”).
• (B) Most of the time, the conscious mind is “in the driver’s seat”—but there are probably some unique circumstances in which it actually can get thrown out of that seat, thus rendering the driver proportionally less morally responsible for where the car ends up going in such unique cases.
• (C) Our biology essentially determines the impulses which we experience, and then possess the capacity to choose between, in the first place.
• (D) Empirical science has revealed that genetics plays a substantial role, far larger than most environmental inputs, in hardwiring the biology which in turn determines those impulses.
• (E) As a contingent fact, it is true that people usually decide to act on their impulses. But those impulses do not absolutely determine their ensuing actions most of the time.
The picture we get is one where the conscious mind is highly analogous to the “driver” of a vehicle, yes—but the vehicle is more like a boat than a car, and the fact that someone is holding the wheel doesn’t mean he possesses the power to drive the boat absolutely anywhere, at any time, without external constraints. On the contrary, whether the driver or the waves of the ocean are more influential in determining where the boat will go at any given point in time depends on various weather conditions and other circumstances which, themselves, are outside of the driver’s absolute control.
But barring more severe kinds of circumstances, someone who drives the boat well could thereby navigate to a part of the ocean where the waves will exert relatively less influence, and his driving skills therefore relatively more influence, over where he goes next.
And it has been increasingly validated by empirical science that belief in free will can help us to drive better—to the point that implicitly prompting someone to disbelieve in free will is even known to measurably slow their reaction times. On the assumption that determinism is true, how is the determinist supposed to explain this? The proponent of free will can explain it easily: reminding someone that they have free will can prompt them to use it more, in just the same way that someone who has given up trying to drive a boat they can’t seem to control can benefit from a motivational speech reminding them that they can still get out of the storm they’re in if they grab back onto the wheel and keep focusing their attention—because there is in fact a “driver” there who either may exercise that capacity, or may not.
And this is true even if, at other times, the implication that their driving was solely responsible for getting them into the storm in the first place can be further frustrating to them, when that implication happens to be false. The problem in those cases is that it wasn’t the case—not that it couldn’t have been, or never is at all. Indeed, a neuroscientist who happens to be a dualist has had more success treating OCD than anyone so far operating under a materialist paradigm, through methods that ask patients to practice focusing their subjective mental attention as a means of ultimately rewiring the impulses which they experience. The materialist will of course simply hand-wave this away, since to him changes in subjective conscious attention are just “chunks of matter” being rearranged anyway—but if that were so, it would be impossible in principle for consciously experienced events as such to have any sort of independent causal potency over physical brain events at all.
In sum: the scientific studies from Benjamin Libet and those who followed in his footsteps do nothing to refute the possibility of metaphysical free will. If the determinist wants to argue that determinism has any sort of social or psychological benefit, he’s going to have to deal with the problem that no version of physicalism seems able to account for the possibility that beliefs, as such, could have independent causal efficacy of their own over the physical states of our brains in the first place (without running into other, absolutely insurmountable problems that have been detailed elsewhere throughout this series). But it turns out that research is coming to establish that belief in free will has far more benefits than belief in determinism anyway—and the idea that we should tell people that free will is impossible, or false, while telling them that they should believe in it anyway, is an obvious dead end. It may “only” be the evidence of direct subjective experience that stands in favor of the existence of free will—but nothing solid stands against it.
_______ ~.::[༒]::.~ _______
In the Harris excerpt I read, a mention of the Soon studies followed the break after this paragraph. He may have been referring to the studies of Haggard and Eimer in the part which preceded the break; in any case, Soon’s is one of the most recent modern “replications” of this kind of finding.