Goldilocks and the Three Barefoot Shoes

I’ve been wearing barefoot shoes for years now. I believe I’ve found the ultimate pair to recommend to anyone who wants my advice on diving in, so I’m writing my first-ever product review. Read on to see me explain:

  • Why YOU should be wearing barefoot shoes,
  • Why some barefoot shoes are better than others,
  • And why I think these are the best!


Click the image to head to the manufacturer’s website!

Table of contents:

How do they work?
Why should I wear barefoot shoes?
Why your “barefoot shoes” need to actually feel barefoot
Why the GoSt-Barefoot Paleos®?
Do they make any noise?
How do they handle the heat?
What does the texture feel like?
How do they fit?
The Sizing Process
The Look
An Unexpected Use

How do they work?

GoSt-Barefoot offers several different shoes which incorporate chainmail into the design. The “URBAN” series includes a slipper, a huarache, a sports shoe, a casual wear shoe, a trail running shoe, and a shoe made for soggy terrain. All of these include fitted outsoles. I’m after the most barefoot experience possible, so these weren’t my thing.

The “CLASSIC” and “ULTRA” series look similar to what you’ll see displayed in photos on this review. The main difference between these models is the pattern of the lacing through the top of the shoe. From the Ultra series, the Delinda is advertised for casual use, the Anterra for hiking, and the Pronativ for running on nature trails. The Classic lacing runs all the way up to the toes, to ensure you can get a good fit no matter the shape of your feet.

I chose the Classic.


After you choose a Classic or Ultra shoe, you have to decide whether you want “paws.” The paws are sewn in a cobblestone pattern across the bottom of the shoe, based on the concept of an animal’s paw. When each ball presses down, it expands its surface area across the ground slightly to provide a solid grip. And because there is space between the paws, loose materials like mud and sand slip right through while they seek traction. If you plan to wear these on slick or polished surfaces, you’ll have to include the paws or you’ll slip everywhere.


There are three options for the paws. They’re called ‘Water,’ ‘Soil,’ and ‘Cliff.’ ‘Water’ is soft, with a Shore hardness rating of 65A. That’s about as hard as the material used in a pink eraser. They also add about 2mm to the final thickness of the shoe. ‘Soil’ bumps the hardness rating up to 78A, the same as the wheels of a roller skate or skateboard. It also adds another half a millimeter to the thickness. Finally, ‘Cliff’ adds another half a millimeter and boosts the hardness rating to a whopping 90A. That’s about as hard as a tire tread.

I went with the ‘Soil’ paws for mine.

The chainmail varies from 1.1 to 1.6mm thick, depending on how far the rings are stretched. The website claims it averages out to 1.4mm. If you only ever plan to use these on soft surfaces like sand or soft ground, you could leave off the paws. This would mean far more durability and far more natural ground feel than you could get from any other shoe. Ever. They would enhance your grip, and they’d protect you from landing on things like nails and burrs.

The ‘Soil’ paws add 2.5mm to the chainmail’s average 1.4mm. That’s a grand total of 3.9mm between my feet and the ground. Check out an actual-size online ruler if it hasn’t sunk in just how thin that is.
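If you want to compare the options at a glance, here’s a quick tally of the stack-up using the figures quoted above (the constants and helper name are mine, not GoSt’s; the paw thicknesses are the approximate ones from the review):

```python
# Approximate sole stack-up per paw option, in millimeters,
# using the figures quoted in this review.
CHAINMAIL_MM = 1.4  # website's claimed average chainmail thickness

PAWS_MM = {
    "Water": 2.0,  # Shore 65A -- pink-eraser territory
    "Soil": 2.5,   # Shore 78A -- skate-wheel territory
    "Cliff": 3.0,  # Shore 90A -- tire-tread territory
}

def total_thickness_mm(paw=None):
    """Chainmail plus optional paws, rounded to one decimal."""
    return round(CHAINMAIL_MM + (PAWS_MM[paw] if paw else 0.0), 1)

for option in (None, "Water", "Soil", "Cliff"):
    print(option or "No paws", total_thickness_mm(option))
```

With the ‘Soil’ paws, that works out to the 3.9mm total mentioned above.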

If the paws wear out, you can send them back to GoSt to get a deep cleaning and a paw replacement. At about $100, this is less than the price of replacing most worn out barefoot shoes. If you somehow manage to wear away the paws and the chainmail before sending them in, there is another option to replace the entire sole. That one’s about $150.

And I have no doubt in my mind that these will last far longer than the average barefoot shoe.


Why should I wear barefoot shoes?

Because you shouldn’t be wearing any others. Ever, if you can help it. The shortest way to explain the problem? You know why it’s a bad idea for women to wear high heels. So you already know why we should all be avoiding shoes.

High heels shorten your calves, and the effect can be permanent. They stiffen the Achilles tendon, which connects the calf muscle to the bone. This makes it nearly impossible to stretch the calf. As a result, women who wear high heels experience pain when they aren’t wearing them. And the muscles can become so short that walking in anything else will be painful for the rest of your life. High heels also force an unnatural gait that adds pressure to the knees, fast-tracking you towards arthritis. They force you to walk with your pelvis tilted forward in order to remain balanced, which causes lower back problems. You might also roll your shoulders forward to compensate for the curve in your lower spine. At that point, you are throwing your entire skeleton out of whack.

But there’s nothing magic about “high heels.” High heels cause these issues because: 1. they elevate the heel relative to the rest of the foot, and 2. they cut off your ability to feel the ground. That’s it. And guess what?

Your shoes do both of these things, too. Yes, if your heel isn’t as elevated, and you aren’t as cut off from the ground, then the issues don’t form as fast. But the farther you get from being barefoot, the faster they do. Your feet should be flat from heel to toe. You should be able to feel how much impact you’re striking the ground with.

If you’re interested in learning more, just follow Kelly Starrett.

This man is one of the world’s foremost experts on everything related to human mobility. Starrett has done mobility training with everyone from professional ballerinas to elite members of the U.S. military. You can find videos online of him explaining why you should never wear sandals. His book Ready to Run is the best explanation of the ‘why’ and ‘how to’ of barefoot running around. Search Google for “Kelly Starrett” and you’ll find an endless wealth of information.

If you’d like a shorter article to read, there’s an excellent one in New York Magazine here:

[A study] examined 180 modern humans from three different population groups (Sotho, Zulu, and European), comparing their feet to one another’s, as well as to the feet of 2,000-year-old skeletons. The researchers concluded that, prior to the invention of shoes, people had healthier feet. Among the modern subjects, the Zulu population, which often goes barefoot, had the healthiest feet while the Europeans—i.e., the habitual shoe-wearers—had the unhealthiest. One of the lead researchers, Dr. Bernhard Zipfel, when commenting on his findings, lamented that the American Podiatric Medical Association does not “actively encourage outdoor barefoot walking for healthy individuals. This flies in the face of the increasing scientific evidence, including our study, that most of the commercially available footwear is not good for the feet.”


Why your “barefoot shoes” need to actually feel barefoot

Vibram has been challenged in court over its claim that barefoot running is less likely to result in injury. Turns out, a lot of people were getting more injuries wearing Vibrams. Vibram settled out of court for a whopping $3.75 million.

What’s going on here? Why am I still interested in barefoot shoes?

One of the problems is that many of Vibram’s shoes weren’t really “barefoot.”

If you have enough padding on your shoes that you can’t feel the ground, you’re going to run in the unnatural way that any other shoe would cause you to. Yes, even if they have slots for your toes. And if you’re going to run unnaturally, then it may be better to do so with extra cushioning than to do so with less.

If you can’t feel the ground very well, then you’re going to be more likely to strike the ground hard with your heels. This sends shockwaves through your bones and joints, instead of the muscles designed to absorb them when barefoot. This will especially wear out your knees over time. Many so-called “barefoot shoes” have soles as thick as 14mm.

This is enough to reduce the amount of padding cushioning the impact to your joints. But it’s not enough to cause you to change your running form and send that impact into your muscles instead of your joints. So you keep running the same way, and you end up doing more damage and having more injuries.

This is why it’s important to find a barefoot shoe that actually feels barefoot.

You don’t want a traditional running shoe that gives you less cushioning.

You want a barefoot shoe. 

As I searched for the most barefoot possible barefoot shoe, I found that there was something very natural about being able to physically feel my environment through my feet. The more I got used to this, the harder I found it to tolerate wearing thicker shoes even for short periods of time. They began to make me feel like I was wearing thick winter gloves I wasn’t allowed to take off.

Or like Bubble Boy—isolated from the world by an artificial shield. This grew to the point where I couldn’t stand it anymore. My feet felt just like my hands—a legitimate part of my body that didn’t deserve to be locked up, withering away in a cast. As I adapted to barefoot shoes, it really did feel this bad. Several Vibrams I tried felt this way too.



Why the GoSt-Barefoot Paleos®?

If you know me, I know what you’ll be thinking.

To recap, this is me:


Of course the guy who spent two years growing out his hair just to rip off Vikings would want chainmail shoes. Clearly it’s an elaborate fashion stunt, right? But I can promise you, the aesthetic was only an added bonus.

Here are the top perks that you won’t find in any other shoe:

  1. There is never an issue walking in rain or water.
  2. The shoes will never smell, or cause your feet to smell.
  3. If you want to take them off, you can slide them into a pocket.
    ^ This is especially useful for travelers and one-baggers.
  4. Nowhere else can you find this balance between barefoot ‘feel’ and durability.
  5. They’re excellent when it’s hot out (see below).

My previous favorite barefoot shoes, the Vibram KSO Treks, were awful when it came to wet weather. Your foot is close to the ground when you’re wearing a barefoot shoe. The second you step in even the most shallow puddle, your foot is being soaked. The KSO Treks would soak up water like a sponge and then stay soggy like a wet mat if they ever got caught in a light rain. This wasn’t pleasant. Especially when I got caught in the rain while vagabonding alone through New York.

With the Paleos®, stepping through water is one of the best parts. Nothing is going to be swimming around on my bare foot, and I’ll be dry within minutes. I won’t lose traction and slip because my foot is wet, either.


But point #4 is what sold me on these shoes. The ideal barefoot shoe should feel as close to being truly barefoot as possible, but without being so flimsy that I’m afraid of actually using it.

Which finally brings me to the story of Goldilocks and the Three Barefoot Shoes. 

In my search for the most barefoot shoe possible, I found two ultra-thin options providing coverage to the top of the foot. Vibram’s thinnest model, the El-X, features a 2.7mm max sole. The only way to go below this was to order from Feelmax. Feelmax uses a special patented German rubber to achieve a 1mm sole.

However, the 1mm models from Feelmax are sold exclusively for indoor use. There’s no trick on Earth that can make a 1mm sole durable out in the elements. And wearing Vibram’s El-X feels like slipping on a latex glove. Yes, it’s as close to not wearing anything as you’re going to get, but you can also feel that this isn’t going to last long. You could probably stab the shoes with a paperclip and they’d be done for.

On the other hand, the Vibram Bikila clocks in at 8.5mm. After trying out the thinner shoes, they felt like walking with a tatami mat strapped to my foot (like you’d find on the floor of a martial arts studio).

The optimal compromise I came to at the end of this search was Vibram’s KSO Trek, with a max sole of 4.7mm. These felt durable, but they also offered plenty of “barefoot feel” at the same time. So this is where I settled my search a year ago. Vibram has since discontinued them, however, so now I snatch them up used when I can.

Amazingly, the GoSt Paleos® feel every bit as durable as the KSO Treks despite being almost a millimeter thinner. They feel significantly more durable than any shoe made thinner than the KSO Treks. And with the entire shoe being open, the increase in barefoot “feel” brought by the Paleos® over the KSO Treks is far more than a millimeter. It is a remarkable achievement to create a shoe that feels this close to truly barefoot yet doesn’t make you terrified of beating it up. I’ve yet to find anything else that achieves this.


Do they make any noise?

Not really. Here’s an audio recording of me holding them up close to a microphone and shaking them as hard as I can. Similar to, but barely a fraction as loud as, the sound of someone jingling change. And they don’t even get as loud as this when running. Not even during all-out heavy sprints. I have to shake them in my hand to get this sound.



How do they handle the heat?

Before getting these, I spent a couple hours browsing comment sections to see what concerns came to people’s minds. One of the most common worries was that they would absorb the heat from the sun and then burn you.

This couldn’t be farther from the truth. I live in the mountains in Georgia where summers hit 90°F, and I can honestly say that these are cooler than sandals. Why? Picture going shirtless on a hot day. Now imagine wearing a loose white shirt that blocks the sun while allowing you to feel the breeze. The shirt makes you cooler, right?

First, the silver color works even better than a white shirt at reflecting the light from the sun away from you. A white shirt gets brighter in the sun. These glow. And that is a visual representation of how much heat they are deflecting off of you; the light of the sun carries much of its heat. Second, unlike any other footwear on Earth, the bottom of your foot can breathe. The air hits 360° around your entire foot. You actually feel the breeze on your soles.

Military and sports research has studied the best way to stay cool in hot weather. Is it better to block your exposure to the sun at the cost of less ventilation of your skin? Or is it better to strip down to increase airflow at the cost of increased sun exposure? The answer is complicated. But there are circumstances where blocking the sun is preferable; Bedouins in the hot deserts of the Middle East cover their heads with those thick scarves for a reason.

Either way, wearing loose, breathable fabrics gets you the best of both worlds. They allow your sweat to evaporate in the breeze so it can cool you off, and they block the sun.

One of my favorite pairs of pants of all time is from Vertx: the Phantom Ops with Airflow. They’re designed like full pants, with several sections of fabric replaced with mesh. By blocking the sun while still allowing your sweat to evaporate in the air, they make you feel even cooler than shorts. And the GoSt Paleos® achieve the same effect in a shoe.

Here I am wearing both. Notice how similar the texture of the mesh and the chainmail is:


After wearing them normally, I tried standing still on hot pavement with my feet directly exposed to the searing noon sun. After several minutes straight without moving, they finally started to get warm and tingly (but never painful). They also cooled back off within moments as soon as I moved them. When you walk, they’ll be moving in and out of the shadow cast by your body. This prevents them from building up warmth through continuous contact with sunlight. If for some reason you had to stand still for hours in direct sunlight, the saver socks would solve even that issue.

Ironically, they do a much better job conducting cold than heat. I didn’t notice this until I walked through my grocery store’s frozen produce section. As soon as the air from the freezer touched my feet, it was like the shoes absorbed every bit of it and my foot was suddenly surrounded by rings of cooling gel. For what it’s worth, I found the sensation enjoyable. It’s also something you won’t experience unless you’re walking through your frozen produce section in chainmail. Keep in mind you wouldn’t be wearing these when it’s cold out anyway, especially not against bare skin.

And you can’t mark that against the shoes. There just isn’t a way to design a shoe thick enough to give you warm insulation and also thin enough to count as a true “barefoot” shoe. But personally? My feet are very tolerant to the cold, and I often make the mile run to my mailbox in winter while barefoot. So I still look forward to wearing these in winter anyway. The only time I definitely would not consider it is when it snows.


What does the texture feel like?

I don’t particularly enjoy soft things, but I’m ridiculously sensitive to textures. If my wife uses a nail file while I’m touching her arm, then I can feel it through her arm and I can’t stand it. I have to stop touching her. If anyone would have an issue with an irritating texture, it would be me. And I really enjoy the texture of chainmail.

Chainmail feels smooth because of the way the rings glide across each other. The inner diameter of each ring is 2.9mm, and each ring has four other rings looped through it. That means each ring can slide about half a millimeter through any of the other rings it’s connected to. And since the shape of each ring is circular, this sliding sensation is very smooth. A chainmail keychain attachment came included with my order, and if I rub it between my fingers, it feels like a tiny massage. I find myself doing this instead of jingling loose change or playing with my keys.


So unless you’re already picky about making sure everything you wear is soft, I can’t imagine it causing a problem. The shoe is shifting and moving the entire time it’s wrapped around your foot. I feel like Moses walking through the Red Sea, feeling the water part out of my way every time I come close to it. There’s nothing else quite like it.

To make the texture of chainmail feel rough, you have to stretch it tight. When there’s no ‘give’ left at all and the rings are unable to slide against each other, then it loses its smoothness. I can make this happen with the chainmail on the keychain piece by pulling it tight at both ends. But this never happens while wearing the shoes.

Compare the above photo to this one:


On your foot, the chainmail always rests more like the first photo than the second one.


How do they fit?

Several barefoot aficionados have worried that they look narrow around the toes. As mentioned in the last section, chainmail slips and slides over itself the whole time you’re wearing it. Trust me: if these look narrow, it’s because of the shape of the wearer’s foot. If I spread my toes, then the chainmail will expand to accommodate them instantly.

Just take a look at how flexible they are. I don’t even feel the shoes moving as they shift into this position:


The laces loop all the way around to the back of the shoe, where they connect to a hem cinch. This puts you in total control of how loose or tight the entire shoe is.


The cord is long, to make sure you can get it loose enough to remove. So when you get it as tight as you want, you slide the end of the cord between the hem cinch and the shoe to create a loop. This takes care of the excess length.


I would definitely recommend the Classics to anyone who has trouble fitting into shoes. In fact, I would recommend them on this basis alone. Unless you have talons coming out of your heel, these are going to fit you.

When I first put them on, I was trying to make them fit snug around my entire foot. This caused the chainmail around my ankle, where the material cinches, to dig into my foot. I solved the issue by putting on the included bottomless neoprene socks so that my ankle would be cushioned. It worked perfectly.

But when I told Jörg Peitzker (CEO) about it, he was disappointed. Apparently, they’re supposed to be kept loose enough to shift around. So I made myself get used to it. And as you can see, I’m comfortably running and sprinting and climbing trees without the socks in all of the pictures included here. It takes a little practice to figure out exactly how to cinch it to get the right fit. But once you do, it’s way faster than tying shoelaces, and much more secure.


The Sizing Process

GoSt-Barefoot has some of the best customer service I’ve ever experienced. Several times during my conversations with customer service about details of the shoes, Jörg popped in to respond to me personally.

When purchasing the shoes, GoSt asks you if you’d like them to double check your size for you. If you say yes, then customer service will contact you with instructions. They’ll ask you to let them know what sizes you wear in your other shoes. And they’ll have you trace the outline of your feet on a sheet of paper. Then they’ll adjust the size and model of shoe they send to you. You’ll get a notification of the change well before they’re shipped out.


The Look

Did you see the Disney movie Moana? You’ll remember Jemaine Clement’s performance as Tamatoa, the treasure-hoarding crab. The song Shiny was worth sitting through the rest of the movie for. Play it. Now. It came to mind when I wore these out in public. I did not expect these to be so shiny. Under indoor lighting, they look like what I expect chainmail to look like. But at the right angle under the sun, they glow. This is my new theme song.

Well, Tamatoa hasn’t always been this glam
I was a drab little crab once
Now I know I can be happy as a clam
Because I’m beautiful, baby

Did your granny say, listen to your heart
Be who you are on the inside?
I need three words to tear her argument apart:
Your granny lied!
I’d rather be


I was worried they would come off like a gimmick at first. Now I think of them like a durable leather jacket. Is it a fashion statement? Sure—but it’s also such a well-designed product that you could consider buying it even if it wasn’t. If more fashion statements came with this level of value… I wouldn’t have a problem with “fashion statements.”

Wearing Vibram toe-shoes out in public gets a mixed reception.

Some people are fascinated; I’ve even had people insist on taking pictures of me. Others ask questions like “Are those comfortable?” in a tone that indicates skepticism. So far, I’ve yet to get that kind of skepticism wearing these. Rather than asking, “Are those comfortable?” I’ve already had several people exclaim “Those look comfortable!”

So in my experiences so far, these seem to make a more appealing advertisement for barefoot running than Vibrams. I consider that a perk in and of itself. As many people should be running barefoot and wearing barefoot shoes as possible! If wearing these can make people interested in hearing me rant about why, then that’s awesome.



An Unexpected Use

I went out weed eating in them, and they completely prevented grass or dirt or twigs from getting on the top or sides of my feet. Nothing reached my foot. Everything landed on the shoes and then fell right off of the chainmail as it slinked around. Small rocks caused no real impact.

My yard has a steep hill I need to stand on for half the time I spend weed eating. The grip with the Paleos while standing on the wet, steeply angled hill was excellent. I had a few sprigs of wet grass, and plenty of moisture, on the bottom of my foot. I did want to cinch them up tighter on my ankle so they wouldn’t slink around on the slanted hill. But even leaving them loose, the grip felt more secure than with my Vibrams.

The only uncomfortable point was trying to climb the steep hill. Flexing my foot that far stretches the material to the point that there is no ‘give’ left between the links. This makes the material feel quite rough against my stretched Achilles. This could be solved with the saver socks, of course. I plan to continue wearing these for weed eating, but avoid climbing the steep hill. I wouldn’t recommend these without socks for rock climbing. But they are awesome for weed eating: my foot stays protected, the grip is superb, and I can still sink my bare feet right into the fresh grass.

By the way, when you hear people talking about “grounding” or “Earthing” by walking barefoot, this isn’t just hippie talk. There is preliminary scientific evidence suggesting that barefoot contact with the Earth may reduce inflammation, speed the healing of wounds, improve blood flow, reduce blood clotting, normalize EEG activity, and more. Some studies even report improved stress tolerance and reduced death rates in premature infants in intensive care units.



I was given a discounted product in exchange for my review. It didn’t cross my mind to pitch for this until I was halfway into a conversation with customer service, at which point I suggested it. I pointed out that I could introduce them to anywhere from 5,000 to 20,000 people who’d otherwise never hear about them because they don’t wear barefoot shoes. Jörg responded personally to tell me that my inquiry sounded “honest,” and we had a deal.

I plan to continue writing product reviews in the future. However, I’m only going to review a small number of my favorite things I’d rate 5 stars. Some of these reviews will be compensated in some way, while others won’t. I’ll be writing these for the same reason I write anything else: as a reference point for conversation. I don’t want to have to rehash all this every time it comes up. It’s much easier to just send someone a link for things I talk about often!


Feel free to share this wherever you’d like.


Introducing the Zombie Meditations Reading List (pt.1)

The reading list has been up now for a week or two, and I figured it would be worthwhile to provide a synopsis of each work and explain how each fits into what might loosely be considered the ‘ZM worldview’. This might be one of the most efficient ways of making clear just what that worldview actually is.

The first three books, The Way of Men and Becoming a Barbarian by Jack Donovan, sandwiched around Tribe by Sebastian Junger, all deal with the theme of tribalism—more specifically, with the idea that individualism has actually made modern life more anxious, more depressive, and more abrasive by corroding the tribal bonds that we’re wired for.

In his own words, Sebastian Junger made the decision to write Tribe after the following experience:

“I was at a remote outpost called Restrepo. … It was a 20-man position, everyone sleeping basically shoulder to shoulder in the dirt at first and then in these little hooches.

And it was very intimate, very close, very connected, emotionally connected experience. And after the deployment, which was — the deployment was hellish. And afterward the deployment, a lot of those guys missed the combat and they didn’t want to come home to America.

What is it about modern society that’s so repellent even to people that are from there? And my book “Tribe” is an attempt to answer that question.” [1]

If you resonated with the themes of Fight Club, you’ll appreciate these books for taking a more prolonged and serious look at them. As Chuck Palahniuk put it:

“We’re the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War’s a spiritual war … our Great Depression is our lives.” – Fight Club

Sebastian Junger is a rather more mainstream figure than Jack Donovan (more than one of his books became an international best seller, and he’s been Oscar-nominated for his work as a documentarian), and his book adds several historical and modern case examples to the picture. For example, he starts off with a discussion of the hundreds of early American settlers captured by American Indians who, upon release, preferred to join American Indian tribes rather than rejoin their own societies—and points out that even after the colonies instituted harsh punishments in the early 1600s to deter settlers from running away, hundreds continued to do so regardless.

“ … modern society is a miracle in a lot of ways, right? But as affluence goes up in a society, the suicide rate tends to go up, not down. As affluence goes up in a society, the depression rate goes up. When a crisis hits, then people’s psychological health starts to improve. After 9/11 in New York City — I live in New York — and after 9/11, the suicide rate went down in New York, not up. It went down. It improved. Violent crime went down. Murder went down. There was a sense that everyone needs each other.” [1]

Junger grows lazy right around the point where he starts discussing solutions, however. On the one hand, he says that the Internet “is doubly dangerous for all — and, again, for all of its miraculous capacity, not only does it not provide real community and real human connection. It gives you the illusion that it does, right? … what you need is to feel people, smell them, hear them, feel them around you. I mean, that’s the human connection that we evolved for, for hundreds of thousands of years. The Internet doesn’t provide that.” He also admits that “humans lose the ability to connect emotionally with people after a certain number, right at 150. So there is a limit to the number of people we can connect with and that we can feel capable of sacrificing ourselves for if need be.”

Yet, when he goes on to suggest actual solutions, he sounds like this: “I think the trick — and this country is in a very, very tricky place socially, economically, politically — I think the trick, if you want to be a functioning country, a nation, a viable nation, you have to define tribe to include the entire country, even people you disagree with.”

But talk to any member of the Armed Forces, and if the conversation goes on for long enough, nearly without exception they’ll make it very clear that the “Tribe” they were really inspired to fight for wasn’t ‘America,’ it was the men standing right next to them. And it’s beyond obvious that a country is made up of more than Junger’s limit of 150 people. What Junger proposes is every bit as distant from our tribal evolutionary nature as modern society is, and if we could just define that core nature out of being a problem so easily, we wouldn’t be in this mess in the first place.

That’s where Jack Donovan comes in with a far more rigorous discussion of what’s needed to replace what we have now. In an essay titled Tribalism is Not Humanitarianism in which he takes Junger to task directly, Donovan writes:

Junger thinks that while life in the modern West is safe and comfortable, some of the reasons returning soldiers find themselves yearning for the war, and why even civilians who have lived through conflict or disaster find themselves remembering their ordeals, “more fondly than weddings or tropical vacations,” may be because they miss being in a situation where their actions truly mattered, and people helped each other out. He writes:

“Humans don’t mind hardship, in fact they thrive on it; what they mind is not feeling necessary. Modern society has perfected the art of making people not feel necessary. It’s time for that to end.”

This is absolutely true, and I’ve written basically the same thing about the general ennui of men in modern post-feminist nations — where their natural roles are performed by corporations and government institutions, and they are reduced to mere sources of income and insemination.

Junger says people in modern society are missing a sense of tribe, and he’s right, but he stops short of either truly understanding or being willing to address the totality of what it means to be part of a tribe. The elements of tribalism that make it fundamentally incompatible with pluralism and globalism go unmentioned or unexamined in Tribe, and what remains is a handful of vague, disappointing bromides about not cheating and treating political opponents more fairly and helping each other so we don’t “lose our humanity.”

… Tribalism is defined as, “strong loyalty to one’s own tribe, party, or group.” Tribal belonging is exclusive and the camaraderie and generosity that Junger admires in tribal people is functional because the group has defined boundaries [my emphasis]. Tribal people are generally not wandering do-gooders. Tribal people help each other because helping each other means helping “us,” and they know who their “us” is. In a well-defined tribal group where people know or at least recognize each other as members, people who share voluntarily are socially rewarded and people who do not are punished or removed from the group. Reciprocity is also relatively immediate and recognizable. There is a return for showing that you are “on the team.”

Junger writes that, “It makes absolutely no sense to make sacrifices for a group that, itself, isn’t willing to make sacrifices for you.”

He recognizes that American tribal identity, to the extent that it ever existed, is collapsing internally when he warns that, “People who speak with contempt for one another will probably not remain united for long,” and remarks, “…the ultimate terrorist strategy would be to just leave the country alone.”

… The nationalistic pluralism of the early United States was a project started by men with similar religious beliefs (however ready they were to die over the details). Those men also shared a similar European cultural and racial heritage. They all looked alike, and while they came from different regions, each with its own quirks and sometimes even a different language, they were all the genetic and cultural heirs to a broader Western heritage that stretched back to the Classical period.

American pluralism was initially a pluralism within limits, but those limits were either so poorly defined or relied so heavily on implied assumptions that membership was progressively opened to include anyone from anywhere, of any race, of any sex, who believes absolutely anything.

… People in Western nations haven’t “somehow lost” the kinds of ancestral narratives and common cultures that unite people and encourage them to look out for each other and stick together. Those cultures and narratives have been systematically undermined by the institutions of Western governments, in favor of a “multicultural” approach that better serves the interests of globally oriented corporations.

That brings me to another book on the list: Ricardo Duchesne’s The Uniqueness of Western Civilization. Uniqueness examines the systematic undermining of these American cultural–historical narratives by academia in rigorous detail, and exposes the flaws in the alternative, revisionist, “multiculturalist” approach that has taken its place. Advocates of the latter camp—who predominate in American universities, and are celebrated in the national media—argue that the extraordinary wealth and power acquired by Western society came late, as a result of nothing other than luck, and will therefore necessarily only be temporary. Duchesne, in contrast, places heavy emphasis on the cultural and intellectual life of the populations the founders descended from—the Indo–European, horse–riding nomads of the Pontic–Caspian steppes, whose culture represented a uniquely aggressive combination of the libertarian and aristocratic spirits.

In April of 2016, the students of Stanford University voted 1,992 to 347 against an effort to restore the college requirement for courses in Western Civilization. In a hostile article in the Stanford Daily, a student writing under the name ‘Erika Lynn Abigail Persephone Joanna Kreeger’ complains that “Stanford is already a four-year academic exercise in Western Civilizations … In her first lecture of the macroeconomics section of Econ 1, the professor rhetorically asked why Africa — yes, Africa — was so poor, and answered … [by failing to] mention colonialism, occupation and capitalism as driving forces in the creation of poverty…,” concluding that “… a Western Civ requirement would necessitate that our education be centered on upholding white supremacy, capitalism and colonialism, and all other oppressive systems that flow from Western civilizations….” and suggesting instead that the University should require “courses that will make students question whether they should be the ones to go forward and make changes in the world…”

It would be interesting to see students with opinions like Ms. Kreeger’s take a look at the actual correlation between Western colonialism and African wealth. (See here.) The per capita GDP in Haiti—which achieved its independence from France in 1804—was about $1700 in 2013. Meanwhile, blacks in South Africa, the most colonized part of Africa by far, have a per capita GDP of about $5800. (See here.) There’s also really no correlation between the wealth of nations and how deeply those nations engaged in colonization. (See here and here.) The prevalence of these self–deprecating condemnations of Western Civilization as bearing blame for — yes, literally — the entire world’s ills is exactly why a corrective like Duchesne’s is so necessary.

War! What is It Good For?: Conflict and the Progress of Civilization from Primates to Robots by Ian Morris, Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth by Peter Turchin, and A Country Made By War: From the Revolution to Vietnam—The Story of America’s Rise to Power by Geoffrey Perret all address the same theme through the specific lens of war.

War! What is it Good For? takes a very broad historical approach to the subject, starting all the way back in pre–history and showing that at every single step towards civilization, people have only ever banded together in larger groups (whether individuals into tribes, or tribes into states, or states into countries) because they had to do so in order to overpower or defend themselves against another, outside, larger group. Referencing Steven Pinker’s research in The Better Angels of Our Nature, which shows that the world is actually becoming less violent over time (you were much more likely to be clubbed on the head by a neighbor or invaded by a neighboring tribe in a tribal society than you are today under a state government to be killed by an opposing state in a war or murdered by a neighbor), he expresses his thesis in an aphorism: “War made the state, and the state made peace.” In other words, just as tribes suppress violence between tribesmen because they need to be internally cooperative in order to successfully defend themselves against outside tribes, so states suppress violence between their citizens because they need to be internally cooperative in order to successfully defend themselves against outside states.

To put it another way, the need for violence (in certain contexts) is actually the only reason human beings have ever historically tried to suppress violence (in certain other contexts). Morris also explains that the development of agriculture was as central to the evolution of Western society as it was because it raised the stakes of war: with the development of a system of food production which tethered people to fixed resource bases, suddenly war meant that you could capture territory—and that outsiders were interested in capturing yours. Thus, the need for agriculturalists to band together in groups to protect their collective properties became instrumental in the evolution of the political structure of the West.

Peter Turchin’s Ultrasociety argues exactly the same point, but with a different spin that makes it entirely worth reading both books. Discerning readers may have noticed that there seems to be a tension between the ideas discussed in the first part of this essay and the apparent implication of Ian Morris’ work that the evolution of tribal societies into states is inevitable. Peter Turchin brings this tension full–circle, in his own words:

“Here’s how war serves to weed out societies that “go bad.” When discipline, imposed by the need to survive conflict, gets relaxed, societies lose their ability to cooperate. A reactionary catchphrase of the 1970s used to go, “what this generation needs is a war,” a deplorable sentiment but one that in terms of cultural evolution might sometimes have a germ of cold logic.

At any rate, there is a pattern that we see recurring throughout history, when a successful empire expands its borders so far that it becomes the biggest kid on the block. When survival is no longer at stake, selfish elites and other special interest groups capture the political agenda. The spirit that “we are all in the same boat” disappears and is replaced by a “winner take all” mentality. …

Beyond a certain point a formerly great empire becomes so dysfunctional that smaller, more cohesive neighbors begin tearing it apart. Eventually the capacity for cooperation declines to such a low level that barbarians can strike at the very heart of the empire without encountering significant resistance. But barbarians at the gate are not the real cause of imperial collapse. They are a consequence of the failure to sustain social cooperation. As the British historian Arnold Toynbee said, great civilisations are not murdered – they die by suicide.”

Rounding out the “war” section of the “world history” section of the list, “A Country Made By War” is more a mytho–poetic narrative of America’s conflicts than an argument for a specific thesis—though it does emphasize the ways in which war has impacted daily civilian life: for example, it was World War I that established the trend of men’s use of wristwatches and safety razors. From an entirely different angle, A Country Made By War helps round out an understanding of how deeply war impacts our daily lives.

Next under the heading of “history”: A Farewell to Alms, in which economic historian Gregory Clark takes a close look at one of the most significant turning points in human history (and of course, again, Western society): the Industrial Revolution. Why did it happen when it happened?

The conventional arguments have always focused on institutional economic conditions, with claims that markets became freer, property rights became more secure, and so on and so forth. But Clark shows that none of this is true—in fact, markets were even freer before the period of time in which the Industrial Revolution took place.

But what did happen is that “Thrift, prudence, negotiation and hard work [became] values for communities that… [had been] spendthrift, impulsive, violent and leisure loving.” In other words, the traits of the upper class spread to the lower classes. As a result, people began to save instead of spend—and that is why capital accumulation finally began to exert its exponential increases on human productivity.

However, this shift in values didn’t happen due to cultural transmission—it didn’t happen because the rich began a campaign of propaganda directed at the poor, or because the poor just decided to start being more prudent. It happened because the offspring of the upper classes were replacing the offspring of the lower classes in the population.

What difference between Europe, India, and China explains why the Industrial Revolution happened in the former instead of either of the latter two? According to Clark’s extensive research, the cause was the fact that Europe was much more Darwinian—the upper classes weren’t replacing the lower classes anywhere near as quickly in either India or China over the same period of time.

Since we know that several traits relevant to the ability to delay gratification in pursuit of one’s goals and act with foresight—like conscientiousness, or the tendency to procrastinate, or impulsiveness—are all heavily influenced by genes, the conclusion we end up with is that we literally owe the greatest explosion of wealth and productivity that mankind has ever seen to what essentially amounts to a process of eugenics. 

Find the claim distasteful if you want, but the evidence Clark presents gives incredibly strong reason to believe that it is true. Clark’s thesis is harsh—it suggests that if not for what was essentially a lot of innocent people dying off, the First World wouldn’t have become the First World. But Clark’s case is overwhelming. His follow–up, The Son Also Rises: Surnames and the History of Social Mobility (not included in the list), examines two points: first, he shows just how little social mobility there actually is in any society on Earth—it’s much less than most of us would have expected. Second, he shows that the tendency of the descendants of a particular familial lineage to regress to a ‘mean’ of social standing peculiar to that family, despite temporary dips and jumps in each generation, lines up exactly with what we would predict on the assumption that genetic inheritance was responsible for the larger bulk of this phenomenon. He even tracks the heritability of social status over the course of the Maoist revolution in China and the transition from the leftist Allende government in Chile to the right–wing Pinochet administration, and finds that none of these upheavals had any impact on it at all. That’s not exactly the most exciting track record for anyone who thinks they can make different groups of people become more equal through social engineering.

Consciousness (IX) — Freedom is a State of Mind (On Benjamin Libet and Sam Harris)

Bad philosophy can corrupt conclusions that are drawn seemingly straightforwardly out of scientific experiments. “Scientism” is one of those words that functions sort of like “cuck” or “RepubliKKKan” or “Christfag” in that using it often does more to signal allegiance to a group than it does to help progress conversations towards truth. In this essay, I want to give a very clear definition to the word “scientism”, followed by a very clear demonstration of a place where it does exist.

(Note: The theme of “scientism” was recently introduced in “Breaking (Down) Bad (Philosophy of Science)”).

“Scientism” is when someone: (1) conducts a scientific experiment, producing empirical results; (2) strains that empirical data through a lens of interpretation – a philosophical lens that requires philosophical defense or refutation – to produce what is ultimately more a philosophical claim than a purely scientific one; and then (3) pretends that this resulting claim involves no philosophy at all, and thus needs no philosophical defense, but has the full backing weight of the authority of Science Itself behind it, and is thus beyond any further argument. In this way, the fallacy of “scientism” allows philosophical premises to get smuggled in past security, and then pretends that the truth of those premises was thereby proven.

Nowhere do we find a clearer demonstration of this fallacy than in the experiments which are claimed to have “empirically” disproved the possibility of metaphysical free will.

Now, the supposedly ‘scientific’ debunker of free will may have valid philosophical reasons for rejecting the possibility of metaphysical free will’s existence. This possibility I will leave aside for now—the target of my argument is the false notion that his science, in and of itself, proves his philosophical conclusions true. The problem is that this only ever appears to him to be the case because he has interpreted his empirical findings through the filter of some philosophical assumptions, rather than others, in the first place—without owning up to it. 

In other words, what makes someone who commits the fallacy of scientism a fraud is that he first claims to be able to convert water into wine, and then, when asked to demonstrate this magical ability, quietly pours wine instead of water into his water bottles in the first place (and is oblivious to the fact that he is doing so). When you pour wine into water bottles, it’s no surprise that after your demonstration of a magical chant, you end up with water bottles full of wine—and it’s no surprise that when you filter a scientific finding through philosophical assumptions, you end up, after your argument is finished, with something that “justifies” those philosophical assumptions’ truth. In neither case has anything actually been “demonstrated.”

In short, the only reason anyone can think that any “scientific” experiments so far have ever “scientifically” debunked the possibility of free will is because they actually have philosophical reasons for believing free will doesn’t exist which they aren’t owning up to honestly. These reasons may or may not be ultimately defensible, but if someone is trying to tell us that a scientific experiment has settled the question, they are simply smuggling their philosophy in past security illegitimately. In truth, the “scientific” experiments that have been conducted supposedly on the question of free will add nothing to the philosophical debate, and they have done more to distract us from the central questions than anything.

 _______ ~.::[༒]::.~ _______

Before continuing, I need to establish what I mean when I talk about “free will.” Specifically, I need to make it clear that despite the protests that may come from some, I am going to talk about the sort of “free will” that says that right up until the moment in which I make a free conscious decision, nothing in the previous physical state of the Universe determines what my choice is going to be; and at the moment in which I make my choice, I determine what that decision will be.

Whether or not this is the sort of “free will” that most of us feel as if we experience is an empirical question. And the question of whether this is how our conscious experiences feel is separate from the question of whether we actually do have this type of freedom.

The term for views which admit that this is the kind of freedom that we feel as if we have is “incompatibilism”. “Compatibilists”, by contrast, argue that the only kind of “freedom” that we either do want, or should want, is the kind of “freedom” involved when I choose to do what I want to do because I want to do it; and not, say, because someone is holding a gun to my head—even if my decision and my desire were absolutely set in stone and determined all the way back at the moment of the Big Bang, like ever so many falling dominoes.

While compatibilism basically names a single homogenous position on the question of free will, “incompatibilists” are split into two enemy camps: those who believe that we do have this significant kind of freedom (called “libertarians”), and those who believe we do not (called “hard determinists”).

In my view, hard determinists are at least honest about the fact that their claim has reason to unsettle many ordinary people (we do feel as if we have the power to make determining choices that are not, themselves, determined, and something about how we see what it means to be human really would be disturbed if this were all just one big illusion), and they are willing to step up to the plate and argue that the consequences are worth it. “Compatibilists,” by contrast, are simply hard determinists who try to weasel out of owning up to and defending those consequences by denying, against the protests of anyone who claims otherwise, that anyone cares about the kind of freedom that would come from being able to make “metaphysically free” decisions at all.

The very fact that libertarians and hard determinists exist is all it actually takes to prove the compatibilists wrong: how can you claim that nobody really cares about the libertarian sort of free will when both people who agree with your underlying determinism and people who don’t are telling you that, as a matter of fact, they do care about it?

If that straightforward reasoning wasn’t enough, empirical investigations seem to have settled the question of whether this is how people feel once and for all. In the 2010 study Is Belief in Free Will a Cultural Universal?, Sarkissian and colleagues examined ordinary people’s “intuitions about free will and moral responsibility in subjects from the United States, Hong Kong, India and Colombia.” Their results proved conclusively that outside of the isolated halls of philosophy departments, the “compatibilist” take that no one cares whether their choices are determined or not is not the norm: “The results revealed a striking degree of cross–cultural convergence. In all four cultural groups, the majority of participants said that (a) our universe is indeterministic and (b) moral responsibility is not compatible with determinism….” Sarkissian concludes that this research reveals “fundamental truth(s) about the way people think about human freedom.”

Again, a hard determinist can describe the way that our conscious experience of decision–making feels just as clearly, accurately, and honestly as any libertarian, even as he turns around to deny that we actually have the kind of freedom we feel as if we have. In Free Will and Consciousness: A Determinist Account of the Illusion of Free Will, Gregg D. Caruso writes: “[C]ompatibilists cannot simply neglect or dismiss the nature of agentive experience. … [O]ur phenomenology is rather definitive. From a first–person point of view, we feel as though we are self–determining agents who are capable of acting counter–causally. … (W)e all experience, as Galen Strawson puts it, a sense of “radical, absolute, buckstopping up–to–me–ness in choice and actions”. …  When I perform a voluntary act, like reaching out to pick up my coffee mug, I feel as though it is I, myself, that causes the motion. We feel as though we are self–moving beings that are causally undetermined by antecedent events.”

So why does Caruso conclude that things cannot be as they seem? Quoting from a review, the problem with belief in free will is that it is “committed to a dualist picture of the self. … [And it, therefore,] involves a violation of physical causal closure (pp. 29-42).”

In other words, the argument that free will is impossible rests on the claim that defending a dualistic view of consciousness in general is impossible. Notice that this is ultimately a philosophical argument, and not one that is supposed to be proven as the direct conclusion of a scientific study. In fact, Caruso begins addressing these considerations as early as page 15, while he doesn’t begin to mention the scientific studies which are supposed to have addressed the subject until somewhere past page 100. Caruso’s account is one in which someone cannot believe in free will “without embarrassment” because believing in it would require “giving up … atomistic physicalism”.

As usual, the advocates of “atomistic physicalism” make no attempt to shoulder the burden of demonstrating that the hypothesis that human conscious experience is composed of nothing other than blind atoms which themselves lack conscious experience and act blindly only as a passive response to inert causes could even conceivably be capable of allowing human conscious experience to be what it is—to put it in my terms, their claim is the equivalent of claiming one can draw a three–dimensional figure on a two–dimensional board. Instead, they are content to just demand that one can’t possibly deny that hypothesis “without embarrassment” and then chop off anything about the nature of our experiences which that hypothesis isn’t capable of explaining—no matter how debased and absurd the resulting picture of what it means to be a human being becomes.

Yet, as we’ve seen, the things we would have to chop off to make that hypothesis work end up including everything—because conscious experience quite simply couldn’t exist in the way that it irrefutably does if the “atomistic physicalist” were correct that the Universe at its root is made out of blind particles and forces, and nothing else, in exactly the same way that three–dimensional objects couldn’t exist if the world were a two–dimensional sheet.

My contention is that the only sane position one can hold is that consciousness itself is one of the things that the Universe is composed of “at its root” as well, and that we are free to posit that consciousness simply possesses properties like experientiality and intentionality as basic elements of what consciousness is, in exactly the same way that we are free to posit that electrons simply possess properties like spin and charge as basic elements of what an electron is—with no need of further explanation. All supposed ‘explanations’, after all, must stop somewhere. On the contrary, it is the “atomistic physicalist” who should be embarrassed to put forward the claim that one could even conceivably get qualitative subjective experiences, or intentionality, out of blind building blocks wholly lacking in either quality.

The existence of free will, unlike these, can at least coherently be denied in theory. But the arguments for throwing out the possibility of free will are identical to the arguments for throwing out intentionality, or subjective experience—and the existence of these features of consciously experienced reality can’t be denied without blatant incoherency. Thus, the arguments used to deny the possibility of the existence of free will fail even if they do not fail in the specific case of free will itself—and there remains no absolutist reason to deny the possibility that metaphysical free will could exist after all. The only remaining question, then, is whether further considerations happen to rule out the existence of human free will specifically.

 _______ ~.::[༒]::.~ _______

The story of the supposedly “scientific” refutation of the possibility of free will begins in the 1980s with a series of studies conducted by Benjamin Libet. Though now more than three decades old, these experiments still constitute the bulk of the “scientific” case against the plausibility of free will.

In Sam Harris’ 2012 book Free Will, he writes:

“The physiologist Benjamin Libet famously used EEG to show that activity in the brain’s motor cortex can be detected some 300 milliseconds before a person feels that he has decided to move. Another lab extended this work using functional magnetic resonance imaging (fMRI): Subjects were asked to press one of two buttons while watching a “clock” composed of a random sequence of letters appearing on the screen. They reported which letter was visible at the moment they decided to press one button or the other. . . . One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You then become conscious of this “decision” and believe that you are in the process of making it.”

Daniel Wegner is among the social psychologists best known for continuing this line of experiments. In his discussion of Libet’s work in the 2002 The Illusion of Conscious Will, he explains the picture of the conscious mind’s role in reality that he still believes the Libet experiments are able to prove:

“Does the compass steer the ship? … [not] in any physical sense. The needle is just gliding around in the compass housing, doing no actual steering at all. It is thus tempting to relegate the little magnetic pointer to the class of epiphenomena — things that don’t really matter in determining where the ship will go. Conscious will is the mind’s compass.”

In other words, determinists who agree with Harris and Wegner believe that preceding unconscious brain events cause both our future behaviors and our later, illusory feeling of “choosing” those behaviors. It isn’t just that our experiences of choice are determined; it’s that they’re completely superfluous to the chain of events that leads to the actual execution of action—to them, the brain activity that can be spotted 300ms before you “decide” to flick your wrist in Libet’s experiment would cause you to flick your wrist even if it didn’t cause you to feel like you were “deciding” to flick your wrist as an incidental step along the path toward that destination. To them, it isn’t just that our will isn’t “free” when it causes our actions—it’s that our will doesn’t cause our actions at all.

_______ ~.::[༒]::.~ _______

If anyone should allow himself to really sink down into reinterpreting his moment–by–moment experiences in light of this idea, he will soon realize that it is an excellent recipe for producing the pathological state known as depersonalization. Indeed, according to these people, what a dysfunctional person in the depersonalized state experiences is actually a far closer reflection of reality than what the rest of us experience all the rest of the time. I think we should keep it very clearly in mind that what is at stake here is whether or not science has proven that a pathological state which tends to be comorbid with other pathologies like major depression and schizophrenia reveals fundamental truths about the reality of human consciousness that the rest of us live in illusory denial of.

To repeat the explanation in my words, the Libet–type experiments first have a subject sit down in front of a clock, while hooked up to an EEG (or fMRI). Then, they explicitly instruct that subject to perform some simple motor activity at random. Absolutely nothing is at stake in the decision; there is no goal to achieve, there are no values or variables to weigh or choose between, and no number of button presses or wrist–flicks is too high or too low. There is no way to “win,” there is no way to “fail,” and there are no alternative outcomes in the experiment for the subject to pick between. With absolutely no goals or constraints, subjects in these experiments are told to sit back and perform a perfectly purposeless motion at random for which they have absolutely no reason in principle to choose one moment over another.

Stop right there.

Keep this fact very clearly in mind: we’re using this study to evaluate free will.

Now, ask yourself: does this sort of scenario even seem relevant at all to free will?

Let’s get back into the first–person position on these experiments.

If you agree to join in Libet’s experiment, what are you going to feel?

Imagine I have just told you to repeat Libet’s experiment—that I’ve just said to you: “I want you to sit back, and whenever you feel like it, I want you to flip your wrist over. Then, I want you to do it again. And keep doing it until I tell you to stop.”

What is that going to feel like?

It is immediately obvious that this does not even feel like an exercise of free will.

In fact, it may have felt like an exercise of free will to decide whether or not to join Libet’s experiment at all, or else spend my day doing something else instead. But once I’ve sat down and consented to follow Libet’s instructions, what does my mental activity consist of?

It consists, primarily, of waiting. For what? An urge to move my hand.

To do what? To appear.

In other words, when I sit down and consent to follow Libet’s instructions, I have already made the conscious decision to place myself into a specific, and very peculiar, state of consciousness. I have cleared my mind. I am focusing all of my conscious attention onto my hand. And it is as if I’ve consciously chosen to initiate an automated “program” which orders my subconscious to generate the sensation of an urge to move—at random—while simultaneously holding the intention to act on that sensation, after it appears. I have made the decision to set myself into this state of consciousness, and I am actively holding myself in it for the purposes of this experiment.

Is it not precisely part of my very experience itself that in a case like this, a sensation that feels like a spontaneous “urge” does in fact appear before I make the decision to move?

Of course it is.

So is it any surprise at all to find that brain activity of some sort can be found flickering prior to the time at which I consciously register making the decision to flip my wrist? I don’t think it is. In fact, I think generalizing from a case like this to the conclusion that our decisions in general are determined by subconscious processes before we ever feel as if we’re deciding to make them is downright goddamn idiotic. Sheer introspection alone leads us to expect that we would see brain activity appear prior to our decision to flip our wrists over, because participating in Libet’s experiment would feel exactly like placing myself in the conscious state of waiting for a particular kind of sensation to surface into my conscious awareness before acting.

Libet’s experiment would feel like that. Ordinary exercises of what we feel to be our free will to decide do not. So the simplest conceptual analysis of what would happen in an experiment like the one Libet designed is already enough to establish that these experiments quite simply have no bearing on the matter of free will at all.

So here is the crux: when the Libet study’s interpreters decide to label the preceding brain activity as “the subject’s soon–to–be ‘consciously willed’ decision, already deterministically turning into a ‘decision’ under the surface, outside of the subject’s conscious mind” rather than “the urge the subject has consciously ordered his subconscious to randomly generate, appearing exactly as cued,” that is not science. That is, in fact, philosophy, in that it makes a call about how to bridge the subjective aspects of our first–person experience with the outward results of third–person observation, a gap which cannot be crossed by empirical investigation unaided.

And not only is it a philosophical call—it’s a bad one. 

But the fallacy of scientism goes so unchallenged by the modern mind that, for the most part, few people commenting on the Libet experiments have noticed something that should have been this simple, basic, and obvious a hell of a long time ago.

 _______ ~.::[༒]::.~ _______

There are plenty of other disqualifying technical problems with Libet’s experiment, besides. For example, Libet was able to determine that the “readiness potential” preceded the decision to act because he programmed a computer to record the preceding few seconds of brain activity in response to a subject’s muscle movement. In other words, from the very first moment, he never had a damn clue how often “readiness potentials” appeared without triggering any muscle movement at all, because Libet did not keep a continuous record of his subjects’ brain activity that could prove a “readiness potential” always produced movement; the activity was only recorded in retrospect, when the subject actually moved, and at no other time.

Further studies have made it clear that this was, in fact, a significant problem for Libet’s conclusions: in 2015, a team led by Prof. Dr. John-Dylan Haynes created a video game in which a subject faced off against a computer enemy programmed to react in advance to the human player’s intention to move, as indicated by his “readiness potentials” (“The point of no return in vetoing self-initiated movements”). If “readiness potentials” were deterministic, the computer would always be able to predict the human player’s movements in advance and would therefore always win. If they weren’t, the human player would be able to adapt to the computer’s pre–emptive response by changing his plan mid–course.

And, in fact, that was what the team found.

“A person’s decisions are not at the mercy of unconscious and early brain waves. They are able to actively intervene in the decision-making process and interrupt a movement,” says Prof. Haynes. “Previously people have used the preparatory brain signals to argue against free will. Our study now shows that the freedom is much less limited than previously thought.”

Here’s another problem: in the Libet experiments, the “readiness potential” appeared 550ms (just over half a second) before muscle movement. But if you tell someone to perform a physical action in reaction to a sound, it takes only 230ms for them to decide and act in response to the cue (per Haggard and Magno 1999). We therefore know that conscious decisions can be made in under a quarter of a second. And if conscious decisions can be made in under a quarter of a second, why should we assume that something happening a whole half–second before a decision in other cases is the neurological determinant of the decision itself?

We shouldn’t.

But what’s interesting about these problems is that none of them would have been necessary to explore in the first place if anyone had simply paid closer attention to analyzing Libet’s study design conceptually: a simple moment of clarification of the most basic philosophical issues at play in an experiment designed like this could have saved us a lot of wasted time. It would have been clear from the outset what was probably going on.

 _______ ~.::[༒]::.~ _______

In the decades since Libet’s original work, has better evidence come along to support his conclusions? Sam Harris immediately follows his statement about Libet with a description of “another lab [that] extended this work using functional magnetic resonance imaging (fMRI)….” The lab he refers to is Chun Siong Soon’s[1], and the summary of the 2008 study published in Nature Neuroscience can be seen here.

While the activity measured in this study was still, as before, purposeless, with no goals or constraints, the study did change one substantial thing. According to the way Soon et al. summarized their own research, in a summary paper titled “Unconscious Determinants of Free Decisions in the Brain”:

“There has been a long controversy as to whether subjectively ‘free’ decisions are determined by brain activity ahead of time. We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 s before it enters awareness.”

The actual point this new study was supposed to add to the already–existing debate was that it was supposed to establish the capacity of these scientific measurements to predict not just the general timing of a single choice, but which of two (count them, two!) equally meaningless options the subject would choose. And the conclusions we are supposed to draw from this are, again, wide–reaching. Returning to the summary from Harris:

“One fact now seems indisputable: Some moments before you are aware of what you will do next—a time in which you subjectively appear to have complete freedom to behave however you please—your brain has already determined what you will do. You (only) then become conscious of this “decision” and believe (falsely) that you (“you”) are in the process of making it.”

What do the particular new facts drawn by this study really add to the picture?

There is one thing that neither Harris’ reference to this study, nor Soon et al.’s own summary of it in Nature Neuroscience, will clearly tell you. Quoting Alfred Mele:

“ … the predictions are accurate only 60 percent of the time. Using a coin, I can predict with 50–percent accuracy which button a participant will press next. And if the person agrees not to press a button for a minute (or an hour), I can make my predictions a minute (or an hour) in advance. I come out 10 points worse in accuracy, but I win big in terms of time. So what is indicated by the neural activity that Soon and colleagues measured? My money is on a slight unconscious bias toward a particular button—a bias that may give the participant about a 60–percent chance of pressing that button next.”

Notably, this 60–percent figure is a drop from a predictive accuracy of 80–90% in cases where what is being predicted is the moment chosen to commit a single predefined action, like Libet’s wrist–rotating. Even with the increased understanding of neurophysiology developed over the past handful of decades, and even with refined neuroimaging techniques, the predictive power of the “readiness potential” in this study drops by 20 to 30 percentage points, down to little over chance, with even a slight shift of the experiment’s design towards something that comes just marginally closer to resembling the kinds of decisions in which we actually deliberate, and feel as if we deliberate freely, over a choice. (Remember, you’d have about 50% accuracy if you were just guessing, so 60% is even less impressive than it sounds at a glance: that 60% should be compared against a baseline of 50%, not of zero.)
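Mele’s coin analogy can be made concrete with a small simulation (my own sketch, not code from any of these studies; the function name and trial counts are illustrative): a predictor that simply always guesses the weakly favored of two buttons reproduces exactly the kind of accuracy figure reported.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def prediction_accuracy(n_trials: int, bias: float = 0.5) -> float:
    """Simulate predicting which of two buttons a participant presses.

    The participant presses the left button with probability `bias`
    (assumed >= 0.5); the predictor always guesses that favored button.
    With bias=0.5 this is Mele's coin flip, yielding roughly 50% accuracy;
    a slight unconscious lean of bias=0.6 already yields roughly the 60%
    accuracy reported for the fMRI predictions.
    """
    hits = 0
    for _ in range(n_trials):
        press = "L" if random.random() < bias else "R"
        hits += press == "L"  # the predictor's guess is always "L"
    return hits / n_trials

print(round(prediction_accuracy(100_000, bias=0.5), 2))  # roughly 0.5: chance
print(round(prediction_accuracy(100_000, bias=0.6), 2))  # roughly 0.6
```

Nothing about the predictor here peers into anyone’s decision-making; it only exploits a small statistical bias, which is all a 60%-accurate prediction requires.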

But yet again, even if the predictive value of the “readiness potential” in these expanded cases were 100%, why should even that have concerned me? When I go into Soon’s laboratory, I am walking in deliberately setting the conscious intention in advance to sit back and think about nothing other than letting myself push either one or the other button at random. Absolutely nothing weighs on the decision; I am by definition putting myself in the peculiar conscious state of waiting to act on a random urge which I have no reason for caring about. Even with this meaningless “choice” between two absolutely meaningless options added to the scenario, it doesn’t even feel like the kind of deliberation in which I feel as though I possess the power to do otherwise. In the case of Soon’s experiment, just like Libet’s, participating would feel exactly like waiting for some sensation to rise up into conscious awareness out of my subconscious, at which point I have already set the intention to act on it when—meaning after—it appears.

So even a study design like Soon’s would have nothing to say about free will even if it found that it could predict my decision 100% of the time (because perhaps all the brain scans are identifying is the appearance of the impulse–sensation that I’ve walked into Soon’s lab agreeing to sit and wait for). But the meager results of these studies turn out to be even less impressive than that. By far.

 _______ ~.::[༒]::.~ _______

As I said in the opening chapter of this series,

At these stages of argument, it should not be thought that I am ever arguing that the reason we should reject a physicalist account is just because it dehumanizes us (in the sense of “making us feel dehumanized,” or at least being something which arguably should). Rather, if a physicalist account should be rejected, it should be rejected first and foremost because it either explicitly denies, or else (by failing to be able to account for them) implicitly denies, some parts of what we really, truly, in fact and in reality, actually are. However, an intrinsically connected component of this picture is that if an account does explicitly or implicitly deny some aspect of what we really are, then believing an objectively impoverished account of the world may lend itself to a subjectively impoverished internal or relational life.

Believing in the claim of solipsism, for example (i.e., that my subjective experience is the only one that truly exists in the world, whereas everyone else is something like a figment of my imagination, lacking actual internal experiences completely, so that life is rather like a computer game in which everyone else is computer–generated while I am the only actual player), would, first and foremost, be a philosophical mistake. However, we would be justified in opposing that mistake both because of the objective, abstract errors that it commits and, simultaneously, because of the internal, emotional, and social consequences that would likely result from someone’s believing it: the two are, in other words, not necessarily separable. Solipsism would have these consequences because of its mistakes, and those mistakes are important because of the consequences. Where arguments for the socially or psychologically detrimental consequences of physicalist accounts are made, they should not be mistaken for emotional appeals to consequences which simply argue that we must believe these accounts are false because we shouldn’t want them to be true; we have (so I will claim) all the demonstrable reasons for believing them false we should need. But if accounts of the world and the self are factually impoverished, they will arguably lead in consequence to an impoverished relationship to the world, the self, and others, and we can oppose them for both reasons at the same time.

The point extends into our present discussion of free will.

Not only is it the case, as previously noted, that the majority of respondents from the United States to India to Colombia believe that “moral responsibility is not compatible with determinism”; it has also been recorded repeatedly that altering someone’s belief in free will impacts their moral behavior.

In 2008, Kathleen D. Vohs and Jonathan W. Schooler found that prompting participants with a passage from The Astonishing Hypothesis (in which the researcher Francis Crick writes, “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Who you are is nothing but a pack of neurons.”) made them significantly more likely to cheat on a math test.

In their first experiment, “cheating” involved failure to press the space bar on a keyboard at an appropriate time—so in order to rule out the possibility that disbelief in free will simply made participants more passive in general, they conducted a second experiment in which “cheating” would involve active behavior (namely, overpaying themselves for providing correct answers to a multiple choice test). Going even further, the second experiment also tested the impacts of increasing participants’ belief in free will. And once again, those whose belief in free will was strengthened cheated less, while those whose belief in free will was undermined cheated more.

In 2009, Roy F. Baumeister and colleagues expanded this line of research further. In a first experiment, participants were presented with hypothetical scenarios and asked how they felt about helping individuals described as being in need—and those who were prompted with disbelief in free will were significantly less likely to help. The second experiment offered participants a description of a fellow student whose parents had just been killed in a car accident, and then presented them with an actual opportunity to volunteer to help—those who were prompted with disbelief in free will were still significantly less likely to volunteer here even when the situation actually became real.

Finally, participants in the third experiment were told they were helping the experimenter prepare a taste test to be consumed by an anonymous stranger while being given a list of foods the stranger liked and disliked. This list explained that the stranger hated hot foods most of all—and participants, after being sorted into groups prompted with various beliefs about free will, were judged according to how much hot sauce they poured onto the stranger’s crackers. Participants who were told that free will doesn’t exist before the experiment gave the taste–tester twice as much hot sauce as those who read passages supporting the ideas of free choice and moral responsibility.

Jonathan Schooler, writing in Free Will and Consciousness: How Might They Work? explains:

“One possibility is that reflecting on the notion that free will does not exist is a depressing activity, and that the results are simply the consequence of increased negative affect. However, both Vohs and Schooler and Baumeister et al. assessed mood and found no impact of the anti–free will statements on mood, and no relationship between mood and prosocial behavior. … Baumeister et al. argue that the absence of an impact of anti–free will sentiments on participants’ reported accountability and personal agency argues against a role of either of these constructs in mediating the relationship between endorsing anti–free will statements and prosocial behavior. … [But] just as priming achievement–oriented goals can influence participants’ tacit sense of achievement without them explicitly realizing it (Bargh, 2005), so too might discouraging a belief in free will tacitly minimize individuals’ sense of accountability or agency, without people explicitly realizing this change.”

And so, as an empirical matter of fact, this is what happens when you give people an ideological license to loosen their sense of accountability and agency: they find excuses to be assholes.

“ … We are always ready to take refuge in a belief in determinism if [our] freedom weighs upon us or if we need an excuse.” — Jean–Paul Sartre

 _______ ~.::[༒]::.~ _______

A bizarre series of intellectual double standards underlies the equivalent attempt to defend the value of spreading belief in determinism. Determinists have long rested on the supposed immorality of retribution to stake their claim that spreading belief in determinism should help create a more “ethical” world. As the story goes, we only want to see someone who commits a moral offense suffer for the sake of suffering because we believe that they “freely chose” to act as they did. Suppose someone commits a public act of violent rape: if we assume that he was beyond all capacity to control his impulses, we’ll want to help him not do it again rather than punish him. Thus, many liberals hope that spreading belief in determinism would help create a public consensus for shifting the motivations on which the criminal justice system is centered away from retribution and towards rehabilitation.

But why should that follow? If the violent criminal is without any deep moral form of guilt for his act because he has no deep moral responsibility for anything at all, then I too am without any deep moral form of guilt when I desire to see him violently punished for it—I hold no deep moral responsibilities for my actions or desires either, after all, so why shouldn’t I “excuse” myself for wanting to see him severely punished in just exactly the way that I “excuse” him for his act of rape? The determinist can give no reason—or at least not one that actually requires belief in metaphysical determinism.

In The Atheist’s Guide to Reality, Alex Rosenberg argues that “the denial of free will is bound to make the consistent thinker sympathetic to a left–wing, egalitarian agenda about the treatment of criminals and of billionaires.” But why should it do that? Rosenberg naively thinks that if we conclude that criminals do not deserve to suffer and that billionaires do not deserve to reap the benefits of wealth (because there is no such thing as “deserving” in the moral sense, there being no such thing as free will), then it follows that we will want to be nice to criminals and to redistribute the wealth of billionaires.

What’s overlooked in this is that if there is no such thing as “deserving”, then criminals do not “deserve” to remain free in the society they’ve committed harms against any more than they “deserve” to be punished by it. It’s not as if the fact that they don’t “deserve” to be punished entails that they do “deserve” not to be, because when we eliminate the entire concept of “deserving” by eliminating free will, we aren’t objecting to one isolated claim that someone in a particular circumstance deserves a particular thing; we’re eliminating all such claims. Likewise, if determinism is true, then billionaires may not “deserve” their wealth; but they also do not “deserve” to have their wealth taken away from them, and the general public does not “deserve” to have the wealth that billionaires have created given to them either. Only if free will does exist—and there are some things that individuals hold more or less responsibility for—in differing degrees in different cases—can we reasonably talk about who “deserves” what at all. 

Finally, Sam Harris makes the rather utopian claim that promoting belief in determinism should allow us to rid the world of hatred entirely. And in response to those who “say that if cutting through the illusion of free will undermines hatred, it must undermine love as well”, he responds:

“Seeing through the illusion of free will does not undercut the reality of love … loving other people is not a matter of fixating on the underlying causes of their behavior. Rather, it is a matter of caring about them as people and enjoying their company. We want those we love to be happy, and we want to feel the way we feel in their presence.”

But hatred, he says, in contrast,

“is powerfully governed by the illusion that those we hate could (and should) behave differently. We don’t hate storms, avalanches, mosquitoes, or flu. We might use the term “hatred” to describe our aversion to the suffering these things cause us—but we are prone to hate other human beings in a very different sense. True hatred requires that we view our enemy as the ultimate author of his thoughts and actions. Love demands only that we care about our friends and find happiness in their company.”

Wait a second.

Couldn’t everything Harris just said to justify his claim about hatred apply to love, too?

In fact, we could reverse everything that Harris just said about both love and hatred, and his statements would seem exactly as “rational” as they did before. Consider how it would sound:

“Hating other people is not a matter of fixating on the underlying causes of their behavior. Rather, it is a matter of not caring about them as people and not enjoying their company. We want those we hate to be unhappy if we can’t avoid their loathsome presence.

“But love? Love is powerfully governed by the feeling that those we love choose to be who they are. We don’t love ice cream, video games, mosquitoes, or getting over a flu. We might use the term “love” to describe our attraction to the pleasure these things cause us—but true personal love goes deeper in a very significant way. True love requires that we view those we love as the ultimate author of their thoughts and actions. Hatred demands only that we feel the fleeting desire to cause someone unhappiness.”

I think it is clear that the half of Harris’ argument that should be granted is that belief in free will is necessary in order to “truly hate.” However, just as Harris’ distinction between true hatred and hyperbolic ‘hatred’ holds, so does a distinction between true love and hyperbolic ‘love.’ And just as Harris’ determinism only allows room for hyperbolic ‘hatred’ but not the “real” kind, so it only allows room for hyperbolic ‘love’—where the sense in which I “love” my wife is no different in kind from the sense in which I “love” owning a new pair of pants or buying a new iPod. And as Dan Jones writes, the same necessarily goes for principles like forgiveness and gratitude:

“Harris believes that true hatred — the kind we direct towards evildoers, as opposed to mere dislike — implies an untenable view of human behaviour, in that it depends on an incoherent concept of free will. The same must go for forgiveness. It would be daft to talk of forgiving a mountain for an avalanche, but for Harris it must be equally daft to talk of true forgiveness among humans — for what is there to forgive in a deterministic system, whether a mountain or human?

The same goes for gratitude. You might be thankful that a mountain provided good slopes for skiing one day, but that’s not the true gratitude you show to your friend for teaching you how to ski in the first place. This true gratitude must too fall beneath Harris’s deterministic sword: what is there to thank in a deterministic system, mountain or human?”

 _______ ~.::[༒]::.~ _______

However, there is an even more fundamental issue left to discuss.

The physicalist’s claim that we should accept the social value of spreading belief in determinism is actually destroyed at an even deeper level by the fact that if physicalism were true, it would be incoherent to say that our beliefs ever impact our behavior at all. The only paradigm that can even accommodate the notion that beliefs, as such, could possibly hold their own independent impact on our behavior is one that gives consciousness itself an independent causal role in behavior.

This is because, on physicalism, there are precisely three possible answers (or pseudo–answers) for explaining the relationship between my consciously held “belief” and whatever physical properties of my brain most closely correlate with changes in my consciously held “beliefs”: identity theory, epiphenomenalism, and eliminativism.

Eliminativism would say that there are, in fact, no such things as “beliefs” at all; there are only physical systems linked up in such a way that when this one part moves this way, it causes that part to move that way, in a sheer physical series of causal events. Recall the statement from Alex Rosenberg whose implications we explored in the entry on intentionality:

Suppose someone asks you, “What is the capital of France?” Into consciousness comes the thought that Paris is the capital of France. Consciousness tells you in no uncertain terms what the content of your thought is, what your thought is about. It’s about the statement that Paris is the capital of France. That’s the thought you are thinking. It just can’t be denied. You can’t be wrong about the content of your thought. You may be wrong about whether Paris is really the capital of France.

The French assembly could have moved the capital to Bordeaux this morning (they did it one morning in June 1940). You might even be wrong about whether you are thinking about Paris, confusing it momentarily with London. What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all. The brain can’t have thoughts about Paris, or about France, or about capitals, or about anything else for that matter. When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.

Don’t misunderstand, no one denies that the brain receives, stores, and transmits information. But it can’t do these things in anything remotely like the way introspection tells us it does—by having thoughts about things. The way the brain deals with information is totally different from the way introspection tells us it does. Seeing why and understanding how the brain does the work that consciousness gets so wrong is the key to answering all the rest of the questions that keep us awake at night worrying over the mind, the self, the soul, the person.

We believe that Paris is the capital of France. So, somewhere in our brain is stored the proposition, the statement, the sentence, idea, notion, thought, or whatever, that Paris is the capital of France. It has to be inscribed, represented, recorded, registered, somehow encoded in neural connections, right? Somewhere in my brain there have to be dozens or hundreds or thousands or millions of neurons wired together to store the thought that Paris is the capital of France. Let’s call this wired-up network of neurons inside my head the “Paris neurons,” since they are about Paris, among other things. They are also about France, about being a capital city, and about the fact that Paris is the capital of France. But for simplicity’s sake let’s just focus on the fact that the thought is about Paris.

Now, here is the question we’ll try to answer: What makes the Paris neurons a set of neurons that is about Paris; what make them refer to Paris, to denote, name, point to, pick out Paris? To make it really clear what question is being asked here, let’s lay it out with mind-numbing explicitness: I am thinking about Paris right now, and I am in Sydney, Australia. So there are some neurons located at latitude 33.87 degrees south and longitude 151.21 degrees east (Sydney’s coordinates), and they are about a city on the other side of the globe, located at latitude 48.50 degrees north and 2.20 degrees east (Paris’s coordinates).

Let’s put it even more plainly: Here in Sydney there is a chunk or a clump of organic matter—a bit of wet stuff, gray porridge, brain cells, neurons wired together inside my skull. And there is another much bigger chunk of stuff 10,533 miles, or 16,951 kilometers, away from the first chunk of matter. This second chunk of stuff includes the Eiffel Tower, the Arc de Triomphe, Notre Dame, the Louvre Museum, and all the streets, parks, buildings, sewers, and metros around them. The first clump of matter, the bit of wet stuff in my brain, the Paris neurons, is about the second chunk of matter, the much greater quantity of diverse kinds of stuff that make up Paris. How can the first clump—the Paris neurons in my brain—be about, denote, refer to, name, represent, or otherwise point to the second clump—the agglomeration of Paris? A more general version of this question is this: How can one clump of stuff anywhere in the universe be “about” some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

But whether Rosenberg can incorporate it into his theory or not, that our thoughts are “about” concepts and ideas is the one thing we can’t deny. If the notion that the world is nothing but “chunks of matter” is a notion that can’t account for the fact that this is so, then it is that notion, and not our belief that we have thoughts “about” things, that must go. (Again, I elaborate on this further in entry V).

The next approach a physicalist might attempt is identity theory. For us to be able to differentiate an identity theory about beliefs from an eliminativist perspective, this perspective would have to grant that our thoughts and mental states are “about” things, but hold that they are nonetheless identical to certain chunks of matter.

The first problem with that style of approach is this: everything Rosenberg just said is true—he has correctly reasoned from his opening premises. If everything is just “chunks of matter”, then it is incoherent that one “chunk of matter” could be “about” some other “chunk of matter” in some other part of the universe. And as we also saw in the entry on intentionality, the project of “building” the intentionality of the conscious human mind out of any sort of proto–intentionality just fails; there’s no way, even in principle, to do it. You can’t cross that bridge by steps any more than you can cross the bridge from drawing on a two–dimensional canvas to creating a three–dimensional figure by a series of steps of lines drawn on that canvas—and you don’t have to spend eternity testing every possible pattern of lines to figure this out; if you pay attention closely enough, you should be able to see that this is impossible in principle. But it can help to draw a few case studies of what some of the attempts have looked like in order to gain a closer intuitive grasp on where the bridge is that can’t be crossed—as, again, we saw in entry V.

The second problem, which is ultimately just the first approached from the opposite side of the same gap, is one we can see with a thought experiment originally presented by Laurence BonJour. As he wrote:

Suppose then that on a particular occasion I am thinking about a certain species of animal, say dogs — not some specific dog, just dogs in general (but I mean domestic dogs, specifically, not dogs in the generic sense that includes wolves and coyotes). The Martian scientist is present and has his usual complete knowledge of my neurophysiological state. Can he tell on that basis alone what I am thinking about? Can he tell that I am thinking about dogs rather than about cats or radishes or typewriters or free will or nothing at all? It is surely far from obvious how he might do this. My suggestion is that he cannot, that no knowledge of the complexities of my neurophysiological state will enable him to pick out that specific content in the logically tight way required, and hence that physicalism is once again clearly shown to be false.

[. . .]

Suppose then, as seems undeniable, that when I am thinking about dogs, my state of mind has a definite internal or intrinsic albeit somewhat indeterminate content, perhaps roughly the idea of a medium-sized hairy animal of a distinctive shape, behaving in characteristic ways. Is there any plausible way in which, contrary to my earlier suggestion, the Martian scientist might come to know this content on the basis of his neurophysiological knowledge of me? As with the earlier instance of the argument, we may set aside issues that are here irrelevant (though they may well have an independent significance of their own) by supposing that the Martian scientist has an independent grasp of a conception of dogs that is essentially the same as mine, so that he is able to formulate to himself, as one possibility among many, that I am thinking about dogs, thus conceived. We may also suppose that he has isolated the particular neurophysiological state that either is or is correlated with my thought about dogs. Is there any way that he can get further than this?

The problem is essentially the same as before. The Martian will know a lot of structural facts about the state in question, together with causal and structural facts about its relations to other such states. But it is clear that the various ingredients of my conception of dogs (such as the ideas of hairiness, of barking, and so on) will not be explicitly present in the neurophysiological account, and extremely implausible to think that they will be definable on the basis of neurophysiological concepts. Thus, it would seem, there is no way that the neurophysiological account can logically compel the conclusion that I am thinking about dogs to the exclusion of other alternatives.

[. . .]

Thus the idea that the Martian scientist would be able to determine the intrinsic or internal contents of my thought on the basis of the structural relations between my neurophysiological states is extremely implausible, and I can think of no other approach to this issue that does any better. The indicated conclusion, once again, is that the physical account leaves out a fundamental aspect of our mental lives, and hence that physicalism is false.

As Bill Vallicella summarizes the argument,

BonJour is thinking about dogs. He needn’t be thinking about any particular dog; he might just be thinking about getting a dog, which of course does not entail that there is some particular dog, Kramer say, that he is thinking about getting. Indeed, one can think about getting a dog that is distinct from every dog presently in existence! How? By thinking about having a dog breeder do his thing. If a woman tells her husband that she wants a baby, more likely than not, she is not telling him that she wants to kidnap or adopt some existing baby, but that she wants the two of them to engage in the sorts of conjugal activities that can be expected to cause a baby to exist.

BonJour’s thinking has intentional content. It exhibits that aboutness or of-ness that recent posts have been hammering away at. The question is whether the Martian scientist can determine what that content is by monitoring BonJour’s neural states during the period of time he is thinking about dogs. The content before BonJour’s mind has various subcontents: hairy critter, mammal, barking animal, man’s best friend . . . . But none of this content will be discernible to the neuroscientist on the basis of complete knowledge of the neural states, their relations to each other and to sensory input and behavioral output. Therefore, there is more to the mind than what can be known by even a completed neuroscience.

So whatever the relationship between ‘beliefs’ as I consciously experience them and the physical state of my brain might be—however close that relationship might be—it is just flatly incoherent to claim that the two things are “identical” (for even more on that, see here). We can see this whether we conceptually analyze what it means for something to be a belief, and then reason backwards to see whether something with those attributes could be built out of something possessing only the kinds of attributes that blind physical forces do (this is how Rosenberg arrives at the, er, belief that beliefs do not exist), or we approach the divide from the opposite direction and imagine ourselves looking into the physical dimensions of the activity of the brain in the attempt to find an ‘idea’.

And that leaves just one final option remaining for the physicalist: epiphenomenalism. But epiphenomenalism about beliefs fails for exactly the same reasons that epiphenomenalism about qualia does: namely, that if it were true, we would necessarily be utterly incapable in principle of forming the concept of epiphenomenalism in the first place. Recall our earlier description of why epiphenomenalism about qualia fails:

One of the easiest ways to explain an epiphenomenalist relationship is by example. If you stand in front of a mirror and jump up and down, your reflection is an epiphenomenon of your actual body. What this means is that your body’s jump is what causes your reflection to appear to jump—your body’s jump is what causes your real body to fall—and your body’s fall is what causes your reflection to appear to fall. It may seem to be the case that your reflection’s apparent jump is what causes your reflection to apparently fall, but this is purely an illusion: your reflection doesn’t cause anything in this story; not even its own future states. If we represent physical states with capital letters, states of experience with lower–case letters, and causality with arrows, then a diagram would look something like this:

… A → B → C …
⇣    ⇣    ⇣
a    b    c
Thomas Huxley, not the first to espouse the view but the first to give it a name, described it by saying that consciousness is like the steam–whistle sound blowing off of a train that contributes nothing to the continued motion of the train itself. We shouldn’t fail to realize how extreme the dehumanization of this view is, even though it acknowledges conscious experiences as real: if this is true, then nobody ever chooses a partner because they are experiencing love; nobody ever fights someone because they are experiencing anger; nobody ever even winces because they are experiencing pain. Rather, a blind inert physical state moves by causal necessity from one state to the next; and it is the meaningless motion of these blind inert forces by causal necessity that explains everything—conscious experiences just happen to incidentally squirt out over the top of these motions as a byproduct, and you are, in effect, a prisoner locked inside the movie in your head with your arms and legs removed and absolutely no influence or control whatsoever over what does or does not happen inside of it. In the words of Charles Bonnet writing in 1755, “the soul is a mere spectator of the movements of its body.”

I would ask you to contemplate the severity of what might result if someone were to actually take this proposal seriously and really honestly begin to look at life and their own conscious existence in this horrific and dehumanized way, but according to the claim of epiphenomenalism, believing that epiphenomenalism is true never has any causal effect on anyone’s physical behavior—nor on any of their future mental states—in the first place either. A series of blind, inert physical events leads to their brain responding physically to the input of symbols and lines (and it is only a mere epiphenomenon of this that they have any experience of “understanding their meaning,” but any “ideas” contained therein—as such—would simply in principle have no ability to play any further causal role in anything whatsoever, either in the individual’s future conscious beliefs or in their future physical behavior); and from here a purely physical sequence of physical causation leads to further physical states (which then happen to give off more epiphenomena in turn). On this view, the fact that pain even “feels painful” is a mere coincidence; for it is not because we feel pain and dislike it that we ever recoil away from a painful stimulus: one physical brain event produces another, and it is only a mere unexplained coincidence that what the first physical brain event happens to give off like so much irrelevant steam is a feeling that just so happens to be painful in particular.

It literally could just as well have been the case that slicing into our skin with a knife would produce the sensation that we currently know as “the taste of strawberries,” and the physical world (according to epiphenomenalism) would proceed in just exactly the same way as it does now. This would be true because: (1) epiphenomenalism admits that conscious experiences are something over and above physical events, and we do not know why particular conscious experiences are linked with particular physical events (since the former are not logically predictable from the latter, and epiphenomenalism by definition acknowledges that claims that consciousness “emerges” from physical events fail), and (2) none of them play any causal role in anything anyway. Our conscious lives could have consisted of one long feeling of orgasm, or one long miserable experience of pain, or one long sounding “C” note combined with the taste of blueberries and a feeling of slight melancholy, and again, everything in the physical universe would have proceeded in exactly the same way it does now. And it is only thanks to whatever extra rule specifies which particular conscious experiences superfluously ‘squirt out’ and dissipate into the cosmic aether like steam that our world happens to be the way it is rather than otherwise.

Unfortunately, while most people—including philosophers—are content to stop here and reject the view for sheer counter–intuitiveness alone, philosophy of mind has been somewhat lazy at producing actual logical objections to it. Actual refutations of epiphenomenalism often aren’t very well known, but there is one that is absolute and undeniable, and refutes completely, once and for all, even the possibility that anything like epiphenomenalism could be true. That is: if epiphenomenalism were true, no one would ever be able to write about it. In fact: no one would ever be able to write—nor think—about consciousness in general. No one would ever once in the history of the universe have had a single thought about a single one of the questions posed by philosophy of mind. Not a single philosophical position on the nature of consciousness, epiphenomenalist or otherwise, would ever have been defined, believed, or defended by anyone. No one would even be able to think about the fact that conscious experiences exist.

And the reason for that, in retrospect, is quite plain to see: on epiphenomenalism, our thoughts are produced by our physical brains. But our physical brains, in and of themselves, are just machines—our conscious experiences exist, in effect, within another realm, where they are blocked off from having any causal influence on anything whatsoever (even including the other mental states existing within their realm, because it is some physical state which determines every single one of those). But this means that our conscious experiences can never make any sort of causal contact with the brains which produce all our conscious thoughts in the first place. And thus, our brains would have absolutely no capacity to formulate any conception whatsoever of the existence of those experiences—and since all conscious thoughts are created by brains, we would never experience any conscious thoughts about consciousness. For another diagram, if we represent causality with arrows, causal closure with parentheses, physical events with the letter P and experiences with the letter e, the world would look something like this:

… e1 ⇠ (((P⇆P))) ⇢ e2 …

Everything that happens within the physical world—illustrated by (((P⇆P)))—would be wholly and fully kept and contained within the physical world, where conscious experiences as such do not reside; the physical world is Thomas Huxley’s train which moves whether the whistle on top blows steam or not. And e1 and e2 float off of the physical world—for whatever reason—and then merely dissipate into nothingness like steam, with no capacity in principle for making any causal inroads back into the physical dimension of reality whatsoever. This follows straightforwardly as an inescapable conclusion of the very premises which epiphenomenalism defines itself by. But since the very brains which produce all our experienced thoughts are contained within (((P⇆P))), in order to have any experienced thought about conscious experience itself, these (per epiphenomenalism) would have to be the epiphenomenal byproducts of a brain state that is somehow reflective or indicative of conscious experience. But brain states, again because per epiphenomenalism they belong to the self–contained world inside (((P⇆P))) where no experiences as such exist, are absolutely incapable in principle of doing this.

To refer back to our original analogy whereby epiphenomenalism was described by the illustration of a person jumping up and down in front of a mirror, then: it would be as if the mirror our brains were jumping up and down in front of were shielded inside of a black hole in a hidden dimension we couldn’t see. Our real bodies [by analogy, our physical brains] would never be able to see anything happening inside that mirror. And therefore, they would never be able to think about it or talk about it. And therefore, we would never see our reflections [by analogy, our consciously experienced minds] thinking or talking about the existence of reflections, because our reflections could only do that if our real bodies were doing that, and there would be absolutely no way in principle that our real bodies ever could.

The fact that we do this, then—the fact that we do think about consciousness as such, and the fact that we write volumes and volumes and volumes and volumes philosophizing about it, and the very fact that we produce theories (including epiphenomenalism itself) about its relation to the physical world in the first place—proves absolutely that whatever the mechanism may be, conscious experiences somehow most absolutely do in fact have causal influence over the world. What we have here is a rare example of a refutation that proceeds solely from the premises of the position itself, and demonstrates an internal inconsistency.

But Jaegwon Kim has already identified all the possible options for us! Either experiences and physical events are just literally identical (which even Kim himself rejects, for good reasons we have outlined here), or else epiphenomenalism is true (which Jaegwon Kim accepts, but which the simple argument outlined just now renders completely inadmissible)—or else the postulate of the causal closure of the physical domain is false—and conscious experience is both irreducible to and incapable of being explained in terms of blind physical mechanisms, and possesses unique causal efficacy over reality all in its own right.

What goes for the failure of epiphenomenalism about qualia goes just the same for epiphenomenalism about beliefs. It’s not just that epiphenomenalism would necessarily have to remove any causal role for the belief as such from the picture; it’s that on any assumption of any world that worked that way, it would be impossible in principle for any of its inhabitants to ever form the very belief that their consciously held beliefs are outside of the causal nexus of the physical world—because all of the causally potent material brain events that squirt out these causally impotent consciously experienced “beliefs” would be happening inside of the causal nexus that consciously held beliefs, per se, can never in principle causally interact with, because they are locked in principle outside of that nexus. Thus, we could never have any consciously experienced beliefs about our consciously experienced beliefs (or about their relationship to the rest of reality) at all. But the very concept of epiphenomenalism is exactly just such a belief—which proves that our beliefs do have causal impacts on reality.

But since the physicalist approach of denying that beliefs exist utterly fails, and since the physicalist approach of calling them “identical to” the blind causal dispositions of some assembly of neurons also fails, there is no option left which is (1) internally consistent, (2) able to account for all of the facts that any valid theory must account for, and (3) “physicalist” in any meaningful sense. The only way the physicalist can give causal efficacy to our consciously experienced beliefs is to say that they literally just are a certain set of brain events. But, as physicalists themselves (like Rosenberg) acknowledge, this would mean we have to eliminate from the picture everything that makes our thoughts and experiences what they actually are. And that is why some physicalists end up desperate enough to turn to a theory as blind and idiotic as eliminativism: eliminativism is, in fact, the end conclusion of the physicalist premises.

But it is also blatantly absurd. And not absurd like “Hey, did you know the ground beneath you is actually spinning through space really fast even though it feels solid and motionless and stable?”

Absurd like “Hey, did you know that colorless green ideas sleep furiously? This is not a sentence. You are not reading this. In fact, nobody ever reads anything at all.”

Hence, the very fact that our beliefs about free will and determinism—no matter what they are—have the capacity to impact our behavior actually turns out to be an inescapable refutation of the very physicalism which underlies the claim that determinism is the only option because free will isn’t possible within a physicalist universe (as, indeed, it wouldn’t be, if physicalism were true). And that leaves us with all the weight of direct subjective experience itself in favor of human possession of free will on the one side, and nothing on the other.

 _______ ~.::[༒]::.~ _______

My concluding comments will require a little more allowance of liberty from the reader than usual, as I will turn now from making logical arguments to explaining something about my own personal view—and so the standard to which my reasoning should be held from here is no longer “can I prove it?” but “does this internally hold together?”

I have argued elsewhere on this blog for the relevance of biological factors in predicting human behavior (for example, near the end of this essay on the relationship between poverty, race, out–of–wedlock birth, and crime). Doesn’t that leave me with some explaining to do? How can there be both free will and proof of genetic influence?

Actually, my view is the only one that can account for the meaningfulness of an idea like the insanity defense. Why is it that “insanity” should reduce a person’s punishment for a crime? What possible rationale is there for that?

In his own attempt to defend this principle, Sam Harris writes:

What does it really mean to take responsibility for an action? For instance, yesterday I went to the market; as it turns out, I was fully clothed, did not steal anything, and did not buy anchovies. To say that I was responsible for my behavior is simply to say that what I did was sufficiently in keeping with my thoughts, intentions, beliefs, and desires to be considered an extension of them. If, on the other hand, I had found myself standing in the market naked, intent upon stealing as many tins of anchovies as I could carry, this behavior would be totally out of character; I would feel that I was not in my right mind, or that I was otherwise not responsible for my actions.

I think most people would say that Harris is just plain wrong about whether the mere fact that behavior is “out of character” means that we do, or even should, judge that a person is therefore “not responsible for (their) actions.” The first time anyone commits a violent act of rape or murder, for example, their behavior is by definition “out of character”. Yet, this fact alone most certainly does not cause us to morally excuse all first–time offenders—nor should it.

The implicit idea behind the insanity defense is that there are some conditions in which a person has less control over their impulses than others, and is therefore less morally culpable for their actions. But if determinism were true, then the insanity defense would make no sense, because none of us would ever have any “control” over any of our impulses. Thus, all of us would qualify in the relevant sense as “insane”, all of the time—and the concept would never add any particular new meaning to any particular case; nothing would ever make this extra true in some peculiar circumstance, because it would already be as true as it can ever be, for everyone, all of the time. Hence, only if free will does exist can we contemplate situations in which it could be overridden, or reduced by varying degrees. “My brain made me do it” cannot be an exculpatory claim for the determinist—but it can for the believer in free will (if and when other facts support it).

In any case, my own view of free will in the relationship between the mind and the brain—simplified—goes something like this:

• (A) The conscious mind has the metaphysical capacity to choose between, and to inhibit, brain–based impulses (but exercising this capacity requires expenditure of a certain kind of probably limited “energy”).

• (B) Most of the time, the conscious mind is “in the driver’s seat”—but there are probably some unique circumstances in which it actually can get thrown out of that seat, thus rendering the driver proportionally less morally responsible for where the car ends up going in such unique cases.

• (C) Our biology essentially determines the impulses which we experience, and then possess the capacity to choose between, in the first place.

• (D) Empirical science has revealed that genetics plays a substantial role, far larger than most environmental inputs, in hardwiring the biology which in turn determines those impulses.

• (E) As a contingent fact, it is true that people usually decide to act on their impulses. But those impulses do not absolutely determine their ensuing actions most of the time.

The picture we get is one where the conscious mind is highly analogous to the “driver” of a vehicle, yes—but the vehicle is more like a boat than a car, and the fact that someone is holding the wheel doesn’t mean he possesses the power to drive the boat absolutely anywhere, at any time, without external constraints. On the contrary, whether the driver or the waves of the ocean are more influential in determining where the boat will go at any given point in time depends on various weather conditions and other circumstances which, themselves, are outside of the driver’s absolute control.

But barring more severe kinds of circumstances, someone who drives the boat well could thereby navigate to a part of the ocean where the waves will exert relatively less influence, and his driving skills therefore relatively more influence, over where he goes next.

And it has been increasingly validated by empirical science that belief in free will can help us to drive better—to the point that implicitly prompting someone to disbelieve in free will is even known to slow their reaction times. On the assumption that determinism is true, how is the determinist supposed to explain this? The proponent of free will can explain it easily: reminding someone that they have free will can prompt them to use it more, in just the same way that someone who has given up trying to drive a boat they can’t seem to maintain control of can benefit from a motivational speech reminding them that they can still get out of the storm they’re in if they grab back onto the wheel and keep focusing their attention—because there is in fact a “driver” there who either may exercise that capacity, or may not.

And this is true even if, at other times, the implication that their driving was solely responsible for getting them into the storm in the first place can be further frustrating to them, when that implication happens to be false. But the problem in those cases is that it wasn’t the case—not that it couldn’t have been, or never is at all. Indeed, a neuroscientist who happens to be a dualist has had more success treating OCD than anyone so far operating under a materialist paradigm, through methods that ask patients to practice focusing their subjective mental attention as a means of ultimately rewiring the impulses which they experience—and while the materialist will of course simply hand–wave this away because changes in subjective conscious attention are to them just “chunks of matter” being rearranged anyway, it remains the case that were that so, it would be impossible in principle for consciously experienced events as such to have any sort of independent causal potency over physical brain events at all.

In sum: The scientific studies from Benjamin Libet and those who followed in his footsteps do nothing to refute the possibility of metaphysical free will. If the determinist wants to argue that determinism has any sort of social or psychological benefit, he’s going to have to deal with the problem that no version of physicalism seems to be able to account for the possibility that beliefs, as such, could have independent causal efficacy of their own over the physical states of our brains in the first place (without running into other, absolutely insurmountable problems that have been detailed elsewhere throughout this series). But it turns out that research is coming to establish that belief in free will has far more benefits than belief in determinism, anyway—and the idea that we should tell people that free will is impossible, or false, while telling them that they should believe in it anyway is an obvious dead end. It may “only” be the evidence of direct subjective experience that stands in favor of the existence of free will—but nothing solid stands against it.

 _______ ~.::[༒]::.~ _______

[1] In the Harris excerpt I read, a mention of the Soon studies followed the break after this paragraph. He may have been referring to the studies of Haggard and Eimer in this part which preceded the break, but in any case, Soon’s is one of the most recent modern “replications” of this kind of finding.

Calling for a Nazi / Social Justice Warrior Alliance

Imagine a world where the following paragraph was true:

White people are just 2% of the population of South Africa.

And yet, a whopping 31% of South African media companies are owned by white people; 38% are founded by white people; 45% of their presidents are white people; and 47% of their chairmen are white people. 26% of all the reporters, editors, and executives of the major print and broadcast media are white people. 75% of the senior administrators of the best South African colleges are white people, and from 11 to 27 percent of students admitted to those colleges are white. 139 of the top 400 richest people in South Africa are white. Of the top 100 political campaign funders, at least 42 of them are white. 15 out of 30 executives at the major think tanks that determine policy are white. To top it all, 8 of 11 senior advisers to President Zuma are white.

The corollary of these statements is that Blacks are around 98% of the population, and yet make up only 69% of media company owners, 62% of their founders, and 55% of their presidents … Only 25% of senior administrators at the best colleges are black; and only 3 of 11 Presidential advisers are black.

What would leftists’ response to this situation be?

The answer to that question is beyond doubt: they’d be outraged.

And it wouldn’t matter in the slightest that whites were a minority of the South African population—that would just make their domination of the country’s most important offices worse.

In the United States, we have a group calling itself the ‘Reflective Democracy Campaign’ which finds that white men are 31% of the population—but 66% of those who run for political office, and 65% of those elected. Once these figures are produced, no further investigation is required before leftists start asking why it is that “in the year 2015, there are roughly double the number of white men in elected office as there ought to be[?]” Another campaign strives to draw awareness to the fact that white men make up 79% of elected prosecutors.

Or to give another example, when Spike Lee thought black winners at the Oscars were underrepresented compared to white winners, he called for a boycott. It turns out he was wrong: a USC study found that blacks, who are about 13% of the U.S. population, comprise 12.5% of actors in the top 100 films from 2007; 23 of 192 Oscar nominations (12%), and 9 out of 68 Academy Awards since 2000 (13.2%)—close to perfect statistical representation. But the mere idea that whites might be overrepresented in the Oscars compared to blacks was all it took to set off a loud and persistent conversation, with many people instantly prepared to believe that whites are overrepresented and that this is a problem in need of urgent address.

So in the case of the Oscars, the over–representation of whites compared to blacks was exactly zero. And in the case of the Reflective Democracy Campaign’s argument, whites are overrepresented amongst political candidates at just 1.4 times their population rate (whites are 63% of the population, and a combined 89% of Republican and Democratic candidates), and amongst elected prosecutors at 1.25 times their population rate.
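All of the “times their population rate” figures in this post are just simple ratios of a group’s share of some field to its share of the population. As a quick sketch of that arithmetic using the numbers quoted above (the helper function name is my own, not anything from the campaigns cited):

```python
def representation_ratio(group_share, population_share):
    """Ratio of a group's share of some field to its share of the
    overall population. 1.0 means exact proportional representation;
    above 1.0 is overrepresentation, below it underrepresentation."""
    return group_share / population_share

# Figures quoted above, expressed as percentages:
# whites: 63% of the population, 89% of major-party candidates
print(round(representation_ratio(89, 63), 2))    # ≈ 1.41
# whites: 79% of elected prosecutors
print(round(representation_ratio(79, 63), 2))    # ≈ 1.25
# blacks: 13% of the population, 12.5% of top-film actors
print(round(representation_ratio(12.5, 13), 2))  # ≈ 0.96
```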

So we can absolutely rest assured that if our opening paragraphs were true, liberals would be outraged to find whites overrepresented at 5–36 times their rate in the population rather than a mere 1.25 or 1.4.

So what makes liberals different from white supremacists—besides their target?

Everything stated in the opening paragraph of this post is, in fact, true about Jews.

Jews are just 2% of the United States population. And yet, they make up 18 out of 24 senior administrators of Ivy League colleges (75%), 8 out of 11 senior advisors to President Obama (72%), 8 out of 20 Senate Committee chairmen (40%), 33 out of 51 senior executives of the major Wall Street banks, trade exchanges, and regulatory agencies (64%), 23 out of 40 senior executives of the major Wall Street mutual funds, private equity funds, hedge funds, and brokerages (57%), 41 out of 65 senior executives of the major newspapers and news magazines (63%), 43 out of 67 senior executives of the major television and radio news networks (64%), 15 out of 30 senior executives of the major think tanks (50%).[1]

New students admitted to Harvard University? 25% Jewish. Yale? 27% Jewish. Cornell? 23% Jewish.

And when Jewish organizations reflect on Jewish representation in Ivy League colleges, they do so not to worry about whether Jews are pushing non–Jews out through their own overrepresentation, but to analyze the puzzle that “Thirteen percent of Princeton’s undergraduate student body is Jewish, the lowest percentage of any Ivy League university besides Dartmouth, which comes in at 11 percent.” Yet, both of these are still more than 4 and 5 times the Jewish percentage of the population.

The media? If we’re looking at the CEOs of media companies, then they’re 31% of the total. If we’re looking at founders, then they’re 38%. If we’re looking at presidents, then they’re 45%. If we’re looking at chairmen, then they’re 47%. If we’re talking about the directors and writers, then Jews represent “26 percent of the reporters, editors, and executives of the major print and broadcast media, 59 percent of the directors, writers, and producers of the 50 top-grossing motion pictures from 1965 to 1982, and 58 percent of directors, writers, and producers in two or more primetime television series”.

These numbers range from over 12 to over 22 times the Jewish percentage of the population.

Banking? Of the five Federal Reserve board governors (Daniel K. Tarullo, Jerome H. Powell, Lael Brainard, Stanley Fischer, Janet L. Yellen), the last three are Jewish. Of the nine executive officers of Goldman Sachs (Edith W. Cooper, Gregory K. Palm, John F. W. Rogers, Alan M. Cohen, Harvey M. Schwartz, Mark Schwartz, Gary D. Cohn, Lloyd C. Blankfein, Michael S. Sherwood), the last six are Jewish. Of the ten operating committee members of JP Morgan Chase (John L. Donnelly, Gordon A. Smith, Jamie Dimon, Mary Callahan Erdoes, Matthew E. Zames, Daniel E. Pinto, Douglas B. Petno, Marianne Lake, Stacey Friedman, Ashley Bacon), the last six are Jewish. Combining just these three major banks, 62% are Jewish—almost 30 times the Jewish population rate.

“ … the Jews run everything? Well, we do. The Jews run all the banks? Well, we do. The Jews run the media? Well, we do … It’s a fact; this is not in debate. It’s a statistical fact … Jews run most of the banks; Jews completely dominate the media; Jews are vastly disproportionately represented in all of these professions. That’s just a fact. It’s not anti-Semitic to point out statistics … It’s not anti-Semitic to point out that these things are true.” — Milo Yiannopoulos, The Rubin Report, March 2016

So how can leftists, who immediately take any statistical over–representation of whites in anything at all as a major social problem that needs to be changed—even at just 1.1 or 1.4 times the white population rate—condemn white supremacists for being worried about statistical over–representations several times larger than that? Indeed, how are the racialist left and white supremacists anything but two different sides of the same coin?

Amusingly enough, a large percentage of my audience will probably suspect me immediately of having gone full Nazi just because I went through the effort to pinpoint exactly how overrepresented Jews are at all. Now, that suspicion may be fair—but if so, why is it that going through the effort to pinpoint how overrepresented whites are in various fields or professions is not seen as bigotry in just exactly the same way?

As a matter of fact, the ‘Reflective Democracy Campaign’ itself has apparently failed to notice that it is not “whites” who are overrepresented within the legal profession—it’s Jews, who in fact make up 26% of the nation’s law professors, and 30% of Supreme Court law clerks. In Jews and the New American Scene, Seymour Lipset and Earl Raab point out that Jews make up “40 percent of partners in the leading law firms in New York and Washington.” So Jews are overrepresented in the legal profession at 13 or more times their population rate.

And if you subtract the 26% of lawyers who are Jewish from the 79% of prosecutors the RDC calls “white”, that leaves only 53% of prosecutors who are non–Jewish whites, compared to about 61% of the U.S. population that is non–Jewish white. So it turns out that ‘whites’ are not overrepresented at all—they’re under–represented at about 0.86 times the population rate. But what would happen to the RDC’s left–wing credentials if it were to openly admit this and call explicitly for a reduction of the Jewish percentage of elected prosecutors?

Indeed, what would happen to their public image in general once this was known?

Suddenly, they’d go from being a respectable campaign calling attention to a real social issue to being classed with Nazis and white supremacists—the lowest of the low—just because the demographic their numbers targeted happened to turn out to be Jews instead of whites. But why is it that this kind of campaign is valid just so long as it targets whites, and racist bigotry the moment it hits any other demographic?

Why are Jews statistically overrepresented? There are essentially two possibilities:

  1. Jews could be acquiring positions of power and then using them to grant favors to other Jews—say, Jews could take over the senior administrative positions in Ivy League colleges (where they indeed compose about 75% of the total), and then they could favor admitting Jews as new students over others.
  2. Perhaps Jews are simply more intelligent, or industrious, or intellectual, or otherwise have temperaments more conducive to these arenas—and so they acquire their status in these positions through legitimate success.

The first of these options is the white supremacist answer: Jews aren’t any more intelligent than the rest of us; they’re just more nepotistic, networking with other Jews to take over the world. In order to avoid sounding like bigots, then, we’re supposed to give the second answer: Jews are simply more intelligent or more industrious or more intellectual, or simply have temperaments more conducive to these arenas.

But if we’re talking about whites instead of Jews, then suddenly the first option is exactly what social justice warriors demand that we say: ‘whites aren’t any more intelligent than anyone else; they’re just more nepotistic’! Meanwhile, the second option is suddenly the one that is now inexcusably, irredeemably racist: if you claim that whites are simply more intelligent or more industrious or more intellectual, you’re a bigot.

What the ‘politically correct’ view requires us to say about Jews is exactly what it calls bigotry if we say it about whites. And what it requires us to say about whites is exactly what it calls bigotry if we say it about Jews. The disproportionate success of whites is purely the result of unjust ‘privilege’, and you’re a bigot if you think it has anything to do with greater merit. But the disproportionate success of Jews is the result of greater merit, and you’re a bigot if you try to diminish that by attributing it to ‘privilege’, much less want it to change!

The egregiousness of the naked double standard here is overwhelming. As far as resolving it, it would seem we have exactly two possible options: either we grant the argument in both cases, and encourage the social justice warriors and white supremacists to join forces against their new common foe—or else we deny it in both cases.

So which is it?

The ‘Poverty’ of Sociology

It’s obvious that there is, in general, a geographical correlation between poverty and crime. What I mean by that is that if we look at a map of the United States (or the world—but this post will focus on the United States) at any given point in time, in places where we see lots of poverty, we will also see lots of crime. 

This much is beyond serious question.

What is under–appreciated, however, is just how complicated it is to actually explain why. The obviousness of the geographical correlation between poverty and crime has led many to assume that it must be just as obvious that poverty “causes” crime. On the other hand, many social conservatives have argued that poverty and crime correlate with each other only because divorce and out–of–wedlock birth produce them both: according to this argument, single–parent families produce poverty because they earn less income than two–parent families do; and they produce crime because boys raised by single mothers have no models of masculinity to learn from and emulate, and therefore become more likely to attempt to express their masculinity through violence and affiliation with gangs.

Disentangling cause and effect in these relationships is more difficult than the proponents of either theory often assume—for even correlations that seem obvious at first glance can turn out to have causes that no one even considered. As we will see, both the “poverty causes crime” advocates and the “single parenthood causes both poverty and crime” advocates are, for the most part, wrong (though each is also about 1% correct).

A cautionary tale

By the mid–1990’s, hormone replacement therapy had become one of the most widely prescribed medications for women in North America. Books were published touting the benefits of synthetic hormone injections, with titles like “Feminine Forever!” Several large studies (Stampfer 1991) found that even after controlling for other risk factors like age, “estrogen use is associated with a reduction in the incidence of coronary heart disease as well as in mortality from cardiovascular disease”. Another meta–analysis (Grady 1992) found a 35% reduction in heart disease amongst those using synthetic estrogen and concluded that “hormone therapy should probably be recommended for women … with coronary heart disease or at high risk for coronary heart disease.”

Yet, by the late 1990’s and early 2000’s, this consensus had fallen apart completely. Not only did hormone replacement therapy turn out not to be beneficial for women with or at risk of heart disease (Rossouw et al. 2002); in many cases it actually turned out to increase the risk of heart disease (Hulley et al. 1998).

What happened?

Was the earlier research falsified? No.

The correlation between use of estrogen and lower heart disease risk found by earlier research did, in fact, exist.

It just wasn’t there because the use of estrogen causes a reduction in heart disease risk. It simply turned out that, on average, the women who were trying hormone replacement therapy were women of higher socioeconomic status, who also tended to keep healthier diet, lifestyle, and exercise habits. Thus, the use of estrogen was increasing the risk of heart disease all along, even though it was true that women trying hormone replacement therapy did have lower heart disease rates on average than those who weren’t.

What we have here is an excellent example of a “hidden variable” explanation for a correlation. The original assumption behind the correlation between hormone replacement therapy (HRT) and lowered heart disease risk (–CHD risk) was that HRT caused –CHD risk. And this false assumption likely contributed to some uncertain number of unnecessary deaths. The real answer turned out to be that some other, previously unidentified factor (socioeconomic status increasing the likelihood of both continued use of HRT and better lifestyle habits) was causing both.
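The confounding structure at work here can be made concrete with a toy simulation. All effect sizes below are illustrative inventions, not values from the studies cited: a hidden variable raises both the probability of treatment and the healthiness of lifestyle, so the treatment looks protective in a naive comparison even though its assumed direct effect is harmful.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden variable: socioeconomic status (standardized; higher = better off).
ses = rng.normal(0.0, 1.0, n)

# Higher SES makes both HRT use and healthy habits more likely.
hrt = (ses + rng.normal(0.0, 1.0, n)) > 0.5
healthy = ses + rng.normal(0.0, 1.0, n)

# Assumed direct effects: HRT slightly *raises* risk; healthy habits lower it.
chd_risk = 0.1 * hrt - 0.5 * healthy + rng.normal(0.0, 1.0, n)

# Naive comparison: HRT users nonetheless show *lower* average risk,
# because they are disproportionately high-SES.
print(chd_risk[hrt].mean() < chd_risk[~hrt].mean())  # True
```

Randomizing who receives the treatment breaks the link between SES and HRT use, which is exactly why the randomized trials of the early 2000s overturned the observational consensus.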


To use a more commonplace example of a faulty inference of causation from correlation, it is obviously true that we find fire burning only in the presence of oxygen. Wherever we see fire, then, we are bound to find oxygen. But this doesn’t make oxygen “the cause” of fire burning—indeed, since there are so many places where we can find oxygen but no fire, it is obvious that something else must be “the cause”.

Similarly, we are bound to find more human trafficking in places where there are more women who are vulnerable to being captured and exploited. But if anyone were to suggest that contact with a vulnerable woman is the literal “cause” of a man’s decision to kidnap and traffic her into sex slavery, the very same liberals who ask us to excuse crime while addressing its “root causes” would condemn this as “victim blaming” of the most horrendous and disgusting form. Yet, this seems like an arbitrary attempt to have one’s cake and eat it too—for the sake of consistency, we must either accept that human beings have (at least some degree of) free will, or else we must deny that and grant that all human behavior is the deterministic result of external circumstances across the board.

When analyzing these kinds of questions, we shouldn’t lose sight of just how banal much violent crime is.

Most aggressive crime just doesn’t look anything like a struggling family pocketing a loaf of bread after spending the rest of the grocery budget to feed their children. It looks like 39–year–old Ronald McNeil murdering a 19–year–old female college freshman because of a fight with another party attendee over the rules of beer pong. It looks like public gang rapes outside of rap concerts. It looks like setting fire, at random, to a 19–year–old girl on her way to get dinner. But let’s not spend too much time on anecdotes before we move on to data.

Crime and poverty: does one cause the other?

First of all, it is absolutely, undeniably true that crime does help to cause poverty.

“A high crime rate will drive businesses out of a neighborhood. This eliminates both availability of products and services and a source of jobs. Further, those who do stay find it necessary to charge higher prices to offset losses due to thievery and higher costs of both security measures and insurance premiums—if insurance is available at all.

Property values are driven down by a smaller demand because of the greater difficulty potential purchasers have in obtaining mortgage loans.

The loss of productive activity by those who live by preying on others reduces the output of the area in which they live. Thus, crime injures economically both direct victims and others in the crime-ridden neighborhood.”

A more recent study calculated only the direct losses of victims; the money spent on police, prisons, and lawyers; and the opportunity costs for the perpetrator himself. It found that the average cost of each act of robbery is around $42,000; of each act of assault, more than $100,000; and of each act of murder, almost $9,000,000.

These estimates come without looking at the damage done to a community’s economy through crime’s impacts on third parties other than the perpetrator and his victim (the flight of businesses and thus opportunities away from high–crime areas, the raised price of insurance, the loss of property values, and so forth), and so they undoubtedly underestimate the true amount of damage caused by crime.

What about poverty causing crime? It is true that poverty and crime correlate geographically: in locations where we find more poverty, we are also going to find more crime. But it turns out that poverty and crime do not correlate very well historically: when poverty rises, we do not see concurrent rises in crime.

Both before and after the Great Depression, the relationship between poverty and crime actually appears to have inverted: “Most evidence suggests that the crime rate rose after World War I and the 1920s and that crime rates dropped as the nation sank into the Depression and continued to decline into the 1940s.” Eli Lehrer adds extra detail: “Crime rates fell about one third between 1934 and 1938 while the nation was struggling to emerge from the Great Depression and weathering another severe economic downturn in 1937 and 1938. Surely, if the economic theory held, crime should have been soaring.”

And as he continues, he explains that this same inverted relationship was also found during several other recessions over the past century, as well: “Crime rates rose every year between 1955 and 1972, even as the U.S. economy surged, with only a brief, mild recession in the early 1960s. By the time criminals took a breather in the early 1970s, crime rates had increased over 140 percent. Murder rates had risen about 70 percent, rapes more than doubled, and auto theft nearly tripled. … Crime rates fell in nearly all categories between 1982 and 1984, even though … wages fell for low-income workers during the same period. Likewise … wages rose for low-income workers between 1988 and 1990, despite being a period of higher crime rates. In fact, some of the worst years for crime increases were in the late 1950s, as hourly wages surged ahead. Between 1957 and 1958, for example, per–capita income increased about 8 percent while crime rose nearly 15 percent.”

Patrick F. Fagan adds: “What is true of the general population is also true of black Americans. For example, between 1950 and 1974 black income in Philadelphia almost doubled, and homicides more than doubled.” Similarly, poverty rates between different ethnic groups fail to explain their different crime rates today: in the 2006 American Community Survey, 21.5% of Hispanics lived in poor households and 37.2% of Hispanic men age 18–24 had not completed high school in 2005—compared with 25.3% of blacks and 26.3% of black men. In other words, 3.8 percentage points fewer Hispanics lived in poor households, but 10.9 percentage points more Hispanic men failed to graduate high school. If poverty were causing violent crime, then we would expect the violent crime rate to be similar amongst blacks and Hispanics. But that isn’t what we find; instead, the Hispanic crime rate is only slightly higher than the white crime rate, both of which are far lower than the black crime rate—even after controlling for age groups to account for the different proportions of young adult males (who commit the vast majority of crime) in each ethnic group.


(For much more detailed charts and graphs on this topic, Random Critical Analysis has done plenty of heavy work in the Nov. 2015 post, “Racial differences in homicide rates are poorly explained by economics.”)


And what holds historically about the association between poverty and crime continues to hold into the present day, with the discovery that the “Great Recession” of 2007–2009 came with a reduction in crime, too.

Writing in the Wall Street Journal, James Q. Wilson explained: “As the national unemployment rate doubled from around 5% to nearly 10%, the property-crime rate, far from spiking, fell significantly. For 2009, the Federal Bureau of Investigation reported an 8% drop in the nationwide robbery rate and a 17% reduction in the auto-theft rate from the previous year. Big-city reports show the same thing. Between 2008 and 2010, New York City experienced a 4% decline in the robbery rate and a 10% fall in the burglary rate. Boston, Chicago and Los Angeles witnessed similar declines. … In 2008, … even as crime was falling, only about half of men aged 16 to 24 (who are disproportionately likely to commit crimes) were in the labor force, down from over two-thirds in 1988, and a comparable decline took place among African-American men (who are also disproportionately likely to commit crimes).”

Heather MacDonald supplies additional data: “[B]y the end of 2009, the purported association between economic hardship and crime was in shambles. According to the FBI’s Uniform Crime Reports, homicide dropped 10% nationwide in the first six months of 2009; violent crime dropped 4.4% and property crime dropped 6.1%. Car thefts are down nearly 19%. The crime plunge is sharpest in many areas that have been hit the hardest by the housing collapse. Unemployment in California is 12.3%, but homicides in Los Angeles County, the Los Angeles Times reported recently, dropped 25% over the course of 2009. Car thefts there are down nearly 20%.”

Okay, so what if all of these hard statistical measures of the economy are too crude to capture what really matters for someone’s likelihood to commit a crime—how they perceive the economy as doing, regardless of the facts? Well, that brings us back to James Q. Wilson: “the University of Michigan’s Consumer Sentiment Index offers another way to assess the link between the economy and crime. This measure rests on thousands of interviews asking people how their financial situations have changed over the last year, how they think the economy will do during the next year, and about their plans for buying durable goods. The index measures the way people feel, rather than the objective conditions they face. It has proved to be a very good predictor of stock-market behavior and, for a while, of the crime rate, which tended to climb when people lost confidence. When the index collapsed in 2009 and 2010, the stock market predictably went down with it—but this time, the crime rate went down, too.”

Steven D. Levitt’s Understanding Why Crime Fell in the 1990s: Four Factors that Explain the Decline and Six that Do Not summarizes the research: “Empirical estimates of the impact of macroeconomic variables on crime have been generally consistent across studies: Freeman (1995) surveys earlier research, and more recent studies include Machin and Meghir (2000), Gould, Weinberg and Mustard (1997), Donohue and Levitt (2001) and Raphael and Winter-Ebmer (2001). Controlling for other factors, almost all of these studies report a statistically significant but substantively small relationship between unemployment rates and property crime. A typical estimate would be that a one percentage point increase in the unemployment rate is associated with a one percent increase in property crime.”

He concludes: “Based on these estimates, the observed 2 percentage point decline in the U.S unemployment rate between 1991 and 2001 can explain an estimated 2 percent decline in property crime (out of an observed drop of almost 30 percent)….” But yet again, even here, the direction of causation isn’t clear. Levitt misspeaks when he says this evidence warrants the conclusion that the decline in the unemployment rate “can explain” the decline in property crime: the word “explain” implies causation, and what this data shows us still isn’t causation.
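Levitt’s back-of-the-envelope share can be checked directly from the figures quoted above:

```python
# Quoted estimate: ~1% change in property crime per percentage-point
# change in unemployment.
elasticity = 1.0             # % property-crime change per point of unemployment
unemployment_drop = 2.0      # percentage points, 1991-2001
observed_crime_drop = 30.0   # %, observed property-crime decline, 1991-2001

explained = elasticity * unemployment_drop     # 2.0% of a ~30% decline
share = explained / observed_crime_drop
print(f"{share:.0%} of the observed decline")  # 7% of the observed decline
```

So even taking the correlation as fully causal, unemployment accounts for well under a tenth of the 1990s property-crime drop.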

How do we know it’s the unemployment rate that “explains” the decline in property crime? How do we know it isn’t the decline in property crime that explains the decline in the unemployment rate? If someone decides not to commit a home robbery, he obviously has a much better chance of finding a job in the near future than if he does. And in all likelihood, a business in a town with fewer property crimes is making more sales and therefore able to employ more people; more people are considering starting businesses; and more established businesses are considering moving in. At the very least, this effect must contribute to the correlation; and that means that a 1% decline in the unemployment rate must cause somewhat less than a 1% decline in the property crime rate.

So even with property crime, fluctuations in the economy don’t explain much at all. But as Levitt continues the summary, he explains: “Violent crime does not vary systematically with the unemployment rate.” What if the unemployment rate isn’t the right measurement of the economy? “Studies that have used other measures of macroeconomic performance like wages of low-income workers come to similar conclusions (Machin and Meghir, 2000; Gould, Weinberg and Mustard, 1997).” Now, more astute readers may wonder why, if the hypothesis in the last paragraph about property crimes causing unemployment were plausible, violent crimes wouldn’t have the same effect. A possible answer is that generally speaking, far more of the kinds of people who would contemplate committing property crimes are potentially employable to begin with, whereas comparatively far more of the kinds of people who would contemplate committing violent rapes or murders already exhibit a demeanor or engage in other behaviors that make them less employable anyway.

Yet another point that demolishes the left–wing narrative: white–collar crime.

Isn’t it the left telling us that it’s the rich who are causing all of the real problems in the world in the first place? Aren’t they the ones telling us that it’s the rich white men running the world who are destroying the environment, lying to the public, committing embezzlement and collusion and fraud, fighting for policies that hurt the poor, and invading foreign countries to kill thousands of innocent people for no reason other than selfish gain?

Doesn’t that, in and of itself, contradict the notion that poverty “causes” crime?

Doesn’t that, in and of itself, prove that even liberals don’t actually believe that raising everyone’s economic welfare is all it takes to put an end to anti–social behavior and make people be nice to each other?

White–collar crime is interesting because of the way that it exposes the contradictory hole in the center of this set of beliefs, but it is also interesting for another reason: it shows, once again, that whatever makes different demographics commit crimes at different rates, poverty isn’t a good explanation—because the disparities in crime that exist on the street actually turn out to exist in corporate offices as well.

Obviously, white people do commit the majority of white–collar crimes, and the harm that can come from these acts shouldn’t be understated. The savings and loan scandal of the 1980s was almost exclusively committed by white people, and cost U.S. taxpayers over $470 billion—more than all the conventional bank robberies in U.S. history combined. White people have “disproportionately” achieved positions of economic power and influence, and the damage that people can do in these positions substantially outweighs what any number of street criminals are capable of. However, what the data reveal is that white people in these positions are nonetheless proportionally underrepresented amongst white–collar criminals—in other words, whites make up a larger percentage of those in corporate positions than of the general population, but they commit a smaller percentage of white–collar crimes than their share of that “corporate population” would predict. While ~99% of anti–trust and securities fraud offenses are committed by whites because they’re effectively the only ones in positions to commit them, non–whites are nonetheless found to be overrepresented in all the other corporate crimes they are in positions to commit.

These findings led Hirschi and Gottfredson to conclude in The Causes of White–Collar Crime that “When opportunity is taken into account, demographic differences in white collar crime are the same as demographic differences in ordinary crime.” But they weren’t, of course, referring solely to race: men are disproportionately likely to commit white–collar crimes relative to women as well, even once opportunity is taken into account. In fact, men were found to be even more disproportionately overrepresented in white–collar crime than they are in street crime. Likewise, the commission of white–collar crimes peaks around age 20 and falls by half by around age 40—and once again, this exactly fits the pattern of all other crimes. Whatever it is that causes men to commit more crimes than women, the young to commit more crimes than the old, and some ethnic groups to commit more crimes than others, it doesn’t look like poverty can be the explanation.

In 2014 came the final nail in the coffin for the “poverty causes crime” thesis. A Swedish study conducted by Amir Sariaslan was published which—for the first time—directly tested whether growing up in poverty contributes to crime, or whether there are other factors about the kinds of families that tend to end up poor which also cause them to breed crime. What made Sariaslan’s study uniquely insightful was the decision to take families which rose out of poverty, and compare the lives of children born and raised within those families before their rise from poverty with the lives of children born and raised within those same families after their rise from poverty.

The conclusion his research came to? “There were no associations between childhood family income and subsequent violent criminality and substance misuse once we had adjusted for unobserved familial risk factors.” Sariaslan’s study, in other words, had proven that growing up in poverty is not what creates one’s adult likelihood of committing violent crimes. Children who grow up in previously–poor families have exactly the same likelihood of committing crimes as children who actually grow up poor. The only conclusion we can soundly come to is that something else about poor families other than poverty itself must explain why their children go on to commit crimes.

Many conservatives think the root of social dysfunction is a lack of monogamy. 

Criminologist Anthony Walsh writes in Race and Crime: A Biosocial Analysis, for example:

“If racism were the culprit behind the difference in poverty rates, we would expect black families, regardless of their household composition, to be worse off than white families, regardless of their household composition. But this is not what we observe. The U.S. Census Bureau’s (McKinnon & Humes, 2000) breakdown of family types by race and income showed that non-Hispanic white single-parent households were more than twice as likely as black two-parent households to have an annual income of less than $25,000 (46% versus 20.8%). To state it in reverse, a black two-parent family is less than half as likely to be poor as a white single parent family. These figures constitute powerful evidence against the thesis that black poverty is the result of white racism, as well as powerful evidence that high rates of single-parenting is a major cause of family poverty for all racial/ethnic groups. The prevalence of single-parent families is so high in the black community that: “[A] majority of black children are now virtually assured of growing up in poverty, in large part because of their family status” (Ellwood & Crane, 1990:81).”

However, a study by Sara McLanahan found that “The dropout risk is 37 percent for those with never-married mothers and 31 percent for those with divorced parents, in contrast with the 13 percent risk of those from families with no disruption. Significantly, the risk for children who lost a parent to death is 15 percent—virtually the same as that for children from intact homes. Clearly, children of a widowed mother enjoy economic and other advantages over their peers from households headed by divorced or never-married parents.”

Emphasis mine. What are these “other” advantages?

The only truly plausible candidate for an answer is genes.

Commenting on these findings, Razib Khan (graduate student in genomics at UC Davis) writes:

“The null hypothesis which the media and the public intellectual complex sell us is that destabilized households lead to late life destabilization in individuals. What this misses is that destabilized individuals lead to destabilized households, and destabilized individuals also produce other destabilized individuals. In other words, one reason that kids whose parents didn’t stay together and are messed up is because they have the same crappy dispositions as their parents. They share genes with their parents.

This isn’t to deny that all things equal being in an intact nuclear family is preferable to being raised by a single parent. Ask anyone who grew up in a situation where they lost one of their parents to cancer or some such thing. But naive assumptions that simply increasing the marriage rate will reverse social dysfunction are going to be dashed against the reality that putting together explosive impulsive people under the same roof is not going to turn them into Ward and June Cleaver.”

If behavioral genetics or the idea of heritability is new to you, one of the best introductions to the basics can be found in Brian Boutwell’s article at Quillette, “Why parenting may not matter and why most social science research is probably wrong”; as well as the follow–up, “Heritability, and Why Parents (But Not Parenting) Matter”. The twin studies, adoption studies, and family studies that these conclusions are based on have been challenged for years, and they have stood up to all of these challenges remarkably well. One problem with any attempt to critique their validity is the odd fact that they all tend to converge on the same exact estimates of how heritable various traits are: if all of them are flawed in different ways, how is it that they all consistently land on the same results? It’s like when young earth creationists critique the validity of carbon dating—do you really think it’s just sheer coincidence that carbon dating and helioseismic dating converge on exactly the same estimates for the Earth’s age? I’ll be addressing more general background on twin, adoption, and family studies as well as the critiques that have been made of them in the future. For now, I’m going to take their validity for granted and simply discuss what the research has shown.

In men, studies find that anywhere between 40% to 60% of the likelihood of divorce is the result of “genetic factors affecting personality.” More generally, a person’s “sociosexual orientation” is clearly found to be very highly heritable. Individuals are classified on this scale as either “sociosexually restricted” or “sociosexually unrestricted”. An ordinary person might simply call them “chaste” or “promiscuous”: so–called “unrestricted” individuals are more likely to engage in sex earlier in relationships, engage in sex with more than one partner at a time, seek sex for its own sake, and engage in it in relationships involving less love, dependency, and commitment.

Twin, adoption, and family studies are able to separate the role of heredity, “shared environment” (which essentially means “parenting”), and “non–shared environment” (which essentially means everything else) in the development of various behavioral and personality traits. The conservative argument about monogamy is severely damaged not just by the fact that divorce and sociosexuality have such a large genetic component, but by the fact that all indications so far reveal almost zero effect on these traits from one’s parenting, even once the influence of genes is taken out of the picture: what’s left over after genes are accounted for falls almost entirely into “non–shared environment”—a category which roughly means “we don’t know what it is, but it isn’t genes or parenting.”
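As a rough sketch of how twin designs produce such a decomposition, Falconer’s classic formulas estimate the three components from the trait correlations of identical (MZ) and fraternal (DZ) twin pairs. The correlations used below are illustrative placeholders, not figures from the studies cited:

```python
def falconer(r_mz: float, r_dz: float) -> dict:
    """Classic ACE decomposition from twin-pair trait correlations."""
    h2 = 2 * (r_mz - r_dz)  # additive genetic variance ("heritability")
    c2 = r_mz - h2          # shared environment (roughly: parenting)
    e2 = 1 - r_mz           # non-shared environment + measurement error
    return {"h2": round(h2, 3), "c2": round(c2, 3), "e2": round(e2, 3)}

# Hypothetical correlations: MZ pairs correlate 0.5, DZ pairs 0.3.
print(falconer(r_mz=0.5, r_dz=0.3))  # {'h2': 0.4, 'c2': 0.1, 'e2': 0.5}
```

The intuition: MZ twins share twice the segregating genes of DZ twins but (by assumption) equally similar rearing environments, so doubling the correlation gap isolates the genetic share, and whatever MZ twins still fail to share falls into “non-shared environment.”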

As one of the studies quoted in the last paragraph states in its conclusion, “Consistent with genetic theory, familial resemblance [in sociosexuality] appeared primarily due to additive genetic rather than shared environmental factors.”

Shared environmental factors: that means parenting.

Another study compared children who experience their biological parents’ divorce with children who experience their adoptive parents’ divorce, and found that “adopted children who experienced their (adoptive) parents’ divorces exhibited elevated levels of behavioral problems and substance use compared with adoptees whose parents did not separate, but there were no differences on achievement and social competence.” While some behavioral problems (but not others) do result from experiencing one’s adoptive parents’ divorce, it isn’t the experience of divorce (or growing up in a single parent family) that molds a child’s core personality. The illusion that it does arises because, in most cases, a child both undergoes the experience of divorce and inherits his genes from the divorcing parents. But this illusion becomes untangled when adoptive children experience their adoptive parents’ divorce—some short term behavioral problems result, but not others; and most importantly, these behavioral changes do not appear to last the same way that they do in ordinary cases where a child undergoes his biological parents’ divorce.

Yet another study found that once the criminal behavior of single parents was actually controlled for, the association between single parent families and crime disappeared entirely. So the offspring of single parents are more criminal because their parents tend to be criminal.  And clearly, if being raised by one criminal parent produces poor outcomes for children, then being raised by two of them can’t be much better.
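The familial–confounding logic in this and the preceding paragraphs can be illustrated with simulated data. Everything below is fabricated for the sketch (using numpy): parental criminality drives both single parenthood and offspring crime, single parenthood has no direct effect at all, and yet the two correlate—until the confounder is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model: parental criminality causes BOTH single parenthood and
# offspring crime; single parenthood has no direct effect on the child.
parent_crime = rng.normal(size=n)
single_parent = 0.6 * parent_crime + rng.normal(size=n)
child_crime = 0.6 * parent_crime + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after partialling out z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

raw = np.corrcoef(single_parent, child_crime)[0, 1]
adj = partial_corr(single_parent, child_crime, parent_crime)

# raw comes out clearly positive; the adjusted correlation collapses
# to roughly zero once the confounder is held fixed.
print(f"raw r = {raw:.2f}, controlling for parental crime: r = {adj:.2f}")
```

This is the same failure mode discussed later in connection with the Sariaslan studies: research correlating income or neighborhood with crime that never holds the family constant.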

So the evidence suggests that the correlation between poverty and crime is taken care of by “unobserved familial risk factors”—but it also establishes very clearly that, in general, individuals within families are similar to one another in large part because of their shared genes, and very specifically not because of their shared upbringing. And it demonstrates this even in the specific cases of divorce and sociosexuality. Thus, poverty and crime can’t correlate with each other because each is the causal result of broken homes. Poverty, crime, and out–of–wedlock birth must therefore correlate with each other, to the extent that they do, because all three are the result of other things that tend to cause all three. But the only causes consistently found so far are genetic—and most of what isn’t genetic, as far as we’re able to tell, is simply random (again, for more, see Jayman’s Blog).

Of course, the correlation between single–parent families and crime actually has become weak, though it may have appeared stronger when the theory first originated. While it’s true that both crime and single parenthood rose together from around 1960 to 1990, this relationship decoupled during the massive crime decline of the 1990s—when crime fell tremendously even as single parenthood continued its decades–long gradual rise.

[Chart: U.S. violent crime rate and single–parenthood rate over time, rising together from 1960 to 1990 and decoupling during the 1990s.]

Now, many liberal commentators (like biological anthropologist Greg Laden) were too quick to pick up on the above chart as proof that there is no relationship between single parenthood and violent crime. To see why more evidence is needed before we can reach that conclusion, picture a chart with an x–axis titled “how long my stopped–up faucet has been running” that starts at 12:00pm and ends at 1:00pm, and a y–axis titled “how much water is spraying out towards my floor”—measured by quantifying the amount of water actually landing on my floor. My faucet stays on for the whole hour, but around 12:30pm the relationship stops being linear because, suddenly, the amount of water on my floor decreases. Does this chart refute the notion that, all else equal, keeping my stopped–up faucet running increases the amount of water spraying towards my floor?

Of course not. If the relationship decouples, we can’t immediately assume that keeping the faucet on was never increasing the amount of water spraying towards my floor. Maybe what happened is that around 12:30pm, I became more diligent at stopping it on its way towards my floor before it actually got there—say, because I put down buckets and I mopped up the floor with towels. If changes in how we tackle violent crime once it is already in existence took place during the 1990s, then perhaps we just became more efficient at fighting the crime that single parenthood was helping create. And in fact, something like this did happen: in 1972, only 158 out of 100,000 people were in prison or jail; by 1991, that had nearly doubled to about 311 out of 100,000 people.

So perhaps what this chart proves is only that the criminal justice system, by becoming more aggressive, also became much more effective at reducing crime that was being produced by, amongst other things, single parent families. Liberal commentators like Greg Laden are being profoundly dishonest when they wag their fingers across the aisle without first considering this possibility. Unlike the relationship between poverty and crime, there is at least a long stretch of time during which the two variables rise together. And unlike the relationship between poverty and crime, we don’t repeatedly see shifts in which single parenthood goes down (or up) and yet crime goes up (or down).

However, the best controlled analysis shows that, just like the relationship between poverty and crime, there is only a tiny relationship left over once other factors have been controlled for. A 2009 meta–analysis of previous meta–analyses which looked at individuals actually found that less than 1% of the population’s variation in criminality could be explained by family structure (although studies which looked at different world regions did find much higher geographic correlations between single parenthood and crime—in other words, in places where we find lots of single parents, we’ll also find lots of crime. The fact that we find a strong ‘geographic’ correlation combined with a poor ‘historical’ correlation supports the claim that what correlation does exist exists only because of some other “hidden variables” that tend to come together, but don’t come together necessarily).

Single–parenthood is proposed as an explanation of criminality, first and foremost, because rates of both single–parenthood and criminal behavior are higher in black populations. Yet, it is clear now that changes in single–parenthood rates do not actually correlate well with changes in rates of crime. So it turns out that high rates of single parenthood in black communities can’t explain why crime rates tend to be higher in these areas, either.

Surprisingly, of the poorest ten counties in the United States, none contain black majorities. Most of these are either Indian reservations (like Ziebach County, South Dakota, which is ~72% Native American with ~62% of the population in poverty), or Appalachian counties with large white majorities (like Owsley County, Kentucky, ~99% white with an annual median household income under $22,000). Despite the poverty rates across this second group of poor counties, however: “There’s a great deal of drug use, welfare fraud, and the like, but the overall crime rate throughout Appalachia is about two-thirds the national average, and the rate of violent crime is half the national average, according to the National Criminal Justice Reference Service.”

However, the population density of Owsley County is just 24 people per square mile.

In contrast, Chicago has a population density of over 11,000 people per square mile.

In his “Reflections on the Politics of Crime”, Tim Wise emphasizes a few citations which suggest that “concentrated poverty” (high population density in poor neighborhoods) is the real key to the link between poverty and violence: most poor whites live in places that are less poor overall than the places most poor blacks live (in other words, they live closer to wealthier people); most poor blacks live close to many other poor blacks.

But why should living closer to other poor people increase the likelihood of a poor person committing a violent crime? On the face of it, this seems like a rather ad hoc attempt at explanation: had we found that living in richer areas increases violent crime amongst the poor, it would have seemed just as natural to suppose that living in proximity to richer people both increases the relative indignity of being poor while surrounded by wealth, and increases the opportunities those poor persons have to commit crimes ‘worth’ committing. 

In Crime & Human Nature, Wilson and Herrnstein discuss a community that had high poverty and high population density and faced large amounts of racial discrimination, without concurrent high crime rates: “During the 1960s, one neighborhood in San Francisco had the lowest income, the highest unemployment rate, the highest proportion of families with incomes under $4,000 per year, the least educational attainment, the highest tuberculosis rate, and the highest proportion of substandard housing of any area of the city. That neighborhood was called Chinatown. Yet in 1965, there were only five persons of Chinese ancestry committed to prison in the entire state of California.

The low rates of crime among Orientals living in the United States was once a frequent topic of social science investigation. The theme of many of the reports that emerged was that crime rates were low not in spite of ghetto life but because of it. Though Orientals were the object of racist opinion and legislation, they were thought to have low crime rates because they lived in cohesive, isolated communities. The Chinese were for many years denied access to the public schools of California, not allowed to testify against whites in trials, and made the object of discriminatory taxation. The Japanese faced not only these barriers but in addition were “relocated” from their homes during World War II and sent to camps in the desert on the suspicion that some of them might have become spies or saboteurs.

There was crime enough in the nineteenth– and early–twentieth–century Oriental communities of California, but not in proportion to the Oriental fraction of the whole population. The arrest rate of Chinese and Japanese was higher in San Francisco than in any other California city during the 1920s, but even so Orientals were underrepresented by a factor of two, the Japanese more so than the Chinese. … What is striking is that the argument used by social scientists to explain low crime rates among Orientals—namely, being separate from the larger society—has been the same as the argument used to explain high rates among blacks. The experience of the Chinese and Japanese suggests that social isolation, substandard living conditions, and general poverty are not invariably associated with high rates of crime among racially distinct groups.”

So Tim Wise’s explanation really is deeply ad hoc and therefore fails, as well. Why would concentrated poverty lead to higher crime rates amongst blacks, but not amongst Asians? The answer must lie somewhere else.

In any case, there’s a detour worth taking here. One of Wise’s key citations is an essay by Johnson and Chanhatasilpa in Darnell Hawkins’ 2003 anthology, Violent Crime: Assessing Race and Ethnic Differences, and the mechanism by which they propose that concentrated poverty leads to crime is interesting. Pay close attention.

They open with a summary of previous research: “A community that shows collective and reciprocal willingness to combat crime and disorder (“you watch my back and I’ll watch yours”) will be far less likely than its spatial counterparts to experience crime …. social networks are the foundation of informal controls because they facilitate collective action through networks of friendship and kinship ties….” They introduce and define the term “community control” as “the capacity of communities to wield social control”, and they state the hypothesis that “structural disadvantages [such as concentrated poverty] increase homicide rates in communities through their deleterious impact on community control….” So how do they measure “community control”? They create their measurement out of three different things: “(1) the percentage of owner occupied housing units; (2) the rate of residential stability; and (3) the percentage of children living in husband–wife households.” (p.96)

Hold on just a second. The irony here is actually hilarious.

Tim Wise is on the record as attacking the notion that out–of–wedlock births play any part in social dysfunction in the black community because, as he explains, the actual birth rate amongst unmarried black women has fallen—it’s just fallen faster amongst married black women. And that means the percentage of births out–of–wedlock has risen, even though the actual number of births out–of–wedlock hasn’t. He’s right, but here is why that is still actually an idiotic objection: if the black community is becoming increasingly dysfunctional, what that means is that there are a greater percentage of dysfunctional individuals within the black community than there were before. And if it were true that single–parent families produced dysfunction, then a higher percentage of births to single parents absolutely would explain why the black community today is more dysfunctional, whether the absolute numbers fell or not. A smaller but more dysfunctional black community would still be a more dysfunctional black community.

The claim that out–of–wedlock birth is responsible for crime is, as we’ve seen, generally (though not completely) false. Tim Wise may object to it on the basis of an absurd fallacy that can be dispensed with in a single paragraph; but he does object to it—and yet his key citation for the claim that “concentrated poverty” is the real cause of crime actually argues that it does so, in large part … by increasing the percentage of out–of–wedlock births.  Did he not read far enough to notice that, or did he decide not to mention it to his audience on purpose?

Well, that strikes down one out of three measurements Johnson and Chanhatasilpa used in their essay—and Tim Wise would even presumably agree with me that the correlation between out–of–wedlock birth and crime is insufficient to prove that the former is the cause of the latter. Further, we have overwhelmingly good reasons from elsewhere (twin studies, adoption studies, comparison of the children of divorced parents with children whose adoptive parents divorce) to conclude that it isn’t. It should be clear enough that a correlation between residential stability or home ownership and crime raises exactly the same kinds of issues. Criminals are likely to be bad residents, and not only are bad residents far more likely to get themselves thrown out of their apartments, but non–criminals are likely to want to move away from them as well. Both of these effects would contribute to low rates of “residential stability”. What evidence can they provide that the effect of residential instability causing crime is stronger than the effect of crime causing residential instability? So far as I can tell, they have none.

And that brings us back to the research of Amir Sariaslan. The 2014 study previously mentioned controlled for familial confounding in the association between childhood income levels and adult criminality and substance abuse, and found that the association disappeared completely. A 2013 study conducted by Sariaslan and a similar team did the same thing for neighborhood deprivation and young adult criminality and substance abuse. Once again, the team found that when they “adjusted for unobserved familial confounders, the effect was no longer present…. Similar results were observed for substance misuse. … the adverse effect of neighbourhood deprivation on adolescent violent criminality and substance misuse in Sweden was not consistent with a causal inference. Instead, our findings highlight the need to control for familial confounding in multilevel studies of criminality and substance misuse.”

In other words, criminal behavior runs in families. And the association between poverty in childhood or in neighborhoods and crime disappears completely once this is controlled for. The vast majority of research on these questions in social science has simply ignored this and failed to control for familial confounding entirely.

So is the problem with criminal families genes or parenting? The real answer, at last.

Just as we described earlier that twin studies, adoption studies, and family studies all support the idea that the risk of divorce, and promiscuity in general, are heavily influenced by genes but influenced almost not at all by parenting, so the same thing goes for criminality. Biological children of criminals adopted into non–criminal adoptive homes have approximately the same risk of becoming criminals as children born to criminal parents in general do, rather than the risk of becoming criminals that children raised by non–criminal parents in general do. And when we calculate how much more likely an identical twin is to have a criminal status similar to their twin’s and we compare that to the likelihood that a fraternal twin will have a criminal status similar to their twin’s, not only do we get numbers that line up with exactly what we would expect if there were a genetic component at play, we get estimates of the heritability of criminal tendencies that line up exactly with what was already being found by the adoption studies. And so on.

The truly important point here is this: we know that violent and criminal behavior are heritable, regardless of how extensive our knowledge is of what the particular genes are or how they make their contribution to criminality. We know this by the same means we know that everything else we know to be heritable is heritable: by studying whether adopted children become more like their adoptive or biological parents as they grow into adults, by measuring how much more similar identical twins are on given traits than fraternal twins, by measuring how similar identical twins who were raised apart are compared to random members of the population, and so on.

The twin studies find that: “Genetic factors, but not the common environment, significantly influenced whether subjects were ever arrested after age 15, whether subjects were arrested more than once after age 15, and later criminal behaviour. The common environment, but not genetic factors, significantly influenced early criminal behaviour. The environment shared by the twins has an important influence on criminality while the twins are in that environment, but the shared environmental influence does not persist after the individual has left that environment.” What this means is that while being raised by criminal parents might make a child more likely to commit a criminal action as a very young teen, it has zero impact on a child’s likelihood of becoming a criminal as a (young) adult. Meanwhile, exemplary adoption studies find that adoptive children with criminal biological mothers have a 50% chance of later criminal behavior, compared to just 5% for the adopted children of non–criminals. Again, some of the best introductions can be found at Quillette: How Criminologists Who Study Biology Are Shunned By Their Field, and Criminology’s Wonderland: Why (Almost) Everything You Know About Crime is Wrong.

It’s not necessarily clear just what is being inherited when criminal tendencies are passed on, and we’re far from any complete knowledge of the range of genes involved. However, science is increasingly closing in on the answers.

We have identified a variety of genes that influence biological features which we know to play a role in criminal behavior. We also know that some of these genes are present in different ethnic groups in almost exactly the proportions at which these populations are represented in violent crime (higher in blacks, and lower in Asians).

In 1993, we discovered a condition now known as Brunner syndrome. Brunner syndrome was first identified in a single Dutch family, all of whom were found to react to perceived provocation with extreme aggression; 5 were arsonists, 5 had been convicted of rape and/or murder. It turned out that all 14 males originally studied had a mutation that caused the complete absence of an enzyme called MAOA, which is responsible for breaking down neurotransmitters inside of the brain, including dopamine and adrenaline. Other research soon confirmed that you could even knock this same gene out in mice and produce similar kinds of aggression.

While Brunner’s syndrome is incredibly rare, with just three families across the world now known to contain victims of the disease, the rest of the human population has genes coding for either low, medium, or high levels of MAOA activity (either the 2–repeat, 3–repeat, or 4–repeat alleles, respectively). [Note: the established convention is to use the term “MAOA–L” to refer to either the 2–repeat or the 3–repeat genes, but by grouping the “low” and “medium” activities together, this convention obscures just how significant the difference between all three really is.]

Early research found that people with low–activity MAOA genes were more violent if they had difficult upbringings—but as the research continued, it confirmed that people with low–activity MAOA genes were indeed significantly more violent regardless of their childhood experiences. Other research, in fact, continued linking the same gene to things like credit card debt and even obesity—all behaviors which revolve around impulsiveness.

The 2–repeat version of the gene was found to double the risk of violent delinquency in young adulthood compared to the other two variants. And guess what? The 2R allele is found in “5.5% of Black men, 0.1% of Caucasian men, and 0.00067% of Asian men”—which just so happens to correspond eerily to ethnic rates of violent crime. And lest anyone worry that low activity MAOA genes merely correlate with violence because black Americans are more violent and also just coincidentally happen to have more of them, other research has looked at black Americans with and without low activity genes and still found substantially more violence in 2R carriers.

(For rebuttal of common criticisms of MAOA studies, see the archives of The Unsilenced Science).

Similarly, the potential “triggers” for someone with low–activity MAOA genes turning violent (particularly carriers of the 3–repeat, which is somewhat less associated with violence on its own) expanded to include testosterone—and testosterone levels differ by race as well. A 1986 study found that the “twofold difference in prostate cancer risk” between black and white men could be explained by the “15% higher testosterone level” found in Black men.

But circulating levels of testosterone are not the only variable of interest. Many other factors, including enzyme activity and hormone exposure in utero, influence the impact of circulating hormones as well—and on these measures, too, we find generally consistent patterns in which Black subjects have the most androgenic hormone profile while East Asian subjects have the least, with White subjects somewhere in between. A 1992 study found that “white and black men had significantly higher values of 3 alpha, 17 beta androstanediol glucuronide (31% and 25% higher, respectively) and androsterone glucuronide (50% and 41% higher, respectively) than Japanese subjects”—these being metabolites that index the enzymatic conversion of testosterone into the more physiologically active hormone DHT.

Even further, “It Is Not Just About Testosterone” tells us that: “Vasopressin synthesis and the aromatization into estradiol both serve to facilitate testosterone’s effects.” So, guess what? “Vasopressin secretion in normotensive black and white men and women on normal and low sodium diets” found that “24-h urinary excretion of vasopressin was significantly (P<0·05) higher in men than in women and higher (P<0·05) in black than in white subjects.” And other studies confirm that Black children are exposed to higher hormone levels in utero—this one found “higher testosterone [and] ratio of testosterone to SHBG … in African–American compared to white female neonates”.

This last study is very significant.

We know that hormone exposure in the womb has drastic impacts on future behavior: Girls with congenital adrenal hyperplasia, a condition that only briefly spikes the level of hormones a developing girl is exposed to, have significantly more masculine behavioral traits despite the fact that there is no evidence that parents treat them any differently, or that there is anything different about them or the way they are “socialized” other than excess prenatal male hormone exposure. As found in a 2003 study of “Prenatal androgens and gender-typed behavior”, girls with CAH “were more interested in masculine toys and less interested in feminine toys and were more likely to report having male playmates and to wish for masculine careers. Parents of girls with CAH rated their daughters’ behaviors as more boylike than did parents of unaffected girls. A relation was found between disease severity and behavior indicating that more severely affected CAH girls were more interested in masculine toys and careers. No parental influence could be demonstrated on play behavior, nor did the comparison of parents’ ratings of wished for behavior versus perceived behavior in their daughters indicate an effect of parental expectations. The results are interpreted as supporting a biological contribution to differences in play behavior between girls with and without CAH.”

There is no reason to think that if out–of–wedlock birth and violent crime were to correlate due to genes, this would have to be because both behaviors are influenced by the same genes. It could be that the separate genes which contribute separately to each behavior just happen to correlate as well, with people who carry the first set of genes often carrying the second. However, it is at least plausible that the factors briefly identified here (testosterone and MAOA) actually could play a common role in producing both violent behavior and out–of–wedlock birth.

To my knowledge, outside the finding that persons with Brunner’s syndrome can be prone to hypersexuality, MAOA has never been studied in relation to sociosexuality directly. However, it seems fairly safe to infer that the kind of impulsivity which would lead a person to rack up credit card debt, or eat their way to obesity, or commit impulsive violent crimes, would also leave them prone to impregnate someone they haven’t married or end up divorced. And as far as testosterone, the studies on that one are clear: “people’s orientations toward sexual relationships, in combination with their relationship status, are associated with individual differences in testosterone.” More specifically, in “chaste” individuals with a restricted sociosexual orientation, testosterone rises when single but falls after acquiring a partner—but this doesn’t happen for those with an “unrestricted” orientation: as this study describes it, “partnered men who reported greater desire for uncommitted sexual activity had testosterone levels that were comparable to those of single men; partnered women who reported more frequent uncommitted sexual behavior had testosterone levels that were comparable to those of single women.”

Beyond that, we know that psychopathy both has a biological basis (psychopaths have a lower physiological response to their environments; in other words, it takes more to stimulate them) and is heritable, and we know that psychopaths “are twenty to twenty-five times more likely than non-psychopaths to be in prison, four to eight times more likely to violently recidivate compared to non-psychopaths, and are resistant to most forms of treatment” with “93% of adult male psychopaths in the United States in prison, jail, parole, or probation.”

We also know psychopaths are more likely to seek casual sex and avoid relationships—inevitably producing greater out–of–wedlock birth rates. The evidence, then, that unstable childhood environments produce criminals is weak—while unstable environments may raise the risk of criminal behavior during childhood, that effect only barely lasts into adulthood, if at all. In contrast, the evidence that there are genes which predispose a person to commit violent crimes, produce children out of wedlock, and divorce, and that these are passed on genetically to the children produced by these relationships regardless of their upbringing, is very strong. The facts aren’t particularly favorable to religious social conservatives, liberals, or men’s rights activists: poverty doesn’t seem to be the primary cause of crime (for liberals), but neither is single–parenthood (for religious social conservatives) or a lack of fathers (for MRAs).

Could the rate of psychopathy differ by race as well? I don’t know, but I was able to find some small indication that it might: judgment and the ability to discern smells are both localized to the frontal lobes of the brain, and research has linked poor sense of smell to psychopathy and aggression. Meanwhile, other research finds that men on average have a worse sense of smell than women—and blacks on average have a worse sense of smell than whites. (Update: See Razib Khan’s discussion of Lynn’s 2002 paper ‘Racial and ethnic differences in psychopathic personality’ and Skeem’s 2004 critical meta–analysis ‘Are there ethnic differences in levels of psychopathy?’).

Another study, this time in Finns, found that in addition to MAOA–L, a mutation of another gene known as CDH13 was heavily linked to extreme violent crime—and the effect of combining the two genes was more than additive. Meanwhile, a study in white and Hispanic Americans linked CDH13 to “a younger age of sexual debut”.

To reiterate, we don’t fully understand what is being inherited when criminality is inherited. But we know that criminality is highly heritable, no matter how well we do or do not understand the mechanisms of that heritability, through the converging results of years of twin, adoption, and family studies which all produce the same conclusion. The truth of this knowledge does not depend on the relevance of MAOA, testosterone, or psychopathy genes in particular, although I happen to think that very strong cases can be made for all of them. Likewise, we don’t fully understand what it is that is being inherited when promiscuous tendencies are being inherited, but we know that promiscuity is highly heritable all the same. But even the cursory evidence that behavioral genetics has produced so far suggests several known mechanisms that just might not only be the culprits, but might even explain why some behaviors (like out–of–wedlock birth) tend to correlate with others (like violent criminality).

Divorce and out–of–wedlock birth may produce behavioral problems, but for the most part sociosexual behavior in parents and children correlates because of genes, not experiences; and the behavioral problems that result from divorce and out–of–wedlock birth per se appear not to last beyond childhood. There is perhaps a tiny impact of poverty on property crime—but none on violent crime. Genes are not deterministic, but the strongest verifiable impact by far out of all measurable impacts is that of genetic heredity on behavior.

A few disclaimers would, in an ideal world, be able to go without saying: the vast majority of men are neither violent criminals nor psychopaths; likewise, the vast majority of black people are neither violent criminals nor psychopaths. Nowhere in any of this reasoning should license be taken for blanket prejudice against all men, or against all blacks. The baseline rate of risk matters. Even if a man (or black) is 10x more likely to murder you on the street than a woman (or white), if your actual risk of being murdered by a man (or black) is 0.0001% and your actual risk of being murdered on the street by a woman (or white) is 0.00001%, then this hardly justifies viewing all men (or blacks) with suspicion and giving all women (or whites) a free pass. It is a minority of all people who are violence–prone. It is a minority of all men who are violence–prone; and it is a minority of all blacks who are violence–prone. But the minority of men who are violence–prone appears to be larger than the minority of women who are violence–prone, and the minority of blacks who are violence–prone also appears to be larger than the minority of whites, which in turn appears to be larger than the minority of Asians, who are violence–prone. And though no one explanation reveals everything, the strongest explanation of all explanations we do have is hereditarian. If stating these facts makes me racist, then it apparently also makes me twice as sexist—against myself—because the gap between men and women in violent crime is even larger than the gap between blacks and whites.
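The base–rate arithmetic in the disclaimer above is worth making explicit. The risk figures are the paragraph’s own hypothetical numbers, not real statistics:

```python
# Hypothetical figures from the paragraph above: a 10x relative risk
# can coexist with a negligible absolute risk for every group.
risk_a = 0.0001 / 100   # "0.0001%" expressed as a probability: 1 in a million
risk_b = 0.00001 / 100  # "0.00001%" as a probability: 1 in ten million

relative_risk = risk_a / risk_b
print(f"relative risk: {relative_risk:.0f}x")
print(f"absolute risk for the higher-risk group: {risk_a:.0e}")
```

A tenfold relative difference sounds dramatic, but neither absolute risk comes anywhere near justifying suspicion of any individual.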

Is there anything that we can do with this sort of information? Much of the resistance that will form against an explanation of undesirable social phenomena that gives genes a larger role than environment, I believe, comes from the impression that if genes are responsible, then there’s nothing we can do about it—it seems to be a recipe for resignation. And even if eugenics could, in theory, improve human outcomes and behavior, few of us would want to come anywhere near trusting the State with the amount of power it would take to attempt it (I certainly wouldn’t).

Fortunately, it isn’t true. I’ll be discussing in the future the various ways in which the ordinary policies we already contemplate can be evaluated for their “eugenic” and “dysgenic” impacts—and how the policies that would turn out to be most beneficial in light of this analysis fit neatly into neither “conservative” nor “liberal” boxes. For example, if IQ and conscientiousness are heritable traits, then establishing maternity leave—a stereotypical “liberal” ideal—may help encourage women with higher IQs and conscientiousness to have more children, rather than forgo having them for the sake of their careers—thus helping increase the IQ and conscientiousness of the general population, and no specter of violent Nazi concentration camps need be feared. Some warrior gene researchers have suggested another idea they think their evidence warrants: preventing former violent criminals from purchasing alcohol, because the association between MAOA–L and violence is often mediated by alcohol.

Whether that proposal would actually be effective or not, it’s an excellent example of the kind of idea we can start to think about, if the case laid out here is true. For another example, we can use our knowledge of the relationship between criminal behavior and genes to base sentencing lengths on verifiable statistics about the risk of re–offense. If anyone is afraid that an idea like this could be prone to abuse, they should remember that the current system is already rife with abuse: judges hand down favorable rulings around 65% of the time soon after either of their two daily food breaks, but that number falls to nearly 0% in the stretch before the next break. Alternatives to a system that condemns a person because of how recently a judge ate lunch can hardly get much worse.

Just as identifying the environment as the cause of some phenomenon can allow us to start figuring out interventions which help reduce the impact of that environmental cause, so identifying genes as the cause of some phenomenon can allow us to start targeting interventions there, too. When it comes right down to it, the fear that acknowledging biological roots to human behavior must end in violent dystopia is simply bizarre, as soon as we consider the fact that so many of the worst massacres of the 20th century were committed by blank slatists who believed exactly the contrary—that human nature could be reformed to their will through social control.

As Christopher Szabo at Intellectual Takeout asks, “Why are we so understanding towards the crimes of communism?” Including the death toll from famines, many of which were in fact engineered intentionally, the death rate from Maoist communism was about 2 million killed per year (across 38 years for a total of around 77,000,000). In contrast, the death rate from Hitler’s Reich was about 1.75 million killed per year (across 12 years for a total of around 21 million). And as Mao literally wrote, “In class society there is only human nature of a class character; there is no human nature above classes.” So if acknowledging a biological basis to human behavior is supposed to be discredited because it evokes the massacres of the Nazis, why shouldn’t denying it be discredited because it evokes the massacres of the Communists? I don’t seriously believe that sociologists who promote social constructionism should be tarred by association with Communist genocides, but unless you want to admit that that whole line of reasoning is bullshit, turnabout is fair play.




A Footnote to “Is the War on Drugs Racist?”

In that essay, I quoted Frank Zimring’s position on the impact of the war on drugs on violent crime as follows: He also argues (pp.90–99), correlating hospitalizations and deaths from overdose with changes in the known street price, that overall use of cocaine appears to have remained relatively constant across the period of time in which New York City’s crime drop took place. Yet, he notes (pp.91–92) that “The peak rates of drug–involved homicide occurred in 1987 and 1988”—the same years in which 70% of arrestees were found to test positive for cocaine—“and the drop in the volume of such killings is steady and steep from 1993 to 2005. … The volume of drug–involved homicides in 2005 is only 5% of the number in 1990.” Meanwhile, whereas 70% of arrestees in the late 1980s tested positive for cocaine, by 1991 (see table 2 on page 14) this number had fallen to 62%—and by 1998 it had fallen all the way to 47.1%. By 2012 (see figure 3.7 on page 45) it fell even further, to 25%.

What happened here? Why would drug use amongst arrestees fall if drug use as a whole remained constant? Zimring has an important answer: “If I’m a drug seller in a public drug market and you’re a drug seller in a public market, we’re both going to want to go to the corner where most of the customers are. But that means that we are going to have conflict about who gets the corner. And when you have conflict and you’re in the drug business, you’re generally armed and violence happens. … Policing … [helped drive] drug trade from public to private space. … [this] reduced the risk of conflict and violence associated with contests over drug turf. The preventive impact [of these policies] on lethal violence seems substantially greater than its impact on drug use. … [And] once the police had eliminated public drug markets in the late 1990s, the manpower devoted to a special narcotics unit [whose funding had increased by 137% between 1990 and 1999] dropped quite substantially [and yet the policies’ impacts on homicide rates remained].”

However, Zimring is clearly incorrect that the drug war reduced drug–involved homicides without reducing drug use as a whole—the drug war reduced drug use, too.

Quoting James Q. Wilson in the Wall Street Journal in 2011: “Another shift that has probably helped to bring down crime is the decrease in heavy cocaine use in many states. … Between 1992 and 2009, the number of admissions for cocaine or crack use fell by nearly two-thirds. In 1999, 9.8% of 12th-grade students said that they had tried cocaine; by 2010, that figure had fallen to 5.5%.

What we really need to know, though, is not how many people tried coke but how many are heavy users. Casual users who regard coke as a party drug are probably less likely to commit serious crimes than heavy users who may resort to theft and violence to feed their craving. But a study by Jonathan Caulkins at Carnegie Mellon University found that the total demand for cocaine dropped between 1988 and 2010, with a sharp decline among both light and heavy users. … Drug use among blacks has changed even more dramatically than it has among the population as a whole. As Mr. Latzer points out—and his argument is confirmed by a study by Bruce D. Johnson, Andrew Golub and Eloise Dunlap—among 13,000 people arrested in Manhattan between 1987 and 1997, a disproportionate number of whom were black, those born between 1948 and 1969 were heavily involved with crack cocaine, but those born after 1969 used very little crack and instead smoked marijuana.

The reason was simple: The younger African-Americans had known many people who used crack and other hard drugs and wound up in prisons, hospitals and morgues. The risks of using marijuana were far less serious. This shift in drug use, if the New York City experience is borne out in other locations, can help to explain the fall in black inner-city crime rates after the early 1990s.”

Thus, because “drug use among blacks has changed even more dramatically than it has among the population as a whole”, if the black:white ratio of those in prison for drug use is larger than the black:white ratio of drug users in the general population, this may be because many of the disproportionately black cocaine users have already been arrested—to the benefit of the black population as a whole.

Similarly: “In a recent article in the American Sociological Review, my colleagues and I [Gary LaFree] found that a proxy measure of crack cocaine had a greater impact on big city crime than more common measures like unemployment.”

A 1994 study by Eric Baumer found that “… arrestee cocaine use has a positive and significant effect on city robbery rates, net of other predictors. The effect of arrestee cocaine use on homicide is more modest … [but] cocaine use elevates city violent crime rates beyond levels expected on the basis of known sociodemographic determinants.” And a 1997 Justice Department study found that “there was a very strong statistical correlation between changes in crack use in the criminal population and homicide rates … In five of the six study communities, … homicide rates track quite closely with cocaine use levels among the adult male arrestee population.”


On “Privilege” in Prison Sentencing: A Crash at the Intersection

Throughout this series, I’ve shown interest in taking popular claims of “institutional” racism against blacks and exploring evidence which reveals that, contrary to common impressions, little to no racism is involved in the institution in question at all. In Are African–Americans Disproportionately Victimized By Police?, I provided an original analysis by asking exactly how many violent crimes white and black suspects have to commit before one white or black suspect interacting with police is shot—and there I found that, per crime and therefore per encounter with police, whites are in fact almost two times more likely to be shot by police than blacks. It turns out that black suspects only appear to be more likely to be shot by police because they are more likely to interact with police to begin with, because they commit more violent crimes. Control for that, and the policing bias against blacks not only disappears; it actually reverses and counts against whites. (And I’d like to note that even if you already knew that blacks commit more crimes than whites, it wasn’t a given that these numbers would be sufficient to produce such a reversal—it could have been the case that part of the gap in police shootings was explained by interaction rates, and part by racist bias. But in fact, there is absolutely nothing left over for racist bias to account for once those numbers are run; and both the raw numbers corrected for violent crime rates and the best experimental study data in fact reveal a bias against whites—or, in other words, in favor of blacks.)

Similarly, in Violence Against Women and Violence Against Truth, I exposed the absurdity of a mainstream feminist publication which claimed that the fact that rates of assault and murder have fallen relatively more for men than they have for women over the past few decades is evidence that “deep structural gender inequities … marginalize women” without making any acknowledgment of the fact that across recent history 3 out of 4 victims of violent crime have been men to begin with. If anyone is being “marginalized” here, it is men: even after these disproportionate reductions in victimization by violent crime of men and women which Ms. Magazine bemoans as women failing to have “anywhere near parity rights to physical freedom and security”, in 2013, for every 2198 men victimized by violent crime, there were only 2097 women. The fact that even mainstream feminist publications can get away with a claim like this shows just how dangerously distorted so–called “feminist” reasoning can be—and how little other self–proclaimed feminists are doing to correct this kind of reasoning when it appears within their midst.

So, I might have created the impression that my aim throughout these posts is simply to debunk all claims of racism or sexism whatsoever. In this post, I’d like to address an issue that gives me the opportunity to demonstrate that this is not the case—while continuing to demonstrate the solidity of my actual thesis: prison sentencing length.

The worldview of the left–wing social justice warrior could well be summarized by a speech given by the novelist John Scalzi in which he claimed that if life were a role–playing game, “white male” would quite simply be the easiest difficulty setting, whereas “minority female” would be equivalent to setting the difficulty to “hardcore.” According to this scheme, one person can still have more success on the higher difficulties than another person has on a lower one, but there is a linear and one–dimensional increase in penalty regardless of one’s individual skills and talents as one moves away from “white” and “male.”

The model I’d like to propose, in contrast to this, is that setting one’s race and gender in life is less like choosing a difficulty setting in a video game than it is like choosing a class: Mage, Archer, or Warrior?

Each character type will have different advantages and disadvantages against other character types in different contexts—and especially because each class is born in a different kind of environment in the first place, these differences can’t really be measured in a way that allows them to be ranked against each other on any single–dimensional scale. In one environment with high elevation weakening the strength of the mage’s connection to the earth and thus his power but with steep mountains making travel more difficult yet allowing the archer to perch himself atop a steep hill, the warrior may win against the mage and lose to the archer. But in another environment with level plains and a low elevation, the warrior may win against the archer who can’t find high ground on the plains, but lose to the mage whose connection to the earth is at its peak.

Which class is “better”? There really is no objective way to say.

Looking at things this way need not require us to dismiss all claims of discrimination against women and minorities—but it also allows us to reject the ridiculous view that white males are simply sitting snidely together atop a superficially measured social pyramid of unjust privilege. The question is then not whether there are ever any penalties for being non–white or female in any contexts or circumstances, but rather how they compare in scale to the penalties which can also exist for being white or male in others—and my claim is that once we perform this kind of analysis more rigorously than mainstream sociology traditionally has, these advantages and penalties are roughly similar: in other words, no one comes out as the obvious winner of the “Oppression Olympics”.

We have already seen that the popular view that whites have a profound advantage over blacks when facing police in fact has it exactly backwards: once the fact that black suspects simply do commit more violent crime is taken into account, it turns out that any given black suspect facing police has an advantage over any given white—the white suspect is actually almost twice as likely to end up shot. Similarly, in part 3 of the “Is Dylann Roof ‘White Like Me?’” series I discussed the fact that not only are men more likely to be raped in prison than women are, but white men are much more likely to be raped than non–whites—and almost always by black men, according to mainstream sociological research stretching back for decades. Like it or not, these are “institutional” forms of suffering which penalize people facing police or spending time in prison for being white and/or male.

But the analogy of choosing a class in a role–playing game also allows me to clarify how my view differs from that of many conservatives, and of so–called men’s rights activists, who share my interest in this same set of facts: in sum, these perspectives often simply invert John Scalzi’s argument without rejecting its fundamentals. According to their schemes, being white or male works just the same way that being non–white or female does in the left–wing social justice warrior’s scheme: it gives one disadvantages, plain and simple. My class analogy should make it clear that I consider this view misguided and wrong for exactly the same reasons that I consider the left–wing view misguided and wrong: no demographic is an unequivocal victim in modern society; the advantages and penalties faced in each case are simply different. So, this post will start with me granting the existence of a case of “institutional” racism.

This brings me back to the matter of sentencing lengths. Once the cops have already shown up, and once we’ve already arrived at a conviction, how long are the sentences that different people tend to receive for similar crimes? Is there a penalty (or privilege) for the convicted criminal based on his (or her) race?

From the looks of it, there is. A 2013 study conducted by the U.S. Sentencing Commission found that blacks’ sentences were about 15% longer than whites’. Much of the early research which found much higher disparities than this failed to take account of the fact that black defendants, on average, have longer rap sheets—which clearly factors into judicial decisions. A 2012 study conducted by Sonja Starr which controlled meticulously for previous record found a gap of about 10%, to the disadvantage of blacks. Another 2013 study, conducted by Beaver et al., found “no evidence of racial discrimination in criminal justice processing” once defendants were matched for self–reported lifetime violence and IQ—so perhaps what often happens is that defendants of lower intelligence behave differently in the courtroom, and judges pass sentences in response to these behavioral differences, not race itself.

But what often happens needn’t be what always happens, and it would be hasty to dismiss the entire literature on the sentencing gap out of hand without a deeper investigation. In a review of the evidence compiled for the period between 1980 and 2000, Tushar Kansal of The Sentencing Project writes that “32 state-level studies contained 95 estimates—meaning 95 different ways in which these studies sought to determine whether sentencing decisions were biased—of the direct effect of race on sentence severity[…, and] 43.2% [of these] indicated harsher sentences for blacks… 8 studies of the federal system contained 22 estimates of the direct relationship between race and sentence severity[…, and] over two-thirds (68.2%) [of these] indicated harsher sentences for blacks ….” In other words, for every ~9 state or federal studies which fail to find a racial disparity against blacks, there are 11 studies which do. The most plausible explanation is that rather than it being a coincidence that 55% of all studies find a racial disparity against blacks, with 45% mostly finding null results, there is a racial disparity—in some but not all times and places. Kansal concludes with an admission that “Despite the findings of the cited studies in the area of direct racial discrimination, a number of factors indicate that the presence of direct discrimination is not uniform and extensive. Some of the state-level studies found no evidence of direct racial discrimination, and many of those that did find evidence of direct discrimination concluded that it exercised relatively modest effects, increasing the likelihood of a minority being sentenced to prison by only a few percentage points.”

To obtain an estimate of how large the typical racial sentencing gap is, then, Sonja Starr’s 2012 finding of a 10% gap between blacks and whites represents the middle ground—less than the U.S. Sentencing Commission’s 15%, but more than the few–percentage–point, null, or reverse (that is, favoring blacks and disfavoring whites—of which there were 6) findings reported by 44.3% of the Sentencing Project review’s collection of studies.

Now, what happens when we ask the same question about gender?

“Intersectional theory” is the term for the attempt within sociology to account for the fact that race and gender contribute interacting effects to one’s social disadvantages, rather than investigating each factor in isolation. The consensus is, of course, that being non–white and being non–male are always interacting disadvantages—so that in reality, as one Tumblr author who describes herself as a “genderfluid femme Black mixed bitch” writes, what intersectionality was really always about is “exposing the ways Black women are caught up in multiple systems of oppression … it is meant to help Black women understand their experiences in a white supremacist patriarchal culture [e.g., one which is set up to privilege whites and men, and therefore especially white men, and to punish non–whites and non–males, and therefore especially minority women] like the U.S.”

The real findings, however, take the intersectional logic and turn that expectation flat on its head.

From 2001–2006, three studies ([1], [2], [3]) all found a gender sentencing gap of about 10%.

But guess who that gap favored?


Already, the gender sentencing gap favoring women is at least as large as the racial sentencing gap—and therefore just as important. The second of these studies, published by Max Schanzenbach, notes something interesting: “The findings regarding gender in the case of serious offenses are quite striking: the greater the proportion of female judges in a district, the lower the gender disparity for that district … These results are hard to square with the suggestion that unobserved accomplice status or blameworthiness is behind the gender disparity. [However,] appointing more black judges to the bench is unlikely to reduce sentencing disparities for black offenders who commit serious crimes.” In other words, the three takeaways here are: (1) the theory that men organize society to create distinctly male privileges appears, in the case of prison sentencing, to be absolute bullshit, because when men are given power to hand out sentences, they privilege women, not men; (2) it is unlikely that this sentencing gap is a result of other unobserved variables differentiating the crimes committed by women and men, because male judges give female offenders privileges in sentencing that female judges don’t; and (3) this does not appear to be the case for black/white judges handing out sentences to black/white offenders—black and white judges do not appear to sentence black or white defendants very differently, which at least suggests that most black and white judges are usually responding to the severity of crimes committed when they sentence black and white defendants differently.

By these measures, again, the gender gap in sentencing favoring women is exactly as large as the racial gap in sentencing favoring whites. However, in the United States v. Booker ruling passed down by the Supreme Court in 2005, it was decided that sentencing guidelines requiring that men and women who committed the same crimes and held similar criminal records be given equally long sentences were to be considered recommendations, rather than requirements—and the evidence shows that the gender sentencing gap grew as a result of this decision. A 2007 study by Supriya Sarnikar found an even larger figure than the aforementioned three: “We find that women receive prison sentences that average a little over 2 years less than those awarded to men. Even after controlling for circumstances such as the severity of the offense and past criminal history, women receive more lenient sentences. Approximately 9.5 months of the female advantage cannot be explained by gender differences in individual circumstances. In other words if women faced the same sentencing structure as men, women would on average receive 15.4 months less prison time than men rather than 24.9 months less prison time.”

But importantly, Sarnikar makes this note in her conclusion: “[O]ur data permit us to examine only the end stage of the criminal justice system. A more comprehensive treatment would take account of the fact that before arriving at the judge for sentencing, a defendant must also pass through a jury or possible plea bargain with a prosecutor.”

And that brings us back to Sonja Starr.

In 2012, Starr decided to present an analysis of the gender sentencing gap which accounted for decisions made in the earlier stages of the criminal justice system, just as she had done recently to arrive at her 10% figure for the racial sentencing gap. Her study incorporated data collected from the U.S. Marshals’ Service (USMS), the Executive Office of U.S. Attorneys (EOUSA), the Administrative Office of the U.S. Courts (AOUSC), and the U.S. Sentencing Commission (USSC) spanning from 2001–2009 (containing periods both before and after the equal sentencing for equal crime guidelines were ruled recommended rather than mandatory) and controlled extensively for the severity of crimes committed and the previous criminal records of offenders.

The result? On average, men receive a sentence that is 60% longer than the one a woman would receive for the same crime—and Starr notes that even this number is, in fact, an underestimate, because the average male prison sentence is driven downwards by the presence of men receiving relatively short sentences in cases where women would receive no sentence at all. But already, this is six times as large as the racial sentencing gap of 10% which Sonja Starr had recently found by applying exactly the same methodology to race.

To be specific, the gender gap varies by race: among white offenders, men receive sentences 51% longer than women’s; among black offenders, men receive sentences 74% longer than women’s. But whereas this drags black men’s sentences down compared to white men’s, it also pulls black women’s back up—with the end result that black women have the largest advantage in prison sentencing length of all! So much for intersectionality as a method for “exposing the ways Black women are caught up in multiple systems of oppression … in a white supremacist patriarchal culture.” How long before we hear social justice warriors asking black women to check their prison sentencing privilege?

To help visualize this, I’ve created a very oversimplified chart to represent the rough amount of “privilege” each group receives in prison sentencing by starting everyone at 500 points, creating a racial gap of 50 points (10% of 500; so add 25 points if white, and subtract 25 points if black), a white gender gap of 255 points (51% of 500; so add 127 points if white female, and subtract 127 points if white male), and a black gender gap of 370 points (74% of 500; so add 185 points if black female, and subtract 185 points if black male). Black men get 290 points; white men get 398; white women get 652; and black women get 660. To translate that into years, if four people all committed identical crimes, then a black man would spend 7 years and 7 months in jail, a white man would spend 6 years and 8 months in jail, a white woman would spend 4 years and 7 months in jail, and a black woman would spend 4 years and 6 months in jail. Lest you think I’ve calculated this incorrectly and arrived at the finding that black women receive the most “privilege” in sentencing by mistake, note that on p.16 of her study (under section 3.6, Race-Gender Interactions), Sonja Starr notes that “among women, the race gap [is] reversed in sign.”
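The point arithmetic above is easy to check mechanically. The sketch below just reproduces this toy scoring scheme; the point values come from the paragraph above, not from Starr’s paper:

```python
# Toy "privilege points" scheme from the text. Baseline 500; racial gap
# 50 pts (+/-25); white gender gap 255 pts (+/-127, rounding down the
# half point); black gender gap 370 pts (+/-185).
# Higher score = shorter expected sentence.
BASE = 500

scores = {
    "black man":   BASE - 25 - 185,
    "white man":   BASE + 25 - 127,
    "white woman": BASE + 25 + 127,
    "black woman": BASE - 25 + 185,
}

# Print from least to most "privileged".
for group, pts in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{group}: {pts}")
```

Running this reproduces the ordering in the text: black men lowest, then white men, then white women, with black women highest.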

[Chart: relative “privilege” points in prison sentencing by race and gender]

What this means, even given the fact that the gender gap is larger amongst black offenders than it is amongst whites, is that black men in the criminal justice system share more in oppression with white men on account of being male than they share in oppression with black women thanks to their race—and white men share more in oppression with black men on account of being male than they share in privilege with white women thanks to their race. If we want to improve the situation of black men in prison, then we should want to make their sentences more like black women’s rather than more like white men’s.

To describe the implications in still other ways: white men and white women are not united in shared privileges—white women and black women are; whereas it is white men and black men who, relatively speaking, are united in shared “oppression”. Rather than finding white men sitting atop the social pyramid, we find black women there instead. And even for black men, clearly the most disadvantaged group of all, addressing the sentencing length penalty for being male should be a much higher priority—about six times higher—than addressing the sentencing length penalty for being black. So in this case, whose efforts do black men need most? Supposedly anti–racist “intersectional” feminists would have it that the racial disparity should be everyone’s priority—but if men’s rights activists were to instead successfully eliminate the gender sentencing gap first, this would benefit actual living and breathing black men much more than eliminating the racial sentencing gap would.

Even where so–called “intersectional feminists” have something right (that the effects of race, gender, and other variables interact), they’ve still failed to bring us the whole picture. While nominally professing concern that “the patriarchy hurts men too”, it seems as though they would often rather fight wage gaps that don’t exist (except as the result of voluntary career choices and preferences), and narcissistically find ways to make men becoming a somewhat smaller majority of the victims of violence a women’s problem, than work to bring to the public’s attention that the relatively little–known gender gap disfavoring men in prison sentencing is six times larger than the well–known racial gap disfavoring blacks.

Perhaps that’s because that evidence so drastically undermines the “white male is the easiest difficulty setting; black female is ‘hardcore’” model of American social life by, at least in this instance, turning it all the way around on its head. So, not only are individual white men who commit a crime more likely to be shot by police than individual black men who commit a crime—and not only are white men in jail more likely to be raped (most likely by black men)—but even at the level of sentencing, white men are almost as disadvantaged in sentencing compared to both black and white women as black men are. And even here where black men are the most disadvantaged of all, and that is an “institutional” problem that should be addressed, they would still be more helped by an effort to reduce the gender penalty which also hits white men than they would by an effort to reduce the racial penalty, which only trivially hits black women (who are still despite that penalty by far the most privileged anyway).

One conclusion discoveries like these have brought me to is that it should now be perfectly reasonable to have college classes, think tanks, or student groups dedicated to examining the problems faced by whites, or men—and I stand by that even though I think much of what is, or would be, produced by such groups is bunk. Why? First, because a great deal of what is produced by women’s studies and minority studies is already bunk—but at least a variety of different focus groups could criticize each other well enough to exercise checks and balances against each other’s ideological excesses and help bring awareness of flawed arguments made by the other side to our attention.

But second, because just as there are valid issues faced by women and minorities even amidst the trash often produced by “feminist” and “anti–racist” groups and publications, so there are valid issues faced by men and whites amidst all the trash that “men’s rights” or white–focused groups would inevitably also produce. Men (or whites) are not “the” oppressed segment of society today any more than women (or non–whites) are. But the types of “oppression” faced by each of these groups are not massively different in kind—and were it not for the fact that feminist and anti–racist organizations alone are allowed to dominate the narrative airwaves and spread perspectives which are often flawed in ways that few will take the time to deeply examine, and of which few people will ever even hear well–informed critiques, the suggestion that groups with opposing perspectives have a valid place would not sound so absurd. The reason we don’t know or hear about the facts which might go some way to validate their existence is that no one is here to give them to us. “Intersectional” anti–racist feminists can’t be trusted to do it sufficiently; and that goes for black men, too, whose need for the kind of analysis of the gender sentencing gap that feminists aren’t doing here is even graver than white men’s.



Do “Right–Wing Extremists” Kill More People than Islamic Terrorists?

On November 30, 2015, amidst the debate over what should be done about Syrian refugees when one in eight Syrian refugees openly express sympathy with ISIS/ISIL/DAESH, ThinkProgress published an article titled “You Are More Than 7 Times As Likely To Be Killed By A Right–Wing Extremist Than By Muslim Terrorists”. Lest anyone be left unclear what they’re trying to say, the subhead reads: “The face of terrorism in the United States is white.” As AddictingInfo puts it: “the true terrorist threat: crazy white people.”

Now, these left–wing media outlets may imagine they are attempting to “counter” what they perceive as bias against non–whites, but they have increasingly gone off the deep end into blatant misinformation and irrationality, in what amounts, in practice, to an extreme bias against mainstream whites. Whether this has occurred as a result of “noble intentions” or not, it’s time for someone to set the record straight; and it’s time to prove that it doesn’t take a paranoid racist to agree with this paragraph—just the ability to read and perform a little basic math.

It obviously never occurred to these authors that white people might be more likely to kill you for the same reason that they’re “more likely” to eat at Mexican restaurants or buy hip–hop albums: the majority of the whole population is white, so everything is “more likely” to be done by a white person—even if white people do it less. This is utterly trivial, and it hardly means anyone is irrational to associate Mexican restaurants with Hispanics or hip–hop with black people instead of whites. Per the highest estimate I can find, Muslims as of 2014 (including native–born converts) were a mere 0.9% of the population. Since white people are currently 63% of the U.S. population, that means there are seventy times more white people in the United States than Muslims. If white people committed the same rate of domestic terrorism as is committed by Muslims, then, we’d expect white people to be seventy times more likely to kill you—there are seventy times more of them.
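The population arithmetic here is simple enough to check in a few lines of Python (a sketch; the shares are the article’s own figures, and equal per-capita rates are the stated hypothetical):

```python
# Population shares cited above (2014 estimates).
white_share = 0.63    # whites as a fraction of the U.S. population
muslim_share = 0.009  # Muslims, including converts, as a fraction

# If both groups committed attacks at the same per-capita rate,
# raw attack counts would simply scale with population size.
expected_ratio = white_share / muslim_share
print(f"Expected white-to-Muslim attack ratio at equal rates: {expected_ratio:.0f}")
```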

But that’s not what we find.

Instead, according to the article’s headline, they’re only seven times more likely.

Similarly, this count of the numbers killed in various types of terror attacks finds that non–Muslim attackers had a death toll twice as high as Muslim attackers—and again, since the non–Muslim population in the United States is far more than twice as high as the Muslim population (as Muslims are less than 1% of the population, the non–Muslim population is more than 99 times larger), it’s not clear how any of these outlets can possibly think that these facts downplay the relative risk from Islam–inspired attacks.

If, at 1% of the population, Muslims commit 1/7th the amount of terror attacks as the white 63% of the population does, that means that the Muslim population would only have to reach 7% in order for the amount of terror attacks committed by Muslims to be equal to the amount committed by the white population. And that means that if Muslims were 63% of the population, they’d commit nine times as many terror attacks as the currently white 63% of the population does. Yet, as we’ll see later as we continue to dig through the sources for ThinkProgress’ claim, even these numbers will turn out to dramatically understate the real disparity.
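The reasoning above can be spelled out explicitly (assuming, as the paragraph does, that attack counts scale linearly with population size, and taking the Muslim share as 1%):

```python
# Figures from the headline claim as read above: whites are 63% of
# the population and commit 7x the attacks of Muslims (taken as 1%).
white_share, muslim_share = 0.63, 0.01
attack_ratio = 7  # white attacks divided by Muslim attacks

# Muslim population share at which the two groups' attack counts
# would be equal, if attacks scale linearly with population:
parity_share = muslim_share * attack_ratio  # 7% of the population

# How many times more attacks a Muslim population of 63% would
# commit than the current white 63% does:
scaled_ratio = white_share / parity_share
print(f"Parity at {parity_share:.0%}; at 63%, {scaled_ratio:.0f}x the attacks")
```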

The data behind this claim …

was published by David Sterman of New America, and their full collection of “jihadist” and “right–wing extremist” attacks can be seen here.

The first thing that should be apparent is that according to their actual list, “jihadist” and “right–wing extremists” are already killing an equal number of people despite their population differences—not seven times as many. To revise the above numbers, that means that if Muslims were to become 63% of the population, they’d commit seventy times as many terror attacks as the currently white 63% of the population does.

If we try to control for political ideology, somewhere between around 49 and 64% of whites lean conservative, so even if we leave aside liberal whites and ignore the fact that left–wing acts of terror take place as well, that’s a bare minimum of thirty times as many per–capita terror attacks from the Muslim 0.9% of the population as come from the white conservative ~30% of the population.
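Assuming equal raw attack counts between the two groups, the per-capita disparity falls out of the population shares alone; a quick check:

```python
# Shares used in the paragraph above.
muslim_share = 0.009             # 0.9% of the population
white_conservative_share = 0.30  # roughly half of the white 63%

# With equal raw attack counts, the per-capita ratio is just the
# inverse ratio of the two population shares.
per_capita_ratio = white_conservative_share / muslim_share
print(f"Per-capita disparity: about {per_capita_ratio:.0f}x")
```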

But left–wing terrorism exists too.

In fact, in terms of acts of terrorism, it’s the majority. 

The Unbiased America Facebook page has summarized this nicely:


Based on sheer number of attacks, environmentalists and animal rights activists together committed 78 of the known total of 178 — about 44% of all attacks. (Do note that their dataset, from the Global Terrorism Database, differs from David Sterman’s data at New America in that it cuts off at the end of 2014, whereas Sterman’s data has been updated to include most of 2015—however, Sterman’s data doesn’t include any breakdown whatsoever of the ideology of non–jihadists whom it doesn’t classify as “right–wing extremists”.)

Now, environmentalists and animal rights activists are high on the list in number of attacks but low on the list of casualties—true. But so are anti–abortion activists: across all 13 anti–abortion attacks over those 13 years, only 1 casualty accrued. And environmentalists have still entered media outlet buildings and held people at gunpoint, threatening to shoot them in order to spread the message that “All human procreation must cease!”—even if they didn’t ultimately have the nerve to go through with shooting anyone—so this isn’t all just innocent tree–hugging and property damage.

The two “atheist” attacks did center around property damage—as with the “Veterans United for Non–Religious Memorials” group’s placement of an IED on a Christian memorial. But one “anti–Muslim” attack committed less than two months later arguably deserves to be classified as an atheist terror attack: its perpetrator was Craig Stephen Hicks, a liberal fan of progressive causes ranging from “HuffPost Black Voices” to “Forward Progressives” to “The Atheist Empathy Campaign,” to Rachel Maddow and the Southern Poverty Law Center, whose Facebook cover photo loudly proclaimed his identification as an “anti–theist” in bold capital letters. Thus, even “anti–Muslim” attacks cannot simply be assumed to have been committed by “right–wingers”. In fact, if we were to update the list through 2015 to include Craig Stephen Hicks and Robert Dear, atheist terrorists (at 3 kills) would be only a single kill behind anti–abortion terrorists (at 4) from 2001 through 2015. As a non–religious person myself, this absolutely surprised me, so let me repeat it: atheist terrorists have killed only one fewer person within the United States over the last decade and a half than anti–abortion activists.

Who attacks the police?

Anti–police attacks have been committed both by right–wing survivalists like Eric Frein and by people like Christopher Dorner, the black former LAPD officer who killed two innocent relatives of a man who had petitioned on his behalf before setting out on a one–man guerrilla war against the LAPD.

The difference between liberal and conservative responses to Frein and Dorner is that no white conservative was heard saying of Frein that his continued run from police was “kind of exciting,” much less that he was “like a real–life superhero”, as viewers of CNN had the opportunity to hear from Marc Lamont Hill, the Distinguished Professor of African American Studies at Columbia University, while Dorner’s killing spree was still ongoing. Meanwhile, the largest “Support Eric Frein” page on Facebook has 404 likes; the largest Facebook page in support of Christopher Dorner has more than 17,000.

Of all terror attacks against police: the first, in February of 2013, belongs to Christopher Dorner, who claimed four victims in total, two of them police officers (note that the data lists only one of his victims in its tally, since his designated category is “anti–police” and only one of the two officers he shot died immediately). Gregory Lynn Shrader mailed a bomb to Arizona Sheriff Joe Arpaio, apparently in hopes of framing an ex–business partner rather than out of any ideological motivation of his own. David Patterson planted three explosive devices around a West Virginia city building in hopes of orchestrating a shootout with the FBI. The next attack, claiming five victims, was committed by Jerad and Amanda Miller, apparently libertarian followers of Facebook pages like “Taxation is Theft” and “Cop Block”. The next act, claiming no victims, casualties, or property damage of any kind, was committed by Douglas Leguin, a member of the Sovereign Citizens movement.

The next attack, in October of 2014, was committed by Zale Thompson—a recent convert to Islam—who killed one NYPD officer and injured three more individuals in a hatchet attack. Note that this attack is registered as “anti–police” rather than “Islamic” terrorism, despite the fact that Thompson was known to have frequented websites related to al–Qaeda, al–Shabab, and ISIS. In November of 2014, Larry McQuilliams claimed one victim in Austin, Texas. Finally, in December, Ismaaiyl Brinsley killed two police officers after posting “I’m Putting Wings On Pigs Today. They Take 1 Of Ours….. Let’s Take 2 Of Theirs …  #RIPMikeBrown … ” — obviously inspired by the Black Lives Matter movement’s completely distorted popularization of a case of justified self–defense, in an environment in which BLM protesters were chanting the phrase, “pigs in a blanket, fry ‘em like bacon.”

Of total anti–police attacks, then, 5 out of 8 — or 62% — were committed by whites, which is not larger than whites’ 63% representation in the population. 1 out of 8 — or 13% — was Islamic, which is larger than Islam’s 1% share of the U.S. population. And 3 out of 8 — or 37% — were committed by African–Americans, which exceeds blacks’ 13% representation in the population. (These categories overlap—Zale Thompson counts in both the Islamic and African–American tallies—which is why the percentages sum to more than 100%.) Of total victims claimed, 7 were killed by white attackers (5 by Jerad and Amanda Miller; 1 by Larry McQuilliams; and 1 by Eric Frein), while a combined 7 were killed by black attackers (4 by Christopher Dorner, 2 by Ismaaiyl Brinsley, and 1 by Zale Thompson). Black attackers were thus responsible for 50% of deaths in this category—counting only those police deaths formally classified as “terrorism”, and leaving out killings of police committed in the course of other crimes (in which black suspects are also extremely disproportionately represented).
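The tallies in this paragraph can be checked mechanically (the counts are those given above; note the overlapping categories):

```python
# Anti-police attack counts from the list above; categories overlap
# (Zale Thompson is counted as both Islamic and African-American).
total_attacks = 8
attacks = {"white": 5, "islamic": 1, "black": 3}

# Deaths attributed to white and black attackers, respectively.
deaths_by_white = 5 + 1 + 1  # the Millers, McQuilliams, Frein
deaths_by_black = 4 + 2 + 1  # Dorner, Brinsley, Thompson

white_attack_share = attacks["white"] / total_attacks
black_death_share = deaths_by_black / (deaths_by_white + deaths_by_black)
print(f"White share of attacks: {white_attack_share:.0%}")
print(f"Black share of deaths: {black_death_share:.0%}")
```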

Who attacks government or military targets?

Of a total of 49 attacks on “general” or “diplomatic” government targets, 2 were committed by the environmentalist Earth Liberation Front. Of 32 anti–government attacks taking place between November 12, 2001 and February of 2009, none claim a single fatality, and only one managed to claim a single injury.

In May of 2002, Luke Helder began a series of pipe bombings that were conducted along with messages about astral projection, as well as the illegality of marijuana—the message accompanying his bomb attacks read: “I’m here to help you realize/ understand that you will live no matter what! It is up to you people to open your hearts and minds. There is no such thing as death”, and made allusion to left–wing redistributionist arguments: “When 1% of the nation controls 99% of the nations total wealth, is it a wonder why there are control problems?”

Because of the different locations, these actually classify as the first eighteen events on the count. If we count these as one event, there were only 32 total attacks on government targets during the time period for which we have data after the September 11 attacks. (Let’s not use Luke Helder to artificially inflate the number of more–left–wing–than–right–wing attacks.)

After Luke Helder’s bombings, the first unique attack on a government target was committed by Preston Lit, a mentally ill man off his medication who planted a small pipe bomb with a “Free Palestine” message containing scattered references to al–Qaeda. The bomb was defused, and Lit was sentenced to a federal prison psychiatric unit. Next, the Earth Liberation Front set fire to the U.S. Forest Service Northeast Research Station in Pennsylvania, causing $700,000 in damage. Steve Kim, a U.S.–born East Asian man, then fired shots at the U.N. building in Manhattan in October of 2002 along with what was described as a “rambling political message” about human rights in North Korea.

Then in March of 2003, Dwight Watson caused a several–hour–long halt to traffic by driving his truck into a pond in order to call attention to federal policies he considered unfair to tobacco farmers. In October, a ricin–laced letter was sent to the Department of Transportation in Washington, DC by someone calling himself “Fallen Angel” and claiming to be the “fleet owner of a tanking company”, threatening more ricin–laced letters if pending trucking legislation wasn’t passed; in November, a second letter was sent.

From February to May of 2004, there were a series of attacks incurring no casualties perpetrated by unknown individuals for unknown reasons: Senate majority leader Bill Frist was sent a letter filled with ricin; a fertilizer bomb was placed outside a county courthouse in California; anthrax was discovered at a mail facility in Virginia; and two homemade grenades were thrown at a building housing the British Consulate in New York City. In May, an arson attack committed by Arthur Gladd and Robert Hurley makes the list although it had no discernible political motive. There are no attacks on government targets from here through to October of 2007, when two hand grenades were thrown at the Mexican Consulate in New York City, once again by unknown perpetrators, once again claiming no damage other than that done to the building’s windows. Again, there are no attacks until February of 2009, when a bomb attached to the car of the head of the Arkansas panel that licenses doctors exploded in his driveway, injuring one person.

Finally, in February of 2010, a real “terrorist” attack took place when Joseph Stack committed suicide by crashing his plane into the IRS building in Austin, Texas, killing IRS manager Vernon Hunter and injuring 15. Was Stack a “right–wing extremist”, then? Here are a few statements from his suicide letter: “Why is it that a handful of thugs and plunderers can commit unthinkable atrocities (and in the case of the GM executives, for scores of years) and when it’s time for their gravy train to crash under the weight of their gluttony and overwhelming stupidity, the force of the full federal government has no difficulty coming to their aid within days if not hours? Yet at the same time, the joke we call the American medical system, including the drug and insurance companies, are murdering tens of thousands of people a year and stealing from the corpses and victims they cripple, and this country’s leaders don’t see this as important as bailing out a few of their vile, rich cronies. Yet, the political “representatives” (thieves, liars, and self-serving scumbags is far more accurate) have endless time to sit around for year after year and debate the state of the “terrible health care problem”. It’s clear they see no crisis as long as the dead people don’t get in the way of their corporate profits rolling in. … The communist creed: From each according to his ability, to each according to his need. The capitalist creed: From each according to his gullibility, to each according to his greed.”

No, Stack was not a “right–wing extremist,” and his suicide attack was obviously motivated far more by direct personal grievances over the fact that IRS policies had sent his entire life through a series of setbacks than it was about political ideology per se.

In March of 2010, John Patrick Bedell approached the entrance to the Pentagon in Virginia and fired at two Pentagon police officers, non–critically injuring them. Bedell was a libertarian who advocated a monetary system based on the value of a gram of marijuana.

In November of that year, Yonathan Melaku, a naturalized American citizen and Marine Corps Reserve Lance Corporal originally from Ethiopia, was arrested for a series of shootings originally thought unrelated. He was found with a notebook containing references to the Taliban and Bin Laden, and was eventually diagnosed with schizophrenia.

In January of 2011, two packages were sent to the offices of Maryland Governor Martin O’Malley and Transportation Secretary Beverly Swaim–Staley protesting the appearance of street signs urging motorists to report suspicious activity, burning one employee’s fingers. (The data lists these two packages as two separate events because they were sent to two separate locations, and if we count them as one, we’re down to 31 total attacks on government targets). That same month, an unclaimed envelope addressed to Homeland Security Secretary Janet Napolitano ignited at a postal sorting facility.

There are then no further attacks on government targets until November of 2012, when Iraqi civilian Abdullatif Ali Aldosary set off a homemade explosive outside a social security building in Arizona. Later, an unclaimed explosive device went off outside the Tacoma Community Justice Center building in Washington state. Actress Shannon Richardson was arrested in July of 2014 for sending a series of ricin–laced letters to President Barack Obama during 2013 (which were all, of course, intercepted before they ever came anywhere close to the President). The data counts her letters as two separate incidents, and if we count them as one, we’re down even further to 30 unique attacks across this whole time period. In November of 2013, Paul Ciancia killed one TSA agent in a politically motivated attack against the TSA, and was found with a note containing references to New World Order conspiracy theories. David Patterson and Larry McQuilliams both appear here again from our anti–police attack list (for the same attacks) in May and November of 2014 (McQuilliams’ one attack shows up twice, which again brings our total down to 29 attacks if we count them as one—or down to 27 if we remove both of them, since we’ve already counted them).

Finally, there are two remaining attacks in 2014. In June, Dennis Marx—associated with the Sovereign Citizens movement—opened fire on a county courthouse in Georgia, accruing no casualties. And in September, Eric King—a left–wing vegan anarchist—threw two Molotov cocktails into the office of U.S. Congressman Emanuel Cleaver “in solidarity with Ferguson, Mo” and to “memorialize those who died in Chile under the reign of a U.S.–backed dictator and lives lost in the Middle East, Afghanistan, Pakistan and Yemen.”

Thus, of 27 unique attacks on government targets not already counted, the vast majority were unclaimed and incurred no casualties, and several were conducted out of personal vengeance or interest of some kind or another, rather than out of any explicitly political motivation. Of those with a known political motivation, Luke Helder’s don’t fit neatly into any ordinary political division, but are clearly more left–wing than right–wing. Preston Lit’s pipe bombing was committed along with the message to “Free Palestine” and references to al–Qaeda, but likely had more to do with mental illness than politics. Steve Kim’s attack had something to do with human rights in North Korea. Joseph Stack criticized both “corporate profits” and politicians; both the communist and the capitalist “creed”. John Patrick Bedell was about as stereotypical as a libertarian can possibly get, literally advocating for currency based on the value of a gram of marijuana. Yonathan Melaku was fascinated with al–Qaeda, although his attack was likely spurred in large part by his schizophrenia. Shannon Richardson opposed gun control policies. Paul Ciancia believed in “New World Order” conspiracy theories. Dennis Marx was a member of the Sovereign Citizens movement, and Eric King was a left–wing vegan anarchist.

Now we should add to this list the six attacks on military targets taking place across the same period of time. In March of 2003, the Earth Liberation Front vandalized government trucks and set one truck on fire, leaving behind the spray–painted message to “Leave Iraq.” Later that month, Eid Elwirelwir, a Venezuelan–born Muslim U.S. citizen, crashed into the barricade gate of an air force reserve base in California, claiming that he “supports Saddam Hussein’s right to use weapons of mass destruction if invaded”. An attack whose perpetrator and cause are unknown took place in March of 2008. In June of 2009, Abdulhakim Muhammad shot two soldiers outside a recruiting center in Little Rock, Arkansas, killing one of them, and declaring in letters that “Far as Al-Qaeda in the Arabian Peninsula … yes, I’m affiliated with them. … Our goal is to rid the Islamic world of idols and idolaters, paganism and pagans, infidelity and infidels, hypocrisy and hypocrites, apostasy and apostates, democracy and democrats, and relaunch the Islamic caliphate … and to establish Islamic law (Shari’ah).” In November of 2009, of course, Nidal Hassan opened fire on fellow soldiers in Fort Hood, Texas, killing 13 and injuring 32. The final member of the list is Yonathan Melaku, whom we’ve already counted.

So: of all attacks on government or military targets of any kind listed in which the identity of the attacker is known and there was any reason to believe there was actually a political motivation, a grand total of four can even be loosely said to have been committed by “right–wing extremists” (assuming very generously that we can count Joseph Stack who criticized both “corporate profits” and politicians, and both the communist and capitalist “creed”; and Paul Ciancia who apparently believed in “New World Order” conspiracy theories as “right–wing extremists” — the unambiguous cases are John Patrick Bedell and Dennis Marx). The total death count between all four of them? Dennis Marx: 0 (1 injured); Paul Ciancia: 1 (4 wounded); John Patrick Bedell: 0 (2 wounded); Joseph Stack: 1 (15 injured). Including only the clear cases of actual “right–wing extremism” brings this tally to 1 death and 7 injuries; generously stretching the definition of “right–wing extremism” to include Stack brings it to 2 deaths and 22 injuries.

That death count is outdone more than six times over by Nidal Hassan alone, even if we include Stack (which we shouldn’t, because he was not a “right–wing extremist” in any way, shape, or form).

But let’s get back to the New America Foundation’s data.

Their list of “deadly right–wing attacks” counts 18 attacks, for a total of 48 persons killed. How many of these counts are valid? Defining “jihadist” violence is usually quite clear—when Tashfeen Malik pledges allegiance to ISIS on social media before a shooting spree, there’s no question that that attack was motivated by her interpretation of Islam and allegiance to a specific terrorist group espousing that interpretation. But defining when someone’s views count as “conservative” is a little murkier, as we saw above in the discussion of people like Joseph Stack, whose suicide letter criticizes politicians, sure—but criticizes them for failing to rein in “corporate profits”, and ends with “The communist creed: From each according to his ability, to each according to his need. The capitalist creed: From each according to his gullibility, to each according to his greed.” And indeed, the New America Foundation’s list includes Joseph Stack as a supposed “right–wing extremist”—which is immediately suspect.

Going through them in chronological order, then, they list Wade and Christopher Lay, who wanted to avenge those in the federal government responsible for the siege on the Branch Davidians at Waco; Jim David Adkisson, who attacked a Unitarian Universalist church and whose manifesto explicitly labeled the attack a “hate crime”, saying “I hate the damn left–wing liberals”; and Keith Luke, a confessed Neo–Nazi.

Next on the list, however, are Albert Gaxiola, Shawna Forde, and Joshua Bush. The entry tells us that they “killed a man and his nine-year old daughter during an armed robbery of the man’s house … to help fund their anti-immigrant organization.” However, digging even slightly deeper quickly reveals that “Gaxiola wanted Flores dead because he was a rival drug smuggler”. Including these three in a list of “right–wing extremism”, then, makes about as much sense as judging people who wish for an end to the War on Drugs by including Mexican cartel members who kill their drug–dealing competitors in the analysis—as if that could possibly tell you anything about liberals or libertarians who wish for an end to the drug war. The problem with that wouldn’t be that violent cartel members are a minority of those who call for an end to the drug war—it would be that their violence is about self–interest, not politics. Were a cartel member to kill the head of the DEA to send a message, that would count as political terrorism from someone calling for an end to the drug war.

After that, we have Scott Roeder, the member of the Sovereign Citizens movement who assassinated abortion doctor George Tiller; followed by James Von Brunn, a neo–Nazi who killed a security guard at the U.S. Holocaust Museum. But after those two, we’re already back to yet another questionable inclusion. Richard Poplawski was a member of Stormfront, “believed in conspiracy theories that the Jews were behind an imminent collapse of the United States”, and “was busy preparing for the violent collapse of society”, and (to be clear) he deserves no sympathy for these views. But the actual “terror attack” happened when Poplawski’s mother called 911 to have him removed after a dispute that was apparently over a dog urinating in the house. Recently unemployed, and panicked at the idea of being thrown out of his home, Poplawski—who already had a reputation for getting into fights with neighbors—fired at and killed the arriving police.

Poplawski had repulsive political views, and it goes without saying that he committed a horrendous action—he killed in cold blood innocent police officers who did nothing more than arrive at his household in response to a domestic disturbance call. But the action was not one he committed in any sense whatsoever in order to propagate his ideological views—it was not “terrorism.” After four hours, Poplawski surrendered, saying “I don’t want to end any more officers’ lives … I’m not going to shoot any more innocent officers. … You know, I’m a good kid, officer … This is really an unfortunate occurrence, sir.”

Next on the list is, of course, Joseph Stack—who, as we’ve discussed more than once already, was nowhere close to a “right–wing extremist.” After Stack, Raymond Peake appears on the list for killing a man at a gun show and stealing his rifle. He qualifies to appear on this list apparently because at one point he told one investigator that he “stole the weapon for use in an organization seeking the overthrow of the American government that he refused to name”—an organization that we have no evidence, outside of this one statement, even exists. Even Timothy Lively, the detective on the case, didn’t buy the story, stating that he “believed the gun theft was the motive for the killing.”

That’s four of the nine cases examined so far—almost a quarter of the full list of 18—representing 7 out of 14 total deaths spanning from May of 2004 to October of 2010, in cases that either were not acts of “terrorism” or were committed by people whose views were not “right–wing” at all.

The rest of the list fares better. The FEAR Militia killed a soldier who knew of their plans to target political figures; neo–Nazi Wade Michael Page opened fire at a Sikh temple; seven members of the Sovereign Citizens movement were indicted for ambushing a police officer who was investigating their activity; David Pederson and Holly Grigsby killed four people in association with a white supremacist criminal enterprise; Eric Frein has already been discussed; Glenn Cross, member of the KKK, killed three people at a Jewish center; Jerad and Amanda Miller, as discussed, were libertarian fans of “Taxation is Theft” and “Cop Block”; Dylann Roof was inspired by racist websites after discovering facts about the relative rates of interracial crime; and finally, Robert Dear killed three people outside a Planned Parenthood in Colorado.

If we subtract the four cases which were either not “terrorism” or not committed by “right–wing extremists”, we come to a tally of 14 “right–wing extremist” attacks representing 41 deaths, against 9 “jihadist” attacks representing 45 deaths. So a proper count of this list actually shows that Islamic terrorists are 1.1 times more likely to kill you than “right–wing extremists”—not seven times less likely.
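To make the revised comparison concrete, here is the arithmetic (the counts are those tallied above):

```python
# Revised counts after removing the four disputed "right-wing" cases.
rw_attacks, rw_deaths = 14, 41
jihadist_attacks, jihadist_deaths = 9, 45

# Ratio of deaths from jihadist attacks to deaths from
# right-wing-extremist attacks, per the corrected list.
deaths_ratio = jihadist_deaths / rw_deaths
print(f"Jihadist-to-right-wing deaths ratio: {deaths_ratio:.1f}")  # ~1.1
```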

Plugging that back into our earlier numbers, that means that if the Muslim 0.9% of the population were to become equal in number to the white 63% of the population, they would commit 77 times more terrorism than the white 63% of the population currently does. But even this is still a slight underestimate, because the white percentage of the population has gradually been shrinking, and 63% is a new low that has been reached only recently—most of these acts were committed when the white population was closer to 75% of the U.S. total.

Not to mention the fact that this list doesn’t include foiled plots, which means it begs the very question of whether the amount of focus we place on Islamic terrorism is justified.

If Islamic terrorism is only as low as it is because we’ve paid a greater amount of attention to it and therefore stopped a larger number of Islamic attacks, then it is absolutely rubbish reasoning to point to the resulting number of successful Islamic attacks as evidence that our focus on Islamic terrorism has been misplaced.

In Wikipedia’s list of foiled Islamic terror plots just during Obama’s first term in office from 2008 to 2012, we see the names: “James Cromitie et al., Najibullah Zazi et al., Michael Finton, Hosam Maher Husein Smadi, Colleen LaRose et al., Abdul Farouk Abdulmutallab, Faisal Shahzad, Farooque Ahmed, Rezwan Ferdaus, Sami Osmakac, Amine El Khalifi, Quazi Mohammad Rezwanul Ahsan Nafis.” As for the “white–sounding names” on this list: James Cromitie (also known as Abdul Rahman) was a member of a group of four Muslim men (three African–Americans and one Haitian immigrant) who planned to plant bombs in two synagogues and fire missiles at airplanes leaving the Air National Guard base in New York; Michael Finton (also known as Talib Islam), who attempted to bomb the Paul Findley Federal Building and a Congressman’s office, considered Nidal Hassan and Anwar Awlaki heroes according to his MySpace page; and Colleen LaRose (also known as Fatima LaRose) was a Muslim convert convicted of—amongst other things—plotting to kill the Swedish artist Lars Vilks for drawing a cartoon depicting Muhammad.

A paper published in November of 2010 by Erik Dahl, “The Plots that Failed: Intelligence Lessons Learned from Unsuccessful Terrorist Attacks Against the United States”, created the most thorough known compilation of all 176 terror plots over the preceding 25 years, and found that “about 75 percent of the plots are associated with radical Islamists and about 25 percent are from right-wing domestic, anti-government militia movements.” That’s 132 out of 176 plots associated with Islam even when we include the pre–9/11 era—or roughly 5 foiled Islamic plots each year compared to just 1.8 foiled “right–wing” plots each year. In other words, the disparity between foiled “jihadist” and “right–wing” attacks is obviously much higher than the disparity found within successful attacks.

In the 14 years since 9/11, there have been (according to this list) 67 foiled Islamic terror plots. According to Erik Dahl’s comprehensive data, if these 14 years were average, there would have been about 28 foiled “right–wing extremist” attacks. How different would the data in the New America Foundation’s list have looked if, instead of 9 deadly “jihadist” attacks and 14 deadly “right–wing extremist” attacks, we were looking at 76 “jihadist” attacks with the potential to turn deadly and only 42 “right–wing extremist” attacks with a similar potential?

To revise our earlier calculations one last time,

Per that number we would have the 0.9% of the population that is Muslim committing 1.8 times as many terror attacks as the 63% of the population that is white (or the 30% that is both white and conservative). At that rate, Muslims commit 126 times as many terror attacks per capita as the white population currently does.
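The per-capita arithmetic here can be checked directly. This is a minimal sketch using only the figures quoted above (the 0.9% and 63% population shares and the roughly 1.8:1 attack ratio); it makes no claims beyond reproducing that multiplication:

```python
# Check of the per-capita ratio claimed above, using the essay's own inputs.
muslim_share = 0.009   # 0.9% of the U.S. population
white_share = 0.63     # 63% of the U.S. population
attack_ratio = 1.8     # 76 "jihadist" vs. 42 "right-wing" attacks, roughly 1.8:1

# Attacks per capita scale inversely with group share, so the per-capita
# disparity is the attack ratio times the inverse ratio of population shares.
per_capita_ratio = attack_ratio * (white_share / muslim_share)
print(round(per_capita_ratio))  # 126
```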

ThinkProgress’ claim that, as “UNC Professor Charles Kurzman and Duke Professor David Schanzer explained last June in the New York Times”, Islam-inspired terror attacks “accounted for 50 fatalities over the past 13 and a half years” whereas “right-wing extremists averaged 337 attacks per year in the decade after 9/11, causing a total of 254 fatalities” leads here, and the paragraph in that article cites as the source for the claim “a study by Arie Perliger, a professor at the United States Military Academy’s Combating Terrorism Center.”

Arie Perliger explains on pp.85–86 of this report that “The dataset includes violence against human targets as well as property … based on … relevant information drawn from … the SPLC hate crime dataset.”

It is therefore immediately obvious that we are comparing apples and oranges here, and it will surely come as a surprise to readers of “ThinkProgress” that white people do not even commit a disproportionate number of officially designated hate crimes. The most recent data we have is from 2013, when Hispanic perpetrators of crime were still being classified as “white”—and according to that data, “whites” (which means whites and Hispanics) committed 52% of all hate crimes in 2013—even though whites and Hispanics together in 2013 represented 77% of the U.S. population. If whites (and Hispanics) committed hate crimes at the same rate as all other groups, we would expect them to commit 77% of all hate crimes—but instead, they commit significantly fewer than that. Meanwhile, the report tells us that 24% of all hate crimes in the United States in 2013 were committed by people identifying as black or African–American—even though African–Americans represent just 13% of the U.S. population.

I’ll leave aside the blatant dishonesty of comparing a tally of organized, deadly jihadist attacks with organized right–wing attacks and petty hate–crimes while hiding the actual source of this data in another article without referring back to it directly. Even if you include hate crimes in this analysis, neither the face of terrorism nor hate crimes in the United States turn out to be white.

Sorry, “ThinkProgress”.





A Note on My Calculation in “Are African–Americans Disproportionately Victimized by Police?”

Only one truly plausible critique has so far been presented to me of the argument in my essay, “Are African–Americans Disproportionately Victimized by Police?”, where I explain that:

(1) The violent crime rate presents us with an effective way of estimating how frequently police are having justified encounters with individuals from varying racial demographic groups. If we want to ask whether police treat individuals from some racial demographic groups whom they encounter differently from others, it simply doesn’t matter in the slightest what percentage of the general population those individuals represent—for exactly the same reason that it does not matter in the slightest for the purposes of this analysis that 22% of the world population is black whereas only 5% of the world population is white. What matters is how many members of these racial demographic groups police are encountering on a regular basis. And the answer to the question, “Who is committing the violent crime?” tells us not only how often police are likely to be encountering members of different racial demographic groups, but how often those encounters are justified, given that addressing violent crime is the primary job and purpose of police.

(2) We are, thankfully, not reliant on police arrest data alone to determine how frequently members of various racial demographic groups are committing violent crimes. In addition to that data, we have a federal collection of eye–witness data from both victims and third–party witnesses stretching back for decades in the form of the National Crime Victimization Survey (NCVS) collected and available online at the National Archive of Criminal Justice Data (NACJD)—and this data reveals that victims and witnesses have recorded a higher percentage of black perpetrators committing violent crimes than are found in police arrest data consistently for years. It therefore turns out that the disparity in arrest rates understates how disproportionately black perpetrators are responsible for violent crimes. Even though African–Americans are just 13% of the population, approximately a full 50% of the United States’ murders are committed by perpetrators who are black. And since most of these are committed by younger black males, who make up less than 6.5% of the population, the disparity is even worse than it seems. Whether this is, as liberals would have us believe, solely because of the influence of poverty or not, the reason why this disparity exists is simply irrelevant to the analysis itself. At least part of the explanation, in any case, is neither particularly “conservative” nor “liberal” in its implications—most violent crimes are committed by males between the ages of 18–25, and the African–American population in the United States skews much younger than others do, which means males in the crime–prone age demographic make up a larger percentage of the African–American population than they do of other racial demographic groups.

(3) When we compare the per capita rate of violent offenses committed against the per capita rate at which police shoot and kill suspects of differing demographic groups (whether these shootings are justified or unjustified), not only does the disparity between African–Americans’ representation in the population and their representation in such shooting deaths disappear, but in fact the trend reverses: per justified encounter with police, African–Americans are less likely to be shot by police, whether during an attempt to commit a crime or while being legitimately suspected of one (because a suspect is known to be black). In other words, African–Americans commit a substantially larger number of violent crimes, and have a substantially higher number of valid encounters with police, before any one African–American ends up shot by police—while more white and even Hispanic suspects will be shot before they have committed nearly as many crimes or had nearly as many justified encounters with policemen approaching them during, or questioning them about, violent crimes.

(4) Not only does that hold true in the extant national numbers, but it turns out that that finding actually also confirms what we have found in the most thorough experimental study data collected so far: police hesitate more before shooting black suspects in experimental trials which actually control rigorously for individual behavior by ensuring that the only difference between the virtual “suspects” these police have to make a decision to either fire or not fire at is race (unlike the real world, where this behavior absolutely inevitably differs on average).

Now, there is in fact some validity to the critique that I chose a questionable data–set for my argument in point (3), when I selected data regarding how many members of different racial groups are shot and killed by police, even if the data I selected for point (2) regarding how often members of different racial groups commit violent crime was correct and accurate. It is true that I chose numbers at this stage of the argument that vastly underestimate the number of police shootings that take place each year. However, this still does not ultimately weaken my final argument—it strengthens it. I should have clarified up front in the original essay my reasons for choosing to use the CDC’s data on what it calls “death by legal intervention” for this stage of the argument even though it clearly underestimates the total number of shootings each year.

The reason I chose this particular data–set is: (a) because I knew that my argument was strong enough that any existing data–set that can be plugged into it will produce the same bottom–line conclusion; quite simply, by no estimate anyone has made is the statistically disproportionate rate at which African–Americans are shot by police anywhere close to the disproportionate rate at which African–Americans are responsible for violent crime; and (b) because what matters for this stage of the argument is not the raw numbers, but rather the relative percentages of black, white, and other individuals found in its data—and the CDC data actually identifies a relatively large disproportion towards black “deaths by legal intervention” even compared to more accurate data.

In short, whereas the study of CDC data I used for my original argument found that blacks were 34% of deaths by “legal intervention”, the more accurate and recent data only places this relative number at 21%. (Update: I’m confused by what is going on in this data now. They claim that 21% of their 1600 victims are black whereas 32% were white, but this would mean 336 blacks versus 512 whites (to which we’d also have to add Hispanics), and that isn’t what I count when I count the entries they include under race myself; I count 390 blacks in 2013 alone, and 597 whites in that same year. To be as cautious as absolutely possible, I have removed the previous numbers—which found that whites+Hispanics were 175% as likely to be shot in a given encounter with police as blacks—and replaced them with my own count, even though I’m not sure why the two differ, and it could be that their summary is accurate and that my count is not. In any case, also bear in mind that this new estimate—which still does support my argument, anyway—is biased downwards by the inclusion of Hispanics, who have a higher per capita crime rate than whites.)

(Old paragraph: Substituting more accurate data, then, will actually lessen the relative percentage of black compared to white “deaths by legal intervention”, even if it increases both absolute numbers. And it thereby renders the point actually made in my conclusion even more solid—the data I originally chose artificially strengthened the liberal narrative that I was arguing against; not my own conclusions. For the original essay, I checked this against more thorough data before ever publishing what I had written; but during my early investigations into the question, I had typed out what happens to the CDC data in that calculation simply for my own benefit—and once I saw that the same conclusion is reached no matter what data we use, I went ahead and published what I had already had conveniently typed out anyway. It was a mistake to do this without explaining what happened thoroughly.)

To rectify this mistake now, I’m going to take the highest unofficial estimate that exists anywhere of how many police shootings, whether justified or unjustified, take place each year, and re–run the original analysis from the third step of that argument to show that the same conclusion is still so thoroughly established that it would take an epidemic of black individuals shot by police invisible to all anecdotal, media, or official reports in order to reverse the trend I’ve identified. In other words, an actual conspiracy theory.

The highest estimate for those numbers that exists anywhere, the data informally collected at the Killed By Police database, brings it to about 1,100 per year, or roughly three police shootings each day. To get the year–end tally of how many of these individuals belonged to which racial category, I pulled compiled data from an analysis here.

Note that everything that follows will use 2013–2014 numbers, and recall that in order to fit this data into my analysis, we’re going to have to lump Hispanics killed by police in with “whites” killed by police. Why? Because until some time in the middle of 2015, the federal data on crime offending classified Hispanic perpetrators as white. If we compare the rate of “white” victims of police shootings to the rate of crimes committed by “whites+Hispanics”, then, our numbers will compare apples and oranges; to make the comparison meaningful, our only choice is to compare the rate of “white+Hispanic” victims of police shootings with the rate of “white+Hispanic” violent crimes. As I demonstrate in the original essay, this once again turns out to strengthen my analysis in the end, because it inflates the actual white per capita rate of crime. In the meantime, I’ll use the phrase “non–black” to refer to this combined “white+Hispanic” number. So while the report identifies 597 white victims and 390 black victims in its tally, we must add the 251 Hispanic victims to the white tally to obtain the appropriate numbers for the purposes of this calculation: 390 black to 848 “non–black” persons shot by police.

2013 black deaths: 390 (30% of total)
2013 white deaths: 597 (47% of total)
2013 Hispanic deaths: 251 (19.7% of total)
2013 “white” deaths: 848 (67% of total)
2013 total known race: 1271
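As a quick check on the shares in this tally, here is a minimal sketch that recomputes each percentage from the counts listed above (the total of 1,271 includes a small number of victims of other races, which is why the three groups do not sum to it):

```python
# Percentage shares of the 2013 tally listed above.
black, white, hispanic = 390, 597, 251
total_known = 1271  # total with known race; includes a few victims of other races

for label, n in [("black", black), ("white", white), ("Hispanic", hispanic),
                 ('"white"+Hispanic', white + hispanic)]:
    # e.g. black: 30.7%, "white"+Hispanic: 66.7% (which rounds to 67%)
    print(f"{label}: {n / total_known:.1%}")
```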

And with that, we can finally plug these numbers back in to my original calculation:

390 black deaths out of a black population of 37,685,848 equals 0.0010349% of that total population shot by police, or 1.03 deaths per 100,000 black individuals.

848 non–black deaths out of a non–black population of 271,059,650 equals 0.00031284% of that total population shot by police, or 0.3128 deaths per 100,000 “non–black” individuals.

Using the official arrest data for 2013 collected by the Bureau of Justice Statistics (which we have established underestimates the rate at which black perpetrators are disproportionately responsible for violent crime, according to victim and witness reports spanning across decades), the 2013 violent crime rate per 100,000 people for black individuals is 465.7, while the 2013 violent crime rate per 100,000 people for “non–black” individuals is 122.7.

Thus, as before, dividing the violent crime rate by the police–shooting death rate gives us the number of violent crimes committed per police shooting—and therefore the number of justified police encounters with members of that racial demographic group per police shooting.

The “black” rate is therefore 465.7 divided by 1.035, or 449.95.

The “non–black” rate is 122.7 divided by .3128, or 392.26.

In other words, black individuals will commit about 450 violent crimes before any one black individual ends up shot by police—whereas “non–black” individuals will commit about 392 violent crimes before any one “non–black” individual ends up shot by police. Put differently, police will encounter 15% more violent or legitimately suspected black individuals before shooting one—and conversely, 13% fewer violent or legitimately suspected “non–black” individuals before shooting one. The “non–black” suspects end up getting shot faster.

Reducing those numbers down to make them more comprehensible, the proportions are as if police shot one out of every seven black individuals they encounter in the line of duty, but one out of every six “non–black” individuals (6:7 is the smallest whole–number ratio approximating that 15% difference, which makes the relative situation of blacks and “non–blacks” easier to visualize, even though police shootings obviously do not happen at anywhere near this frequency). Once violence is accounted for, the numbers are in blacks’ favor—not against them.

To arrive at these same numbers by a different means: the likelihood that a black individual committing a crime will end up shot by police is 1/450, or 0.0022, while the likelihood that a “non–black” individual committing a crime will end up shot by police is 1/392, or 0.0025. The likelihood that a “non–black” individual legitimately suspected of committing a crime will end up shot by police is therefore 0.0003 larger than the likelihood that a black individual legitimately suspected of committing a crime will be—and since that 0.0003 increase is roughly 15% of the baseline value of 0.0022, this again means that non–black individuals who encounter police are about 15% more likely to be shot during the course of that interaction.
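The whole chain of division above can be reproduced in a few lines. This is a sketch using only the figures as given in this essay (the population counts, the Killed By Police tallies, and the quoted 2013 violent-crime rates); it verifies the arithmetic, not the underlying data:

```python
# Reproduce the calculation above from the essay's own figures.
black_deaths, nonblack_deaths = 390, 848
black_pop, nonblack_pop = 37_685_848, 271_059_650

# Police-shooting deaths per 100,000 population
black_death_rate = black_deaths / black_pop * 100_000           # about 1.03
nonblack_death_rate = nonblack_deaths / nonblack_pop * 100_000  # about 0.31

# 2013 violent-crime rates per 100,000, as quoted in the essay
black_crime_rate, nonblack_crime_rate = 465.7, 122.7

# Violent crimes committed per one police-shooting death
black_crimes_per_shooting = black_crime_rate / black_death_rate           # about 450
nonblack_crimes_per_shooting = nonblack_crime_rate / nonblack_death_rate  # about 392

# Relative difference between the two rates
relative_gap = black_crimes_per_shooting / nonblack_crimes_per_shooting - 1
print(round(black_crimes_per_shooting), round(nonblack_crimes_per_shooting))  # 450 392
print(f"{relative_gap:.0%}")  # 15%
```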

Black individuals end up shot by police out of proportion to their population rate, then, because black individuals commit violent crimes out of all proportion to their population rate, and because police therefore end up in a disproportionate number of encounters with black individuals for perfectly justified reasons; not because police are more likely to shoot any given black individual they come into contact with due to subliminal racism—because in fact, once properly controlled, the data demonstrates exactly the opposite.

On a related note,

Another set of data I’ve presented has been critiqued on very similar grounds—and once again, while there is validity to this critique, correcting it only serves to strengthen my case. In the fourth entry to the “Is Dylann Roof ‘White Like Me’?” series, I wrote: “It’s worth making a comparison of the relative rates of police brutality and black–on–white violence in the United States to try to put things in perspective. According to the FBI, there were an average of 14,545 murders per year across the years of 2011–2013, which comes out to an average just shy of 40 murders per day. Since African–Americans commit approximately half of those, and pick white victims about 1/5th of the time, that means there are about four black–on–white murders every day in the United States. White perpetrators commit the other half of murders in the United States, but only choose black victims about 2.4% of the time—which means there is slightly less than one white–on–black murder in the United States every two days.

According to figures that do not take statistics reported by police departments for granted, but in fact call them into question, based on data from the early months of 2015, police kill approximately 2.6 subjects per day—approximately half of whom are black, which brings the number down to 1.3 police shootings of black suspects per day. Of this number, it is unclear how many are justified or unjustified. According to the FBI, in 2013 police were attacked by someone carrying a weapon roughly 10,000 times—2,200 of those times with a firearm. If police kill 2.6 suspects per day every day for a year, that’s still only about 1000 total killings at the end of the year. Some liberal readers may point to gaps in the data (call it the “racism of the gaps” strategy) and insist on disagreeing, but if police are killing suspects far less frequently than they’re being attacked by them, it seems safe to me to bet that the vast majority of those killings are probably justified.

However, even if we assume that every single one of them was unjustified, combining the number of police shootings of black suspects per day (1.3) with the number of white murders of black victims per day (0.48) would still give us a smaller number (~1.8) than the number of black murders of white victims every day in the United States (~4). More than twice as many black murderers are choosing white victims as the number of white murderers choosing black victims and the number of police shooting black suspects (justified or not) combined. (Meanwhile, there are 16 black murders of black victims every single day across the United States—more than eight times the number of white civilian murders and justified or unjustified police shootings of black victims combined.)”
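The daily-rate comparison in the quoted passage reduces to a few multiplications. Here is a minimal sketch from its own inputs (the FBI murder average, the 1/5 and 2.4% victim-choice figures, and the assumed 1.3 daily police shootings of black suspects); all of these are the passage's assumptions, not independently verified data:

```python
# Daily rates implied by the figures in the quoted passage.
murders_per_day = 14_545 / 365                    # about 39.8 murders per day

black_on_white = murders_per_day * 0.5 * (1 / 5)  # half of murders, 1/5 white victims
white_on_black = murders_per_day * 0.5 * 0.024    # other half, 2.4% black victims
police_on_black = 2.6 * 0.5                       # assumed half of 2.6 daily shootings

combined = white_on_black + police_on_black
print(round(black_on_white, 1), round(combined, 1))  # 4.0 1.8
```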

(I continued: “However, both of these statistics still need to be placed in the wider context that murder accounts for only 0.6% of the deaths in the United States in general. While there are approximately 40 murders, 4 of which are black–on–white, on a typical day in the United States, on the same day 90 Americans will die in car crashes, 110 will commit suicide, 120 will overdose on drugs, 256 will die in accidental falls or other accidents, 1580 will die of cancer and more than 1600 will die of heart attacks. If Roof is concerned about “saving the white race,” then Burger King, cigarettes, drunk driving, wobbly ladders and clinical depression are far more formidable foes than black criminals. But what goes for Roof’s underlying logic goes for “#blacklivesmatter,” too. Tim Wise is right that it’s only a tiny fraction of the black population who commits an act of violence in any given year—the only problem with that is the hypocritical inconsistency we well know to expect should anyone say the same about racist attacks against black Americans, whether committed by civilians or police, which even combined are still only half the size of the fraction of black citizens committing acts of violence Wise himself has just called “tiny.” Whatever goes for the relative insignificance of disproportionate black–on–white violence goes at least twice as much for both white–on–black and police–on–black violence combined. And it goes even more so for hysteria about mass shootings, which make up only 0.2% of that 0.6% of deaths in America.”)

In fact, these numbers overestimated what percentage of police shootings are of black suspects—once again in order to artificially strengthen the claims I am presenting arguments against. Where I granted the assumption that 50% of the 1,100 yearly (or 2.6 daily) police shootings are of black suspects, in fact, as detailed above, the actual range falls somewhere between 21–34% (and closer to the lower of those numbers in more accurate, and recent, sets of data)—rather than 550 black suspects shot by police each year, we’re actually talking about fewer than 150.

Likewise, when I quoted these paragraphs on Reddit, the top criticism claimed that my implications must be incorrect because I’ve portrayed the situation as if only black and white individuals commit murder. But once again, I arrived at these numbers by starting from the established knowledge that black perpetrators commit half of the national total and then lumping all the rest of the murders in as “white”, in order to artificially strengthen the case I was critiquing. Not only are there apparently 72% fewer total shootings of black suspects than I assumed in this calculation—instead of 4 police shootings of black suspects every three days, the most damning data suggests an average of only 2 police shootings of black suspects every five days—but every murder committed by an East or South Asian or a member of any other non–white, non–black ethnic group subtracts from the white number of murders of either white or black victims which I assume, not from the black number. My simplification of the numbers drastically strengthens the left–wing case which I have attacked; it does not distort the facts in my favor. And even while granting the left–wing case this many assumptions—including that every single one of those 1.3 daily police shootings (a number the Killed By Police dataset actually places at 0.4) was unjustified, with not one of them ever justified by self–defense at all—the numbers still result in the conclusion that “More than twice as many black murderers are choosing white victims as the number of white murderers choosing black victims and the number of police shooting black suspects (justified or not) combined.”

In truth, the shorthand inaccuracies in my argument actually understate the strength of its conclusion—the reality of the situation is even more strikingly skewed in this direction than these numbers suggest.