
Did I choose to write this article?

  • candicesavary1
  • Sep 19, 2022
  • 14 min read

Updated: Aug 12

Free will


Psychology, as a discipline, has the unique capacity to investigate human phenomena often considered too complex or abstract for traditional scientific inquiry. Psychologists have joined the wrestle with the age-old philosophical puzzle that is free will. Although it is often considered beyond the reach of empiricism, we will explore how it can be drawn into psychology and neuroscience.


What does it mean to choose? To act? To be held responsible?


 Typically, we imagine ourselves as sovereign agents who are masters of our intentions, in control of our decisions. But what does it actually mean to have free will? And can such a question be approached through a scientific lens using empirical data, neuroscience, and biological insights?

Before looking at how psychology and biology have attempted to answer the free will question, it's worth asking: what do we really mean by free will, and what can biology have to do with it?


Free Will on Death Row 


One of the most interesting ways many of us think about free will is through our capacity for evil. When someone like us does something horrible, we twist and turn our philosophies and morals, working overtime to build a scenario where that person didn’t really have a choice, trying to find a classic ‘I had to shoot him or he’d shoot me’. We manage to elegantly strip the ‘will’ out of the situation, take the blame out of it, just enough so we can feel a little better about the human race and keep things looking moral.

And so, this becomes one of the most popular ways we like to think about free will: When did I have a choice, and when did I not?

But lucky for you, this is a neuroscience article. So instead of talking about circumstances, we’re going to talk about biology (yay). What biological conditions might mean we don’t actually have a choice? That the will to decide just isn’t there? Is that even a thing?


Well, let’s start by looking at what does what in the brain. One brain region that shows up again and again when some wildly ambitious people try to marry science and morality is the amygdala. It’s a tiny, almond-shaped structure buried deep in your brain, famous for its role in fear: triggering fear responses, dealing with emotional reactions, and helping us recognise those emotions in others. So what happens when the amygdala doesn’t work?


Unfortunately for amygdala researchers, taking out little parts of the brain is illegal. But once in a while, nature does it for us. The problem is, when brain regions get damaged, especially early on, other parts often get damaged too, which makes it tricky to say ‘this part does that’.


But then there’s Patient SM, crowned best person to research by the amygdala people, because both of her amygdalae were completely missing by the time she reached adulthood. Unfortunately for SM, it’s not just fearless fun: this is someone who is literally incapable of feeling fear or anxiety. She reads most facial expressions just fine, except fearful and angry ones. Scary movies don’t scare her; she watches them with an odd, detached curiosity. Her kids talk about how she’ll casually pick up a snake on the side of the road without the slightest hesitation. She doesn’t process fear, doesn’t recognize fearful faces, and can’t detect when someone’s stabbing her in the back in an economic game. Where most of us would think that bastard and change our strategy, she doesn’t. So the amygdala helps us do a lot of things: recognise danger, feel fear, and read social cues. Clearly crucial to being a good guy in society, right?


The American death row is where the really bad guys in society go. The grim and unsettling place where some states house their supposed worst: those who’ve committed acts so grotesque they warrant the ultimate punishment. Of course, this horrible consequence operates on the notion that these individuals are fully responsible for their choices, that they had free will. But what if those “choices” weren’t really choices in the way we like to think?

A study of 636 death row inmates in California revealed that a huge proportion scored within the clinically psychopathic range, a diagnosis rooted in neurobiological dysfunction. These individuals often show severe deficits in empathy, remorse, and emotional regulation. Put some of these people in an fMRI scanner and a little part of their brain is often impaired too: a shrunken or underactive amygdala. In some cases, the neural pathways connecting the amygdala to the prefrontal cortex (the region responsible for judgment, inhibition, and moral reasoning) are frayed or barely functional.


So, the part of the brain that helps most people pause before doing harm? For many of these inmates, it just doesn’t work the same way. This is not where we say all’s well that ends well and forgive and forget, but rather where we consider that for some, regulating harmful impulses is far harder than it is for neurotypical individuals, and that these things gone wrong in the brain may give us a little insight into humans at their worst.

And so, we’re left with a thorny question: if someone’s brain is biologically ill-equipped to register fear or guilt, did they really choose to be cruel? Or were they simply given a less sensitive set of tools to detect when they should have made a different choice?

Many of these behaviour-to-biology mappings have led researchers to the question: how much of what we call “choice” is actually just a byproduct of brain architecture, which may be largely invisible to our conscious awareness?


As human behavior becomes increasingly explainable in biological terms, naturally the role of free will becomes a little harder to fit in. The death row case may be extreme, but this same biological logic weaves through everyday life. The genes we inherit code for proteins that shape our brains, regulate our hormones, and fine-tune our neurotransmitters, all of which help dictate how we feel, act, and respond.

In a more general sense: a spike in cortisol can turn a minor inconvenience into a full-blown crisis. An overreactive amygdala might interpret a passing glance as a threat (Phelps et al., 2004; Kuhlman et al., 2018). These predispositions don’t control us outright, but they do stack the deck. And a stacked deck can nudge us toward certain behaviours more than it nudges the next guy.


Way back when


Moving from neuroanatomy, we can look inward at the quiet instincts that shape us. How much of our everyday behaviour is truly chosen, and how much is inherited strategy?

One way some psychologists approach free will is by zooming out and looking at our history using the lens of evolutionary psychology. This perspective suggests that many of the behaviours, thoughts, and moral instincts we like to think of as freely chosen might actually have been carefully selected over millions of years. In this view, behaviour isn’t so much about conscious choice, but about evolved strategies, psychological adaptations shaped not by reflection, but by the goal that supposedly drives life: get our genes into the next generation.


At its root, evolution doesn’t care if something is meaningful, it cares if it works. We tend to think of this in terms of physical traits, like Darwin’s finches with beaks shaped to crush certain nuts. But the same idea applies to psychological traits too. If a behaviour helps get genes into the next generation, it sticks around. So a lot of what feels personal or profound (parental love, romantic attachment, the need to fit in) might actually just be practical. In evolutionary terms, they’re strategies. Wonderful, loving, kind (gene-spreading) strategies.

Take the altruism we reserve for family. That instinct to protect your siblings, help your parents, or support your kids when they need you. Evolutionary psychology suggests this isn’t just niceness; it’s strategy. From a biological point of view, helping close kin can be just as beneficial as reproducing yourself, because your relatives share much of your genetic material. This is the principle of kin selection: the idea that even our most selfless-seeming behaviours may, in fact, be clever tactics for keeping our genes in circulation.

The animal kingdom is full of fascinating examples of this, but one especially neat case comes from vampire bats. These bats often share their hard-earned blood meals (as one does) with others who didn’t manage to collect any. But they’re not just being randomly generous, they're statistically far more likely to share with those they’re genetically closer to: more with siblings than cousins, more with cousins than distant relatives, and more with relatives than strangers. In fact, the degree of altruism tracks almost perfectly with the degree of genetic relatedness.
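The logic behind kin selection is usually summarised as Hamilton’s rule: an altruistic act is favoured by selection when r × B > C, where r is genetic relatedness, B the benefit to the recipient, and C the cost to the actor. A minimal sketch of the arithmetic (the fitness numbers are invented for illustration, not measured bat data):

```python
def altruism_favoured(r, benefit, cost):
    """Hamilton's rule: an altruistic act is selected for when r * B > C."""
    return r * benefit > cost

# Standard coefficients of relatedness for common kin.
relatedness = {"sibling": 0.5, "cousin": 0.125, "stranger": 0.0}

# Illustrative payoffs: donating a blood meal costs the donor 1 fitness
# unit and grants the hungry recipient 3.
for kin, r in relatedness.items():
    print(f"{kin}: share = {altruism_favoured(r, benefit=3, cost=1)}")
```

With these toy numbers, sharing passes the threshold for a sibling (0.5 × 3 > 1) but not for a cousin or a stranger, mirroring the gradient of generosity the bats show.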


Importantly, none of this needs to be conscious. Evolution doesn’t design behaviours with explicit intentions but it shapes motivational systems. What we feel as generosity or protectiveness is likely the result of emotional and cognitive structures sculpted by natural selection. With this lens, our intentions may feel authentic and self-driven but they’re mostly instrumental within a system designed for survival.


Prominent anti–free will advocate and neuroscientist Robert Sapolsky argues that every human behaviour, whether kind or cruel, impulsive or deliberate, can be traced to a chain of prior causes: neural activity seconds before, hormonal fluctuations hours before, formative experiences years earlier, cultural norms shaped over centuries, and evolutionary pressures stretching back millions of years. From this view, even our psychological lives, grounded in biology, are built on inherited genetic codes selected across generations. Within such a framework, the idea of an uncaused, freely chosen will becomes increasingly difficult to pin down.


However, as comprehensive and compelling as this theory may appear, evolutionary psychology is not without its critics. For one, it’s not as if there’s a specific gene for choosing to love your mother or for deciding on a particular career. Instead, a complex interplay of many genes may predispose certain tendencies, which only partially shape decisions. And it is not as if we know exactly what those genes are, or how much they contribute to any sort of psychological phenomenon.


There are also methodological challenges. Behaviours don’t fossilise: we can’t observe how or why our ancestors acted in the distant past, or why a behaviour like altruism towards family may have been selected over the alternatives. We also can’t rewind time and return to early human societies to test whether certain behaviours truly conferred a selective advantage. In a similar vein, many evolutionary explanations rely on assumptions about ancient selection pressures, which may not be directly testable. This risks circular reasoning: using modern behaviour, and environments we assume were always around, to infer ancestral environments and behaviours, which are then used to explain modern behaviour. We’re left with speculative narratives that sound plausible but lack empirical backing.


Finally, human behaviour is incredibly flexible. Culture, learning, and individual experience can reshape or even override evolved tendencies. People tend not to be mega fans of evolutionary psychology for this reason: it makes a good case for explaining a whole bunch of things, but it leans heavily on biology, makes lots and lots of assumptions, and can be seen to overlook the influence of context and lived experience.

In any case, this remains an interesting way to think about which traits are chosen and exercised freely, as opposed to being simply the sum of our evolutionary, genetic, and environmental inheritance.


To the Lab!


If all that biological rigour wasn’t enough to explain something like free will, the extra-extra ambitious have gone a step further and searched for the neural signature of this phenomenon. In the 1980s, neuroscientist Benjamin Libet set out to test whether the brain initiates actions before we become consciously aware of deciding to act. The experiment was simple: participants were instructed to make a voluntary hand movement at any moment of their choosing, while EEG measured brain activity. He focused on the “readiness potential” (RP), a signal in the motor cortex known to precede any type of voluntary motor action.


What he found was interesting: the RP began several hundred milliseconds before participants reported the conscious intention to move. So, putting two and two together - if the part of the brain that precedes motor action gets there first, Libet argued, maybe free will is just our mind catching up: a post hoc rationalisation rather than a cause.
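Libet’s argument is easiest to see laid out on a single timeline. Averaged across trials, the RP began roughly 550 ms before the movement, while the reported conscious urge (the ‘W time’) came only about 200 ms before it; the sketch below uses those reported averages purely for illustration:

```python
# Times in milliseconds relative to the moment of movement (t = 0),
# using roughly the averages Libet reported.
RP_ONSET = -550   # readiness potential begins in motor cortex
W_TIME   = -200   # participant reports the conscious urge to move
MOVEMENT = 0      # muscle activity marks the actual hand movement

# The gap Libet's case rests on: the brain is already preparing the
# action well before the reported intention appears.
gap = W_TIME - RP_ONSET
print(f"RP precedes the reported intention by {gap} ms")  # 350 ms
```

That 350 ms head start is the whole puzzle: preparation first, felt decision after.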

But like every really cool study from the 80s, Libet’s experiment has lots of methodological challenges that came its way. 


Timing the onset of conscious will from self-report, down to the millisecond, is inherently fraught. There’s a subtle but critical difference between wanting to act, becoming aware of that intention, and reporting it, and on such small timescales those distinctions really matter. It’s entirely possible that the motor preparation begins first, and only afterward do we become consciously aware that we wanted to move, because there may be a delay between the actual formation of an intention and our conscious awareness of it. Libet’s experiment couldn’t rule out this sequence. If so, motor activity may not have preceded the intention itself, only our conscious awareness of it.


Moreover, the act of moving one’s hand without purpose or goal is far removed from the kind of complex, morally or existentially significant decisions we face in everyday life, where goals and opinions are involved. Even if Libet’s data suggest that motor activity starts ramping up before a person feels like they’ve decided to move, who’s to say the same holds for bigger, messier choices woven with meaning, morality and consequence? Expanding this finding to morally or emotionally weighty decisions may be extrapolating too far beyond the constraints of this experiment’s context.


Recognising these limitations, Patrick Haggard at UCL refined the model. His experiments introduced more contextual decisions (choosing between rewards, skipping boring trials), actions grounded in subjective reasoning. He, too, observed the RP. But intriguingly, he found that before voluntary actions, brain activity began to stabilise into a consistent pattern, which it sounded pretty cool to name a ‘neural signature’ of intention.

So Haggard posited the concept of an organised internal process that preceded action, not just random build-up, perhaps analogous to volition. And importantly, it was seen in the brain’s frontal regions, linked to planning and reasoning. So we could suggest from these findings that even if conscious awareness comes late to the party, the brain may still be “deliberating” in meaningful ways.


So in light of all of this, we’d argue that even if we decide to do something, there’s already something happening beneath the surface, something that pushes us toward that action before we’ve even realised we want it. That’s a pretty unsettling idea. It challenges the belief that we consciously choose to act, and instead suggests that consciousness might just help us understand what we’re doing, not necessarily decide it.


A bit further away, but still close to the Lab!


So while this is literally taking free will under a microscope, other lab-based experiments hint more subtly at the influence of unconscious factors. Because even if we do manage to isolate a neural signature of decision-making, it’s hard to see how that would neatly map onto something as abstract and slippery as “free will.” You could argue that by confining these questions to brain scans and lab setups, we limit ourselves: we end up measuring when someone lifts a finger, instead of asking how they became the kind of person who decided to join the military. Maybe the more interesting approach is to keep the scientific mindset, but shift our focus to the moments when we feel most in control yet maybe aren’t.


One striking example comes from research on implicit bias. These studies use clever methods to reveal biases people don’t even realise they have. Take the racial Implicit Association Test (IAT). Participants may insist they’re not racist and happily list their diverse group of friends. But when they take the test, something different happens: they tend to pair positive words faster with white faces and negative words faster with Black faces. There’s a measurable delay, what researchers call a 'latency', when they have to associate positive words with Black faces or negative ones with white faces.


The idea is: when two concepts are linked in our minds, like “Black person” and “dangerous”, we categorise them faster because of that bias. When they’re not stereotypically linked, like “white person” and “dangerous”, our brain takes longer, because no such biased link exists. And this doesn’t just stay in the lab: research on police officers found that those showing higher latencies in their IAT scores were more likely to mistakenly shoot Black suspects in simulation exercises.
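To make the latency idea concrete, here’s a toy sketch of how a bias score might be computed from reaction times. The real IAT uses a more involved ‘D-score’ procedure with several blocks and error penalties; this simplified version just compares mean latencies between stereotype-congruent and stereotype-incongruent pairings, scaled by the spread of all trials. The millisecond values are invented for illustration:

```python
from statistics import mean, stdev

def iat_effect(congruent_ms, incongruent_ms):
    """Toy IAT-style score: mean latency difference between blocks,
    divided by the pooled standard deviation (larger = stronger bias)."""
    diff = mean(incongruent_ms) - mean(congruent_ms)
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return diff / pooled_sd

# Invented reaction times (ms): responses are faster when the pairing
# matches the stereotype than when it clashes with it.
congruent = [620, 650, 640, 610, 630]
incongruent = [740, 780, 760, 720, 750]

print(round(iat_effect(congruent, incongruent), 2))
```

A score near zero would mean no measurable latency gap; the further above zero, the stronger the implicit association the test infers.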


Taking an evolutionary angle, these patterns seem tied to evolved mechanisms for distinguishing between in-groups and out-groups. If we think back to our ancestral heritage, a really good way to judge quickly whether the guy walking up to you was dangerous was to check whether he looked different from you; if he did, that often meant a stranger, and a stranger meant a potential threat. So it made sense to evolve protective instincts toward outsiders, instincts that, unfortunately, are argued to be the mechanisms behind certain types of bias today. Even more unsettling, put people under a brain scanner while they look at someone of a different race, and the fear-detecting, ‘should I be scared’ amygdala activates, just for a split second.


Now, this all paints a pretty bleak picture of human nature. It suggests we might be hardwired to distrust, or even dislike, those who aren’t like us, and that these deeply embedded biases shape our actions and beliefs in ways we don’t consciously control. But the key point here isn’t that people are secretly prejudiced and hiding it, it’s that implicit biases may be at play, unconsciously shaping how we see and respond to the world. Still, we can begin to wonder how free we really are in what we like to label choices and opinions.


Another way in which seemingly random stuff influences us without our ever knowing comes from an even weirder and more puzzling study by Adams et al. at the University of Arkansas. This group found that simply being exposed to a bad smell made people more likely to oppose things like gay marriage and premarital sex on a questionnaire. The environment you are in, down to what you smell, could influence your moral judgments? This was followed by some interesting neuroscientific backing. In 2021, researchers at the University of Bologna found that moral disgust activates the same neural pathways as actual, physical nausea. This points to the misinterpretation of an internal signal: physical disgust triggered by the environment gets assigned to a moral concept rather than recognised as a bodily reaction. So the line between something being wrong and something being physically unpleasant might not be as solid as we think. Either way, it’s not a great look if your moral compass is taking cues from your gag reflex.


Of course, these studies aren’t directly about free will, but they nevertheless raise serious questions about how much of the moral judgment we suppose personal may be shaped by stimuli we barely register. If a smell can shift a political opinion, what else could be quietly nudging our choices?


A new angle (because we haven't had enough)


It’s worth considering that a lot of the debate around free will these days focuses on immediate intent. People ask: Are we autonomous in forming an opinion? Are we free in the intent? But none of these questions ask why or how someone ends up being the kind of person who forms a particular conscious intent in the first place. It may be much more interesting to think about how one’s circumstances, both biological and environmental, constrain, mould and morph people into a certain type of person with a set of beliefs that shape their choices, and, in the process, how much control that person ever had over being shaped that way.


As I lightly touched on earlier, this is asking things like: How did everything from your time as a fetus, to all the moments leading up to right now shape you into the kind of person who would shoot a gun or start up a non-profit?

And maybe whether you’re conscious of your intent or not doesn’t even matter all that much when years and years of experiences, experiences you had no control over, have shaped you into the kind of person who makes certain decisions in certain situations.

It’s interesting to notice, too, a pattern in how many neuroscientists and psychologists end their lectures, books, or panels on free will. They wonder: even if free will is an illusion, they’d rather not imagine a society that believes it. And this makes sense. After all, what motivation would you have to get out of bed in the morning if you truly believed that all your actions and decisions were predetermined? So what might it look like to really believe that nothing we ever do will influence anything? Well, that belief is most common in people with depression.


When examining the principal drivers of depression, psychologist Martin Seligman’s research revealed that one of the most powerful predictors of depressive behaviour is the belief, often unfounded, that one has no control over their circumstances. Learned helplessness refers to a psychological state in which individuals, after repeated exposure to uncontrollable stressors, begin to believe that their actions are ineffective, even when opportunities for control are later available. This perception of powerlessness can lead to apathy, low motivation, and deep emotional distress and leave people as shells of themselves.

So through this subset of people, we can get a glimpse into what happens when people genuinely feel they are not in control: when hope and agency vanish, belief in free will starts to look psychologically protective.


So even if free will does not exist in the strictest, metaphysical sense, the belief in its presence appears to be psychologically beneficial. More than sustaining wellbeing, it seems to underpin behaviours associated with growth and progress: taking risks, pursuing opportunities, and making consequential decisions. These actions are often driven not by certainty, but by the conviction that our choices have influence. Whether or not this belief reflects an underlying reality, does that really matter? 


Rounding up all this evidence and these perspectives, perhaps this is the part where I write a nice little thought tying everything together and come to a conclusion. Unfortunately, if you look at the number of question marks in this article, I, too, have been left just as confused as you are.


























 
 
 
