I arrived for my meeting with Professor Chambers at the pleasant Cardiff pub near his office where we’d agreed to have lunch. He was already sitting at the back of the room, and waved me a hello as I entered.
Professor Chris Chambers is a disarmingly laid-back Australian in his late 30s. In what seemed to be a complete submission to cultural stereotypes, he was, at the time, wearing a T-shirt and baggy shorts (despite the rain outside). He is also completely bald, to a “shiny” extent. I’ve met several younger male professors now who have little to no hair on their heads. My theory is that their big powerful brains generate so much heat that it scorches the follicles from the inside.
Anyway, I decided to take the plunge and just say what I wanted from him: “Can I use one of your MRI scanners to scan my own brain while I’m happy, to see where happiness comes from in the brain?”
After about five minutes, he finally stopped laughing in my face. Even the most optimistic person would have to concede that this was not a good start. For the next hour or so, Professor Chambers explained to me, in detail, why my plan was ridiculous.
That’s not really how fMRI works, or how it should work. When fMRI was developed, back in the ’90s, in what we call the “bad old days” of neuroimaging, there was a lot of what we called “blobology”: putting people in scanners and hunting around for “blobs” of activity in the brain.
One of my favorite examples of this is from one of the very first conferences I went to; there was a study being presented called “The fMRI of Chess vs Rest.” Basically, you had people lying in a scanner, either playing chess or doing nothing. The whole brain was active, but in different ways in the two scenarios, and in the chess scenario certain brain regions would show up as “more” active. From this, they then claimed these regions are responsible for the processes involved in chess. There was so much reverse inference applied: This part is active, and we do these things in chess, so that must be what those areas are for. It’s working backward. It’s viewing the brain like a car engine, as if each brain region must do one thing and one thing only.
This approach leads to wrong conclusions: you see activity in a brain region and assign it a specific function. But it doesn’t work like that. Each region supports multiple functions, and each function is spread across multiple regions, coordinated as cognitive networks. It’s very complicated. That’s a problem with neuroimaging generally, and it goes up a notch further when you’re dealing with anything subjective, like happiness.
Despite my openly joining in laughing at the naïve fools who thought you could use an fMRI to find out where chess playing comes from in the brain, I was dying of embarrassment on the inside. I’d hoped to do something very similar myself. I was, to utilize a term I’d only just discovered, being a total blobologist.
Turns out, it’s one thing to use imaging tools to study something like vision: you can reliably control what your subjects see, presenting each subject with the same image for consistency, and so locate and study the visual cortex. But it’s a lot trickier to study what Professor Chambers terms “the interesting stuff”: the higher functions, such as emotions or self-control.
“The question is not ‘Where is happiness in the brain?’ That’s like asking ‘Where is the perception of the sound of a dog barking in the brain?’ The better question is ‘How does the brain support happiness? What networks and processes are used to give rise to it?’ ”
Professor Chambers also touched on another issue: What is happiness, in the technical sense? “What timescale are we talking about? Is it an immediate happiness, like ‘this pint is nice!’? Or is it long-term and general, like your children making you happy, or working toward a goal, achieving contentment in life, being calm and relaxed, things like that? You have several levels of functioning in the brain supporting all this, and how do you unpack that?”
By now, I’d abandoned all hope of doing my half-cocked idea for an experiment, and admitted as much. Professor Chambers, despite my earlier fears about the ferocity of professors confronted by inferior intellects, was very nice about the whole thing, and said he would normally be willing to let me go ahead with it, if only to provide a useful demonstration of the technique. Unfortunately, fMRIs are incredibly expensive to run and several research groups are always vying for their use. It would probably upset a lot of people if he wasted precious scanner time allowing a buffoon to probe his own cortex for happiness.
I considered offering to pay the costs myself, but they were just too high. Not all writers are J.K. Rowling, and as generous as my publicist Sophie is when it comes to processing expenses submitted to the publishers, even they would baulk at a claim like this. £48 for a train ticket, £5 for a sandwich, £3 for a coffee, £13,000 for a day of fMRI. I couldn’t see that slipping by the accounts department unnoticed.
Rather than just writing the meeting off as a lost cause, I decided to ask Professor Chambers if there were any other issues with the fMRI approach I should be wary of, before I attempted to rework my ideas to something more “feasible.”
It turned out Professor Chambers is a very keen and active individual when it comes to highlighting the issues and problems that afflict modern neuroimaging studies, and psychology in general. He’s even written a book, The Seven Deadly Sins of Psychology,1 all about how modern psychology could and should be improved.
There are several important issues about fMRI that clarified just how hard it would be for me to use it to set up an experiment to find happiness. Firstly, as stated, it’s expensive. So studies that utilize it tend to be relatively small, using a limited number of subjects. This is an issue, because the fewer subjects you use, the less certain you can be that your results are significant. The greater the number of subjects used, the greater the “statistical power”2 of any results, and the more confident you can be that they’re valid.
Consider rolling a die. You roll it 20 times, and 25 percent of those rolls come up six. That’s five sixes. You might think that’s a bit unlikely, but still perfectly feasible. It wouldn’t seem significant. Now say you rolled it 20,000 times, and 25 percent of those rolls came up six. That’s 5,000 sixes. Now that would seem weird; you’d probably conclude there’s something up with the die, that it must be rigged or loaded in some way. It’s the same with psychology experiments: getting the same effect or result in five people is interesting, but in 5,000 people it’s possibly a major discovery.
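The dice arithmetic can be checked directly. Here is a minimal Python sketch, assuming a fair die (so the chance of a six is 1/6) and summing the binomial tail in log-space so the 20,000-roll case stays numerically stable:

```python
from math import exp, lgamma, log

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), with each term computed in
    log-space so the huge binomial coefficients don't overflow."""
    total = 0.0
    for i in range(k, n + 1):
        log_term = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                    + i * log(p) + (n - i) * log(1 - p))
        total += exp(log_term)
    return total

# Five sixes in 20 rolls: unlikely-ish, but nothing remarkable.
print(f"P(>= 5 sixes in 20 rolls):       {binom_tail(20, 5, 1/6):.3f}")
# 5,000 sixes in 20,000 rolls: over 30 standard deviations above chance.
print(f"P(>= 5000 sixes in 20000 rolls): {binom_tail(20000, 5000, 1/6):.1e}")
```

With a fair die the first probability comes out around 0.23, entirely plausible, while the second is vanishingly small. That is the intuition behind statistical power: the same proportion becomes overwhelming evidence once the sample is large enough.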
Doing an experiment with one person, like I was hoping to do, is essentially pointless in the scientific sense. Good to know before I got started.
Professor Chambers then explained that this expense also means that very few experiments are repeated. The pressure on scientists to publish positive results (i.e. “We found something!” as opposed to “We tried to find something, but didn’t!”) is immense. These are more likely to be published in journals, to be read by peers and beyond, to improve career prospects and grant applications, and so on. But it’s also best to repeat experiments where possible, to show that your result wasn’t a fluke. Sadly, the pressure on scientists is to move on to the next study, make the next big discovery, so interesting results are often left unchallenged,3 especially with fMRI.
So, even if I could run my experiment, I really should run it again and again, no matter what the result, even if it wasn’t giving me the data I wanted. And that’s another thing.
The data produced by fMRI aren’t nearly as clear as mainstream reports suggest. Firstly, we talk about which parts of the brain are “active” during a study, but as Professor Chambers pointed out, “This is effectively nonsense. All parts of the brain are active, all the time. That’s how the brain works. The question is how much more active are these certain regions, and is it significantly more active than it usually is?”
To even get to the standards of “blobology,” you have to determine which blobs on the scanner are the “relevant” ones. This is a big ask when doing something as fiddly as monitoring the activity of specific areas of the brain. For starters, what counts as a “significant” change in activity? If every part of the brain shows fluctuating activity all the time, how much does the activity have to increase by in order to be considered relevant? What’s the threshold it has to get to? This can vary from study to study. It’s a bit like being at a pop concert of the latest megastar and attempting to work out who’s the biggest fan by listening for the loudest infatuated screams; possible, but by no means easy, and a lot of work.
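The threshold problem is partly just arithmetic. A back-of-the-envelope sketch, using hypothetical numbers of my own choosing (a scan divided into 100,000 voxels, each tested at an uncorrected threshold of p < 0.001, and treated as independent for simplicity):

```python
# Hypothetical illustration: the numbers below are assumptions, not
# figures from any particular study.
n_voxels = 100_000   # locations tested across the brain
alpha = 0.001        # uncorrected per-voxel significance threshold

# Even with no real signal anywhere, this many voxels are expected to
# cross the threshold by chance alone:
expected_false_blobs = n_voxels * alpha
print(expected_false_blobs)  # 100.0

# And the chance of at least one spurious "blob" is essentially certain:
p_any_false_blob = 1 - (1 - alpha) ** n_voxels
print(round(p_any_false_blob, 6))  # 1.0
```

In other words, the threshold itself, and any correction for making thousands of comparisons at once, are analytical choices, and different choices produce different blobs.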
This, as Professor Chambers explained, results in another glaring issue.
“fMRI has a huge what we call ‘researcher degrees of freedom’ problem. People often don’t decide how they’re going to analyze their data, or sometimes even which question they’re going to ask, until after they’ve run their study. And they go ahead, and they explore, and they have this ‘garden of forking paths’ problem, where in even the simplest of fMRI studies there are thousands of analytical decisions to make, each one of which will slightly change the outcome they get. So what researchers will do is mine their data at the end to find a result which is useful.”
This comes about because there are many different ways to analyze complex data, and one combination of approaches may provide a useful result where others wouldn’t. It may sound dishonest, somewhat like firing a machine gun at a wall, drawing a target around the biggest cluster of bullet holes, and claiming to be a good shot. It’s not that bad, but it’s heading that way. Then again, when your career and success depend on hitting the target and this option is available, why wouldn’t you use it?
But this was just the tip of the iceberg regarding all the issues that come with running fMRI experiments. Professor Chambers had potential answers and solutions to all of these problems: reporting methods of analysis in advance of actually doing them; pooling data and subjects between groups to increase validity and bring down costs; changing the way scientists are judged and assessed when awarding grants and opportunities.
All good, valid solutions. None of which helped me. I came to this meeting hoping to use some high-tech wizardry to locate where happiness was coming from in my brain. Instead, my brain was left reeling with the myriad problems of advanced science, and feeling distinctly unhappy about it.
Professor Chambers eventually headed back to work, and I made my disappointed way home, my head buzzing with more than just the two beers I’d consumed during our talk. I’d started out thinking it would be relatively easy to determine what makes us happy, and where happiness comes from. It turned out that even if the scientific techniques I’d hoped to use were straightforward (which they really aren’t), it had become obvious that happiness, something everybody experiences, everybody wants, and everybody feels they understand, is far more complicated than I’d anticipated.
I see it like a burger. Everyone knows what burgers are. Everyone understands burgers. But where do burgers come from? The obvious answer would be “McDonald’s.” Or “Burger King.” Or another eatery of your choice. Simple.
Except burgers don’t just pop out of the void fully formed in a fast-food restaurant’s kitchen. You’ve got the beef (assuming it’s a beef burger) that’s been ground down and formed into patties by the supplier, who gets the beef from a slaughterhouse, which gets it from a livestock supplier, who rears and feeds cattle on grazing land, consuming considerable resources in the process.
Burgers also come in buns. These come from a different supplier, a baker of some description, who needs flour and yeast and many other raw materials (perhaps even sesame seeds to sprinkle on top) to be pounded together and placed into an oven, which needs constant fuel to burn and create the necessary baking heat. And don’t forget the sauce (extensive quantities of tomato, spices, sugars, packaging assembled by industrial-level processes) and garnish (fields dedicated to growing vegetables, which need harvesting, transporting, and storing, via complex infrastructure).
And all these things just provide the basic elements of a burger. You still need someone to assemble and cook it. This is done by actual humans who need to be fed, watered, educated, and paid. And the restaurant supplying the burgers needs power, water, heat, maintenance, etc. in order to function. All of this, the endless flow of resources and labor that your average person doesn’t even register, goes into putting a burger onto a plate in front of you, which you might eat, absent-minded, while staring at your phone.
A convoluted and complex metaphor perhaps, but that’s the point. Looking closely, it seems that a burger and happiness are both familiar-but-pleasant end results of a ridiculously complicated web of resources, processes, and actions. If you want to understand the whole, you must also look at the parts it’s made up of.
So, if I wanted to know how happiness worked, I needed to look at the various things that make us happy, and figure out why. I resolved to do just that. Right after I’d had a burger.
Don’t know why, but I was suddenly craving one.
Dean Burnett is a neuroscientist who lectures and tutors at the Centre for Medical Education at Cardiff University, and writes the Guardian’s popular science blog, Brain Flapping.
1. Chambers, C. The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice. Princeton University Press, Princeton, NJ (2017).
2. Cohen, J. The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology 65, 145–153 (1962).
3. Engber, D. Sad face: Another classic psychology finding—that you can smile your way to happiness—just blew up. slate.com (2016).