In the winter of 2016, at the largest annual gathering of social psychologists in the world, my collaborators and I were awarded one of the top prizes in the field for a paper we wrote presenting new ideas on the psychology of willpower. For someone who grew up with few books in the house and with parents who occasionally struggled to put meat on the table, I should have been on top of the world.
I wasn’t.
During that same conference I revealed that the work upon which our celebrated theory was based was not replicable, not real. With a large team of co-authors, I discovered that we could not duplicate what was supposed to be an accepted truth about how willpower worked. My own new ideas relied on this accepted truth; without it, there was little left for my theory to explain.
Discovering this, letting it really sink in, undid my world. I wasn’t alone in feeling this way. Everywhere I turned, it looked like my chosen field was coming undone. It looked like my home discipline of social psychology was not built on a solid bedrock of facts and findings that were trustworthy and real and true. Instead, it felt like we were playing scientist without actually practicing science.
Some might cringe at my characterization. After all, social psychology is in the news every day, appearing to provide science-certified answers to life’s vexing questions, including questions about COVID-19. As I write this, articles appearing in major news outlets use psychology to explain why it took us so long to recognize the gravity of the COVID-19 pandemic (blame the optimism bias!), offer advice on achieving peak mental health for a pandemic (celebrate idleness!), and offer guidance on how best to communicate about social distancing (explain why it is important!). We write bestselling books, we advise businesses and governments, and we affect educational policy.
Recent revelations make clear, however, that social psychology does not deserve such fawning attention. It’s not clear that people should be taking our advice or buying our books.
This might seem like a radical statement from a disgruntled scholar starving for the recognition that was always denied him. I am none of those things. By many accounts, I am a successful psychological scientist: I have published over one hundred journal articles, my work has been favorably cited by briefs to the Supreme Court of the United States, and I have given talks all over the globe. Yet, I no longer trust my early work.
Much of my work on willpower focuses on something called ego depletion. Ego depletion refers to the idea that self-control relies on a limited store of resources that gets depleted with use, sort of like mental fuel. When people use up this mental fuel, they cannot control themselves further. Ego depletion explains why if you’ve spent the morning controlling your emotions in the face of your intemperate boss, you might end up eating more fries at lunch, drinking an extra pint at the pub, or maybe flirting with a married coworker.
Given the prominence accorded to self-control in a well-functioning life, ego depletion soon spread beyond social psychology and touched every other branch of psychology and allied fields, including marketing, finance, economics, neuroscience, education, and exercise science. Nobel laureates swooned over it.
Too bad it’s probably not real.
How could something that dominated a scientific discipline for nearly two decades end up as the poster child for what many are calling the replication crisis? First, we have abused our inferential tools by massaging our data to make them say what we want as opposed to letting them reveal their truths. Second, we haven’t bothered publishing our failed experiments, an accounting of which is critical to unearthing the true state of affairs. And, third, we forgot a key step in the knowledge generation puzzle: direct and independent replications. There are other abuses, but these three are why an entire generation of social psychologists is wringing its hands.
When I say we abused our inferential tools, I am not talking about fraud or scientific misconduct. We did not think there was anything wrong with these practices. We thought we were merely allowing the truth that was baked into the data to rise. We did not yet understand how badly these practices warped our scientific inferences. If your analyses don’t work out as planned, play around a little with your variables and see what happens: What happens if you add things to the mix? What happens if you exclude people with troublesome data? The point was to massage the data until they relaxed enough to reveal what everyone wanted: statistical significance.
These so-called questionable research practices, referred to more poetically as p-hacking in reference to statistical p-values, were commonplace in psychology. P-hacking was not something unscrupulous researchers did after dark when their more principled colleagues left the building. No, respectable and eminent scholars p-hacked out in the open, unashamed. My Ivy League professors explicitly encouraged us to p-hack. And because scientific writing is still an act of persuasion, we were taught to frame all the fruits of our exploration as if they were fruits of confirmation, as if we had predicted these baroque patterns all along.
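To make that concrete, here is a toy simulation, my own sketch in Python rather than anything from the original studies, of what that flexibility does when there is no effect at all to find:

```python
# A toy simulation of "flexible" analysis on null data (an illustrative
# sketch, not a reconstruction of any real study). Both groups come from
# the same distribution, so every significant result below is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
ALPHA, N, SIMS = 0.05, 30, 5000

def significant(a, b):
    return stats.ttest_ind(a, b).pvalue < ALPHA

def planned_analysis():
    """One pre-planned t-test, exactly as designed."""
    return significant(rng.normal(size=N), rng.normal(size=N))

def flexible_analysis():
    """Try several common 'tweaks' and report whichever one works."""
    a, b = rng.normal(size=N), rng.normal(size=N)
    # A second outcome measure, moderately correlated with the first.
    a2 = 0.7 * a + 0.7 * rng.normal(size=N)
    b2 = 0.7 * b + 0.7 * rng.normal(size=N)
    attempts = [
        (a, b),                                      # as planned
        (a2, b2),                                    # swap in another variable
        (a[np.abs(a - a.mean()) < 2 * a.std()], b),  # drop "troublesome" participants
        (np.r_[a, rng.normal(size=10)],              # quietly run ten more people
         np.r_[b, rng.normal(size=10)]),
    ]
    return any(significant(x, y) for x, y in attempts)

print("planned:  ", np.mean([planned_analysis() for _ in range(SIMS)]))
print("flexible: ", np.mean([flexible_analysis() for _ in range(SIMS)]))
# Under these assumptions the planned test stays near 0.05, while the
# flexible one comes in noticeably higher: same null data, different story.
```

Nothing in that second function is fraud. It is just a researcher "seeing what happens," and that is enough to manufacture significance out of pure noise more often than the advertised one in twenty.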
Social psychology is further animated by structural priorities that stress confirmation. Our journals, universities, and scientific societies only care about our successes, deeming our failures uninformative and unworthy. If you scan any psychology journal, you will see success after success. Nearly every psychology study published in academic journals vindicates the ideas of its genius makers. Or so it seems.
Reality is quite a bit different. Many of our hypotheses are duds. Often our tools lack the sensitivity to capture the vagaries of the human mind. We fail and we fail often. That’s simply how science works.
In psychology, however, we are prevented from seeing these failed studies, and thus from learning from them. Failures to corroborate simply don’t make it into our journals, and the consequences of this omission cannot be overstated. By some counts, ego depletion has been successfully demonstrated 600 times. But what if we later learned that it had also failed 6,000 times? Our confidence in the reality of ego depletion hinges on how many failures are out there.
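The arithmetic of that file drawer is trivial, but worth spelling out. Here is a quick illustration, using the hypothetical counts above rather than any real tally of the literature:

```python
# File-drawer arithmetic with the hypothetical counts from the text;
# these are not actual tallies of the ego depletion literature.
published_successes = 600
for hidden_failures in (0, 600, 6000):
    total = published_successes + hidden_failures
    rate = published_successes / total
    print(f"{hidden_failures:>5} unpublished failures -> "
          f"success rate {rate:.0%}")
# With no hidden failures the effect looks ironclad (100%); with 6,000 of
# them, "success" happens about 9% of the time, not far from what flexible
# analysis alone can produce from nothing.
```

The published record looks identical in every one of those worlds; only the drawer knows the difference.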
One check on this shadow literature is direct replication. When we repeat our studies, we curb the animation of lifeless ideas. Unfortunately, direct replications were seen as unimportant in social psychology and hardly anyone was doing them.
Like many others, when I first learned of the warning signs—data massaging, refusal to air failed studies, shortage of direct replications—I barely shrugged. Yeah, we shouldn’t be doing these things, I told myself, but our ideas will check out. Just you wait.
What I saw next left me unsteady. Psychologists finally started verifying their studies, stress-testing social psychology by submitting it to direct replications. And the results were grim.
It started as a mere trickle, as upstart journals began publishing exact replications, even replications with unhappy endings. It was easy to delude myself, to not think much of these failed replications. But soon the trickle became a surge, and before long much of social psychology appeared inundated, with only a few solid rooftops above water.
Social psychology has been in turmoil for nearly a decade now, and some things have changed. P-hacking appears to be on the wane, and our journals occasionally publish failed studies and direct replications.
However, I am not as optimistic about our future as I was even a year ago. Scholars have grown tired of all the introspection, as such brooding makes us realize that we are dealing with even deeper crises of meaning. Agony is slowly being replaced with renewed confidence and a desire to move on to sunnier times. You see this in the confidence with which people want to use findings from our field to support the response to the COVID-19 pandemic. There is noble intent here; scholars truly want to help. However, should a field that has been so debased be part of any conversation where human lives are at stake? Should we trust a field that no longer has an appetite to stress-test its canons for fear of what will be discovered?
I had hoped that the replication crisis would have left us chastened. Instead of humility, though, we seem full of pride, eager to tell the world about our latest findings. If the knowledge we produce is unmoored from reality, I believe we should think long and hard before giving advice.
Above all, we need to remind ourselves that despite wanting to move on, our field is still not well. We need to remind ourselves that the replication crisis is not over.