In November 2019, Tal Yarkoni set psychology Twitter ablaze with a fiery preprint, “The Generalizability Crisis” (Yarkoni, 2019). Written in direct, pungent language, the paper fired a salvo at the inappropriate breadth of claims in scientific psychology, arguing that the inferential statistics presented in papers are essentially meaningless due to their excessive breadth and the endless combinations of unmeasured confounders that plague psychology studies.
The paper is clear, carefully argued, and persuasive. You should read it. You probably have.
Yet there is something about the paper that bugs me. That feeling wormed its way into the back of my mind until it became a full-fledged concern. I agree that the verbal claims in scientific articles are often, or even usually, so hopelessly misaligned with their instantiations in experiments that the statistics in papers are practically useless as tests of the broader claims. In a world where claims are not refuted by future researchers, this represents a huge problem. That world characterizes much of psychology.
But the thing that bugs me is not so much the paper’s logic as (what I perceive to be) its theory of how to change scientist behavior. Whether Tal explicitly believes this theory or not, it’s one that I think is fairly common in efforts to reform science, and one shared by many failed reform efforts. I will devote the remainder of this blog to examining this theory and laying out the theory of change that I think is preferable.
A flawed theory of change: The scientist as a logician
The theory of change that I believe underlies Tal’s paper is something I will call the “scientist as logician” theory. Here is a somewhat simplified version of this theory:
- Scientists are truth-seekers
- Scientists use logic to develop the most efficient way of seeking truth
- If a reformer uses logic to identify flaws in a scientist’s current truth-seeking process, then, as long as the logic is sound, that scientist will change their practices
Under the “scientist as logician” theory of change, the task of a putative reformer is to develop the most rigorous logic possible about why a new set of practices is better than an old set. The more unassailable this logic, the more likely scientists are to adopt the new practices.
This theory of change is the one implicitly adopted by most academic papers on research methods. The “scientist as logician” theory is why, I think, most methods research focuses on accumulating unassailable evidence about the optimal methods for a given set of problems: if scientists operate as logicians, then stronger evidence will lead to stronger adoption of those optimal practices.
This theory of change is also the one that arguably motivated many early reform efforts in psychology. Jacob Cohen wrote extensively and persuasively on why, based on considerations of statistical power, psychologists ought to use larger sample sizes (Cohen, 1962; Cohen, 1992). David Sears wrote extensively on the dangers of relying on samples of college sophomores for making inferences about humanity (Sears, 1986). But none of their arguments seems to have mattered much.
In all these cases, the logic undergirding the arguments for better practice is nigh unassailable. The lack of adoption of these suggestions reveals stark limitations in the “scientist as logician” theory. The limited influence of methods papers is infamous (Borsboom, 2006), especially when a paper points out critical flaws in a widely used and popular method (Bullock, Green, & Ha, 2010). Meanwhile, despite the highly persuasive arguments of Jacob Cohen, David Sears, and many other luminaries, statistical power has barely changed (Sedlmeier & Gigerenzer, 1992), nor has the composition of psychology samples (Rad, Martingano, & Ginges, 2018). It seems unlikely that scientists change their behavior on purely logical grounds.
A better theory of change: The scientist as human
I’ll call my alternative to the “scientist as logician” model the “scientist as human” model. A thumbnail sketch of this model is as follows:
- Scientists are humans
- Humans have goals (including truth and accuracy)
- Humans are also embedded in social and political systems
- Humans are sensitive to social and political imperatives
- Reformers must attend to both human goals and the social and political imperatives to create lasting changes in human behavior
Under the “scientist as human” model, the goal of the putative reformer is to identify the social and political imperatives that might prevent scientists from engaging in a certain behavior. The reformer then works to align those imperatives with the desired behaviors.
Of course, for a desired behavior to occur, that behavior should generally align with a person’s goals (though even that is not strictly necessary). Here, reformers who want science to be more truthful are in luck: scientists overwhelmingly endorse normative systems suggesting that they care about the accuracy of their science (Anderson et al., 2010). This also means, however, that if scientists are behaving in ways that appear irrational or destructive to science, it is probably not because they have yet to encounter a sufficiently strong logical argument. Rather, the behavior probably has more to do with the constellation of social and political imperatives in which those scientists are embedded.
This view of the scientist as a member of human systems is why, I think, the current open science movement has been effective where other efforts have failed. Due to the efforts of institutions like the Center for Open Science, many current reformers have a laser focus on changing the social and political conditions. The goal behind these changes is not to change people’s behavior directly, but to shift institutions to support people who already wish to use better research practices. This goal is a radical departure from the goals of people operating under the “scientist as logician” model.
Taking seriously the human-ness of the scientist
The argument I have made is not new. In fact, it is implicit in many of my favorite papers on science reform (e.g., Smaldino & McElreath, 2016). Yet I think many prospective reformers of science would be well served by thinking through the implications of the “scientist as human” view.
While logic may help in identifying idealized models of the scientific process, reformers seeking to implement and sustain change must attend to social and political processes, especially those that affect career advancement, such as promotion criteria and granting schemes. This also means thinking through how a potential reform will be taken up in the social and political environment, especially whether scientists will have the political ability to take collective action to adopt a particular reform. In other words, taking scientists seriously as humans means taking seriously the systems in which they participate.
- Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2010). Extending the Mertonian Norms: Scientists’ Subscription to Norms of Research. The Journal of Higher Education, 81(3), 366–393. https://doi.org/10.1353/jhe.0.0095
- Borsboom, D. (2006). The attack of the psychometricians. Psychometrika, 71(3), 425–440. https://doi.org/10.1007/s11336-006-1447-6
- Bullock, J. G., Green, D. P., & Ha, S. E. (2010). Yes, but what’s the mechanism? (Don’t expect an easy answer). Journal of Personality and Social Psychology, 98(4), 550–558. https://doi.org/10.1037/a0018933
- Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65(3), 145–153. https://doi.org/10.1037/h0045186
- Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159. https://doi.org/10.1037/0033-2909.112.1.155
- Rad, M. S., Martingano, A. J., & Ginges, J. (2018). Toward a psychology of Homo sapiens: Making psychological science more representative of the human population. Proceedings of the National Academy of Sciences, 115(45), 11401–11405. https://doi.org/10.1073/pnas.1721165115
- Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51(3), 515–530. https://doi.org/10.1037/0022-3514.51.3.515
- Sedlmeier, P., & Gigerenzer, G. (1992). Do studies of statistical power have an effect on the power of studies? In Methodological issues & strategies in clinical research (pp. 389–406). American Psychological Association. https://doi.org/10.1037/10109-032
- Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384
- Yarkoni, T. (2019). The Generalizability Crisis [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/jqw35