Today, November 18, 2022, we had a presentation by Thomas Guillemaud from Inrae-CNRS-Université de Nice on the Peer Community In Registered Reports (PCI RR) initiative. You can find the video below, followed by the abstract of the talk!
Peer Community In Registered Reports (PCI RR): from RR recommendation to (diamond open access) publication
PCI is an alternative to the current costly, non-transparent and non-reproducible publication system. PCI is a non-profit association that creates communities of researchers who evaluate (via peer review) and recommend preprints in their scientific field. Among these communities, PCI RR is dedicated to evaluating and recommending Registered Reports across the full spectrum of disciplines. RRs are a form of empirical article in which study proposals are peer-reviewed and pre-accepted before the research is undertaken. By deciding which articles are published based on the question and the proposed methods, RRs offer a remedy for a range of research biases, including publication bias and reporting bias. Peer review for an RR takes place over two stages. At Stage 1, authors submit their research question(s), theory and hypotheses (where applicable), detailed methods and analysis plans, and any preliminary data as needed. Following detailed review and revision, a Stage 1 RR can receive in-principle acceptance (IPA), which commits PCI RR to recommending the final article regardless of the outcomes, provided the authors adhere to their approved protocol and interpret the results in line with the evidence. Following IPA, authors register their approved protocol in a recognized repository. Then, after completing the research, the authors submit a Stage 2 manuscript that includes the approved protocol plus the results and discussion. The reviewers from Stage 1 then focus on compliance with the protocol and whether the conclusions are justified by the evidence. When a recommender decides to recommend a report, they write a recommendation, similar to a News & Views piece, describing the context of the study and explaining why this research is particularly interesting. This recommendation and all of the editorial correspondence (reviews, recommender’s decisions, authors’ replies) associated with the recommended report are published by PCI RR and assigned a DOI. Authors of manuscripts that are positively recommended have the option to publish their articles in the growing list of PCI RR-friendly journals that have committed to accepting PCI RR recommendations without further peer review. This includes Peer Community Journal.
About the speakers:
Denis Bourguet and Thomas Guillemaud are the co-founders of PCI. They are senior scientists at the French national research institute in agronomy and environment (Inrae), where they study the evolutionary biology of pest insect species. Denis works in Montpellier, France, and Thomas works in Sophia Antipolis, France.
TG: Institut Sophia Agrobiotech, Inrae-CNRS-Université de Nice, Sophia Antipolis, France
DB: CBGP, Inrae-IRD-CIRAD-Montpellier Supagro-Université de Montpellier, France
On October 21, 2022, we organized the yearly workflow hackathon of the CORE lab (you can read last year’s version here)! To get all the lab members on the same page and to reduce error as much as possible, we have a lab philosophy that is accompanied by various documents to facilitate our workflow. But research standards evolve, and we often notice that the way we conduct our research is not as optimal as we would like it to be. With new tools seemingly created daily, we also cannot keep up with every development without it cutting into our research. And it is important that those who do research daily provide input on the procedures they work with. That’s why we decided to meet once a year, so that all lab members can have their say in how we are going to work that academic year. The new product is available here. The revision is getting smaller and smaller each year!
The four somewhat more senior members (Alessandro, Olivier, Ade, Migs) each named three things they liked (and definitely wanted to keep), three things they did not like (and wanted to get rid of), and three things they’d like to add (Juliana, Maria, and Tetyana did not add comments, as they only recently joined). Each list was sent in a DM to Hans so that members would not influence each other.
Hans then put all the comments in the joint Slack channel. People discussed, Hans summarized the discussion in Google Docs, and we reached final decisions through discussion during the hackathon and/or in the Google Docs comments.
Here are some things people really liked (non-exhaustive list):
Our 2-5pm Wednesday writing block
Standups (just like last year!)
Weekly/monthly meetings with Hans
Our open-science policy
The fact that we dedicate 10-25% of our resources to replication/big-team science research!
Here are our major changes (again, a non-exhaustive list):
We transformed our lab philosophy such that it is visually more attractive!
We added a section to explain the replication crisis/credibility revolution. During journal club, it became clear that some of our practices were not always clear for new members, which is why we added a section explaining the background. This also included adding this great video by John Oliver.
We then also did a self-assessment on various aspects:
We posted 7 blog posts in the last year, 5 of which were mostly videos/announcements of our work. We will dedicate ourselves to writing at least three blog posts per person this coming year.
Out of the currently listed projects, 14/35 are big-team science/replications. We thus exceed our goal for such projects by a wide margin (in part because we also lead these projects!)
We did not do statcheck reports. We will pay more attention to this.
Only one person did a plagiarism report; we will need to make sure we all do this.
We did not do enough code review. Again, we need to make sure to do this.
Most presentations were posted on our GitHub, but Alessandro will check with the different lab members whether any were not posted.
We did not sufficiently update our Research Milestones Sheet (which relates to the insufficient reporting above). Juliana will check it once every three months, so that responsibility is not diffused.
That’s it for now. Make sure to read the finished product, and we always welcome suggestions!
Together with Hans Rocha IJzerman, I took part in the conference Protection from domestic violence as a result of the war, which was organized by the Kyivskyi District Court of Odesa, the Department of Civil Procedure of the National University “Odessa Law Academy”, and the Community Justice Center of Odessa. At the conference, I presented the upcoming plan for my project Improving the Embedding of Ukrainian Refugees in France and the Reduction of Psychological Abuse. I feel that this project is important: one does not have to be a social scientist to predict that Ukrainian refugees, who already number more than 6,000,000 today, feel lonely and are at risk of either perpetrating or becoming victims of psychological (domestic) violence.
In the upcoming years, we plan to research the causes of loneliness among Ukrainian refugees and the connection between loneliness and psychological violence, and to strengthen refugees’ social connections. After the project, we intend to create recommendations for working with Ukrainian refugees in France who suffer from loneliness specifically, and for refugees in general. We hope not to be alone in our quest to support Ukrainian refugees who feel alone. We would like to issue an open call for different stakeholders from Ukraine and from other countries to help tackle these problems. We want to collaborate with refugee organizations, researchers (with expertise in psychometrics, interpersonal relations, or domestic violence), humanitarian organizations, and people on the ground who are interested in the plight of the Ukrainian people.
If you want to collaborate or have questions or comments regarding our project, please don’t hesitate to contact me at fineline2011@gmail.com. You can email me in Ukrainian, English, or Russian.
Зменшення самотності в українських біженців: виступ на всеукраїнській конференції з питань війни та домашнього насильства
Українська версія
Разом з Гансом Роша Ізерманом сьогодні приймала участь у конференції “Захист від домашнього насильства внаслідок війни”, що була організована Київським районним судом м. Одеси, кафедрою цивільного процесу Національного університету “Одеська юридична академія” та Громадським центром правосуддя м. Одеси. На конференції був представлений майбутній план мого проекту “Покращення інтеграції українських біженців у Франції та зменшення психологічного насильства”. Я вважаю, що цей проект є важливим: цілком ймовірно, що українські біженці, яких на сьогоднішній день вже більше 6 000 000, відчувають себе самотніми і піддаються ризику або вчинити психологічне (домашнє) насильство, або стати його жертвою.
У найближчі роки ми плануємо дослідити причини самотності українських біженців, зв’язок самотності з вчиненням психологічного насильства, а також посилити соціальні зв’язки українських біженців. Після завершення проекту є намір створити рекомендації для роботи з українськими біженцями у Франції, які страждають від самотності, зокрема, і для біженців в цілому. Сподіваємося, що не будемо самотніми у своєму прагненні підтримати українських біженців, які відчувають себе самотніми. Ми хотіли б звернутися до різних зацікавлених сторін з України та інших країн з проханням допомогти у вирішенні цих проблем. Ми хотіли б співпрацювати з організаціями біженців, дослідниками (з досвідом роботи в області психометрії, міжособистісних відносин, домашнього насильства), гуманітарними організаціями та людьми на місцях, які зацікавлені в долі українського народу.
Якщо Ви бажаєте співпрацювати або маєте запитання чи коментарі щодо нашого проекту, будь ласка, зв’яжіться зі мною за адресою: fineline2011@gmail.com. Ви можете писати мені українською, англійською або російською мовами. Ми запрошуємо громадські організації, науковців, психологів та всіх тих, хто так чи інакше хоче долучитися до допомоги українським біженцям у Франції.
Mig Silan’s EASP-IARR Presentation on May 11, 2022
Manuscript Abstract
Central to interventions to improve the quality of romantic relationships is the measurement of that quality. Yet, to what degree is the concept of relationship quality well-defined and, importantly, well-measured? In the present article, we conducted a comprehensive search for instruments measuring relationship quality in ProQuest, Web of Science, and Scopus, finding a total of 599 scales, 26 of which met our definition of romantic relationship quality. When we investigated the overlap in item content across these 26 scales, we identified 25 distinct categories among 754 items (our database of items is on the OSF: https://osf.io/v964p/). The mean overlap between scales was weak (mean Jaccard index = 0.39), indicating that these scales are very heterogeneous. We then assessed to what extent researchers reported internal validity in 43 scale development and validation articles. We found that Cronbach’s alpha was reported most often (in 91% of the articles). Other aspects were reported far less often, with 55% reporting exploratory factor analyses, 26% reporting confirmatory factor analyses, 23% reporting test-retest reliability, and 7% reporting measurement invariance. The heterogeneity of measures and the lack of reported validity evidence for romantic relationship quality point to the need for concept-driven work on the assessment of romantic relationship quality.
This blog post is directed at those who are interested in coordinating and/or participating in a many-analyst project on a large, multi-site study investigating the effects of gendered job occupations and gender roles on the naming of male or female exemplars. The original authors will make ⅔ of the data available to analysis teams, who will then pre-register their hypotheses before the remaining ⅓ is released (if power is insufficient for this split, analysis teams can request the full dataset). My job as the editor (Hans Rocha IJzerman) is to connect potentially interested parties, who can then coordinate to submit a single many-analyst manuscript to the International Review of Social Psychology. The manuscript will go through normal review by an independent editor. Below you will find a results-blind summary of the project, which is nearing its completion.
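For readers who want a concrete picture of such a ⅔/⅓ split, here is a minimal sketch (this is not the original authors’ procedure; the filename, random seed, and use of pandas are purely illustrative):

```python
# Minimal sketch of a 2/3 "exploration" / 1/3 "holdout" split of a dataset.
import pandas as pd

df = pd.read_csv("full_dataset.csv")  # hypothetical file

# A random two-thirds is released first; the remaining third is held back
# until analysis teams have preregistered their hypotheses.
released = df.sample(frac=2/3, random_state=2022)
holdout = df.drop(released.index)
```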
The many-analyst project can focus on a re-examination of the hypotheses set forth by Brohmer et al. using different analysis approaches and/or analyses of secondary variables not analyzed in the main project to form a more advanced theoretical model.
Summary of the project
In this project (see https://osf.io/fdrn6/), Brohmer and colleagues aimed to investigate whether generic-masculine forms of job occupations and societal roles evoke more masculine exemplars. In languages such as German, French, or Spanish, plural forms of job occupations and societal roles are often in a generic-masculine form. That is, instead of employing gender-neutral forms, people often use the male plural form to address people in general. Several studies have found that using the generic-masculine form (instead of more neutral forms) reduces the cognitive availability of women for specific societal roles and job occupations. Yet, despite the societal importance of such linguistic conventions, hardly any replications exist that demonstrate the robustness of the effect. Brohmer and colleagues picked one prominent German study by Stahlberg, Sczesny and Braun (2001, Experiment 2), which revealed that people indeed came up with more male exemplars when they were asked to “name three politicians”, “athletes”, “TV hosts”, and “singers” in the generic-masculine form (e.g., male politician or “Politiker”), compared to two alternative gender-neutral forms (the naming-both form: female and male politicians or “Politikerinnen und Politiker”; the internal-I form: “PolitikerInnen”). As an extension, Brohmer and colleagues included two more forms – a neutralized control form (which does not imply a gender; e.g., “Name three persons from politics”) and the popular and more inclusive gender-star form (“Politiker*innen”) – along with several other control variables. Moreover, they added two more job categories (“writers” and “actors”) to increase the precision of the outcome estimates. Data collection for this replication and extension study across eleven labs and N ≥ 2100 participants is nearly finished.
For the main analyses of the project, the authors fit generalized linear multilevel models analyzing how many women were mentioned across the six job categories (with a maximum of three persons per category, and categories nested within participants). For the confirmatory analysis, participants’ own gender and the perceived base rate per category were included as predictors (i.e., participants indicated how many men or women are in these jobs).
In addition, Brohmer and colleagues collect data on the following variables, which are not analyzed as part of the main project’s confirmatory analysis. A complete list of variables is provided below (* marks variables for the confirmatory analysis).
if applicable: perceived reasons for not mentioning women or men (4 answer options and an open text field)
age
political orientation
social dominance orientation (short scale, 3 items)
social equality preference scale (short scale, 3 items)
location of residence
highest education
nationality
distraction during answering questions
three funnel debriefing questions
The researchers analyzing the dataset are also welcome to add additional variables to the dataset via data-scraping (see e.g., here for the first of a series of posts on data-scraping).
If you are interested in signing up for the project, please go to this form. The editor will then connect the people interested in this many-analyst project. We anticipate that the data will be available in October.
Today, November 25, 2021, we organized the yearly workflow hackathon of the CORE lab (you can read last year’s version here)! To get all the lab members on the same page and to reduce error as much as possible, we have a lab philosophy that is accompanied by various documents to facilitate our workflow. But research standards evolve, and we often notice that the way we conduct our research is not as optimal as we would like it to be. With new tools seemingly created daily, we also cannot keep up with every development without it cutting into our research. And it is important that those who do research daily provide input on the procedures they work with. That’s why we decided to meet once a year, so that all lab members can have their say in how we are going to work that academic year. The new product is available here. Generally, the revision was much smaller than in previous years.
All lab members (Alessandro, Olivier, Ronan, Ade, Migs) named three things they liked (and definitely wanted to keep), three things they did not like (and wanted to get rid of), and three things they’d like to add. Each list was sent in a DM to Hans so that members would not influence each other.
Hans then put all the comments in one Google Doc on which the lab members got to vote (Hans vetoed two things).
This year, we placed our emphasis a bit more on our research templates and reviewed them in more detail to see if they needed adjustments. We then discussed matters that people disagreed about or that were unclear and then revised the documents where necessary. Below you can see what we were discussing/working on.
Here are some things people really liked (non-exhaustive list):
Our research templates in general (our team members thought they were more extensive than the standard templates provided by the OSF and by other teams)
Our journal club and our 2-5pm Wednesday writing block
Standup (fundamental according to most!)
Independent code review
Here are some of our major changes (again, non-exhaustive list):
We made the goals of our lab broader. Migs Silan joined us and he will focus on relationship quality, not just on co-regulation.
Hans will not do reviews for for-profit journals anymore (except when reciprocating) and follows Felix Schönbrodt’s reasoning for that.
Some things we realized we need to do better (again, non-exhaustive list):
We did not update our Research Milestones Sheet as well as we could.
We have not signed collaboration agreements. Alessandro and Ade will create them for their projects.
We have not used CRediT for our projects. We will apply it.
We have not done independent code reviews for all of our projects.
We decided that an independent reviewer from the CORE lab will do some checks on people’s projects to ensure some of these (and other) tasks are completed.
We also did not really release many products or update our presentations on GitHub frequently enough. We will try to implement a workflow that improves this.
We also did some other things:
We created a joint Zotero library
We did a self-audit on our contributions to replications/big team science. We had committed to doing 10-25% of our projects to big team science/replications. We are at 25% now.
We revised our lab canon (thanks Lorne Campbell and Denny Borsboom for your suggestions!)
In the coming weeks, we will dedicate ourselves to finalizing our Zotero libraries. Overall, it seems the lab philosophy has stabilized to some extent, and we will focus mostly on consolidating existing practices, particularly for those in the last year of their Ph.D.!
At the recent conference of the Association for Psychological Science, Alessandro Sparacio gave a talk about his meta-analysis on self-administered mindfulness and biofeedback and whether they reduce stress (or the consequences of stress). You can find the abstract below the video and the preprint here.
Abstract
We conducted a pre-registered meta-analysis to appraise the available evidence on two stress regulation strategies: self-administered mindfulness meditation and heart-rate variability biofeedback. We used a combination of keywords to find as many experimental and observational studies as possible, all of which highlighted a link between the two strategies and different components of stress (physiological, affective, and cognitive) and the affective consequences of stress. To provide publication bias-corrected estimates, we employed multilevel regression-based methods and permutation-based selection models. We were unable to detect evidence in favor of the efficacy of either strategy, despite their widespread use. We recommend rigorous, well-documented randomized clinical trials to evaluate whether self-administered mindfulness and heart-rate variability biofeedback have demonstrable efficacy in decreasing stress.
Patrick Forscher, who has received the exciting news that he is taking up a new job at the Busara Center, was part of a panel discussion at Metascience2021 (together with Nick Coles and Max Primbs, two of the board members of the Psychological Science Accelerator). You can watch the panel discussion below.
Video hosted on the Center for Open Science’s YouTube Channel.
The PI of the CORE Lab, Hans (Rocha) IJzerman, gave an “expert talk” yesterday at the Special Interest Research Group (SIRG) on the Social Neuroscience of Human Attachment about the lab’s research on Social Thermoregulation. You can see the video of the talk below!
In the past, we have led and participated in various multi-site studies. When we do those studies, we recruit and rely on many (kind) colleagues. Often, however, these studies are led by a select few. We want to change this and support others. Consider this a start.
Because we have experience with studies on social thermoregulation and because we think a past study of ours should be replicated, we have prepared/are preparing a multi-site replication study on social exclusion and peripheral temperature. You can find all the details about this project on its dedicated OSF page. We have drafted a power analysis of the study we are replicating, prepared a project management sheet (thanks Nicholas Coles for an excellent example), instructions for submitting the IRB application, an example IRB application, and various other elements that we believe are necessary to conduct the study (see all currently available files in the Google Drive for the project). We are currently finishing the power analysis, the data analytic script, and the Qualtrics file for the project.
We are now looking for a researcher who wants to lead this project. The lead researcher will be expected to decide whether to do a pre-registration or a Registered Report, write that pre-registration or Registered Report, revise our initial work (if necessary), lead and manage the project, interpret the findings, and finalize the publication.
We will provide sensors, mobile phones, and server space to participating researchers, and will train researchers in how to use the sensors. We will also provide support in conducting the analysis, preparing the Registered Report and/or preregistration, and preparing the final report. We may be able to provide some compensation for sending the materials to different locations, in case participating sites do not have the funds.
If you are interested in becoming the lead researcher, please send an email to us at h.ijzerman@gmail.com with a brief motivation indicating why you can do the study (this can, for example, consist of your prior experience) and a proposed timeline (no longer than one page in total).
We will give preference to researchers that are underresourced.
This blog post was written by Anna Szabelska and Hans IJzerman.
At this past conference of the Society for the Improvement of Psychological Science, Adeyemi Adetula, a PhD student at the CORE Lab, gave the closing keynote on the topic of Synergy Between the Credibility Revolution and Human Development in Africa. A preprint of a manuscript that he led is available on AfricArxiv. You can view the video of his talk below (cross-posted from the SIPS OSF page). Ade’s talk starts at 3 min 31 sec.
In the previous post of this series, we introduced you to the API we chose to perform data scraping, described the key part of the data scraping scripts one of us (Bastien) programmed, and provided the link to the GitHub repo where the scripts are available. If you haven’t already done so, we suggest you read that post and the one before it, as they may help you better understand the content of this one.
In the third and last post of this series, we will show you how to use the scripts Bastien programmed with only minimal programming knowledge.
Prerequisites
The scripts are written in Python, a free and open-source programming language. The code should, in principle, run on any device (whether it is a Windows, Mac, or Linux system); however, you may face errors when using the scripts on a non-Windows system, as they have been programmed specifically for use on Windows. In case you use the scripts and don’t get them to work, don’t hesitate to contact Bastien (he even helps Mac users). Here are the four steps you have to take before using the scripts:
Install Python – If you don’t have Python on your device, you can download its latest version from this page. We recommend you run through the installation with the default settings.
Download the required Python modules – Some modules that are not included in the default Python environment (if you are familiar with R, modules are comparable to packages) are needed for the scripts to run: requests, pandas, and pause. You need to install these before going further; the Python documentation provides enough information on how to do so, and a quick way to check your installation is sketched after this list.
Download the scripts – The scripts that do the web scraping are deposited in the following GitHub repo. Download the repository as a .zip file and extract it wherever you prefer to store it.
Subscribe to the Aerisweather API – Aerisweather currently offers different packages whose prices range from $80 to $690 per month. You can find a breakdown of the different packages they offer here. In order to collect historical weather data, you’ll need a package with the “Archive add-on” selected. If you don’t have the funds and want to combine forces with other readers of this blog, leave a comment so you can coordinate with each other.
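To make sure the modules from step 2 are available in the Python environment you will use, you can run a quick check like the one below (a minimal sketch; the module names come from this post, the rest is just illustrative):

```python
# Quick check that the modules required by the scripts can be imported.
import importlib

for module in ("requests", "pandas", "pause"):
    try:
        importlib.import_module(module)
        print(f"{module}: OK")
    except ImportError:
        print(f"{module}: missing - install it, e.g., with 'pip install {module}'")
```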
Preparing your dataset
You will need a dataset on the basis of which you will scrape data, and you will need to prepare it before you start the web scraping. The way the scripts are programmed requires your dataset to be in wide format (that is, a single participant occupies a single line in your dataset); if it is in long format, you will have to reshape it first. Your dataset must also be saved in .csv format, with “;” as the separator. A small example of a compliant dataset is sketched after the list below.
In order to collect weather data for your dataset, some information is mandatory for each participant:
id – This corresponds to the unique identifier of your participant. It can simply be a series of letters/numerical values. For instance, “participant_1”.
location – This corresponds to the latitude/longitude combination of the location for which you want to collect weather data (for instance, the location where the participant completed your study). Each value must have the following format: “latitude,longitude”. Example for Paris: “48.856613,2.352222”. Different tools are available online to estimate the latitude/longitude coordinates of a location (e.g., https://www.latlong.net/). If you need to retrieve a massive number of coordinates (as was the case for us), you’ll probably have to figure out a way to do this in an automated way. To give you some ideas of how you can do this, check the following GitHub repo. It contains an annotated script (as well as sample files) that Bastien programmed to map 11,000 US zip codes to latitude/longitude coordinates.
date – This corresponds to the date for which you want to collect weather data (for instance, the date on which the participant completed your study). Our script relies on a specific date and time; for that reason, each value must have the following format: “DD/MM/YYYY HH:MM”. Example: “21/06/2020 16:20”. Importantly, the time zone in which the date is saved must be the time zone of the location for which you want to collect weather data.
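Here is a minimal sketch of what such a dataset could look like and how to save it in the expected format (the column names “id”, “location”, and “date” mirror the list above; the values and the filename are made up):

```python
# Build a tiny example dataset in wide format (one participant per line) and
# save it as a ";"-separated .csv file.
import pandas as pd

df = pd.DataFrame(
    {
        "id": ["participant_1", "participant_2"],
        "location": ["48.856613,2.352222", "51.481583,-3.179090"],  # "latitude,longitude"
        "date": ["21/06/2020 16:20", "27/01/2021 09:45"],  # "DD/MM/YYYY HH:MM", local time of the location
    }
)

df.to_csv("my_dataset.csv", sep=";", index=False)
```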
Collecting the data
After you have downloaded the files, you will have to locate the “aerisweather_keys.py” file and edit it with the text editor of your choice. You’ll have to replace the default values of several variables with the values that match your setup. Here is a breakdown of the different variables that you’ll have to edit (details on the format each variable must take are documented in the “aerisweather_keys.py” file); an illustrative example follows the list:
CLIENT_ID: The ID key of your Aerisweather account.
CLIENT_SECRET: The secret key of your Aerisweather account.
DATAFILE: The name of your datafile.
PARTICIPANT_ID_COLUMN: The name of the column in your datafile that contains the participant identifier.
TIMESTAMP_COLUMN: The name of the column in your datafile that contains the date for which you want to collect weather data.
EXACT_LOCATION_COLUMN: The name of the column in your datafile that contains the latitude/longitude combination of the location for which you want to collect weather data.
UTC_TIMEZONE: The UTC timezone of the time displayed on your computer.
YEAR: The year for which you want to collect the “yearly averages data” (see below).
SUMMARY_FOI: The weather variables of interest for the “yearly averages data” (see below).
ARCHIVE_FOI: The weather variables of interest for the “specified date data” (see below).
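As an illustration only, an edited “aerisweather_keys.py” could look roughly like the sketch below. The variable names come from the list above, but every value (and its exact type or format) is made up, so follow the formats documented in the file itself:

```python
# Illustrative values only - replace them with your own, respecting the
# formats documented in "aerisweather_keys.py".
CLIENT_ID = "your_aerisweather_client_id"
CLIENT_SECRET = "your_aerisweather_secret_key"
DATAFILE = "my_dataset.csv"
PARTICIPANT_ID_COLUMN = "id"
TIMESTAMP_COLUMN = "date"
EXACT_LOCATION_COLUMN = "location"
UTC_TIMEZONE = "+01:00"              # UTC offset of the time displayed on your computer
YEAR = 2019                          # year used for the "yearly averages data"
SUMMARY_FOI = ["tempC", "humidity"]  # weather variables for the "yearly averages data"
ARCHIVE_FOI = ["tempC", "humidity"]  # weather variables for the "specified date data"
```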
Once you have set the values that match your setup, you are ready to collect your weather data.
You can now run the “aerisweather_run.py” script. You will be prompted to choose the type of data you want to collect:
Specified date data
The script will collect the weather data of the day documented in the “date” column of your dataset (“ontime” timing), for each participant in your dataset. It will also collect the weather data of the day prior to this day (“day-1” timing) and of two days prior (“day-2” timing). Data will be saved in a “results” folder (1 file per timing), with the filename format being “specified_date_TIMING_PARTICIPANTID.json”. For instance, “specified_date_ontime_participant_1.json”.
Yearly averages data
The script will collect the weather data of the 12 months of the year you specified in the “aerisweather_keys.py” file, for each unique location in your dataset. Data will be saved in a “results” folder (1 file per month), with the filename format being “averages_YEAR_LOCATION_FIRSTDAYOFMONTH_LASTDAYOFMONTH.json”. For instance, “averages_2019_48.856613,2.352222_2019-01-01_2019-01-31.json”
Once you have chosen the type of data you want to collect, the final step is to choose the number of threads (between 1 and 10) that the script will run simultaneously. This corresponds to the number of simultaneous accesses to Aerisweather. Unless you are doing some testing, we recommend going with 10, as this will collect the weather data faster.
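If you are curious about what the number of threads means in practice, the generic sketch below (which is not the lab’s script) shows how a pool of 10 workers can issue several web requests at the same time; the URL is just a stand-in for the per-participant API calls:

```python
# Generic illustration of issuing several web requests concurrently.
from concurrent.futures import ThreadPoolExecutor

import requests

def fetch(url):
    # Each worker downloads one URL and returns the decoded JSON payload.
    return requests.get(url, timeout=30).json()

urls = ["https://httpbin.org/get"] * 5  # stand-ins for per-participant API calls

with ThreadPoolExecutor(max_workers=10) as pool:  # "10 threads"
    results = list(pool.map(fetch, urls))

print(f"Collected {len(results)} responses")
```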
Merging the data
If you have successfully collected the data from Aerisweather, you should have a “results” folder filled with .json files. Analyzing these data could be challenging if you don’t process them first.
The “aerisweather_statscomputing.py” script allows you to process and merge the data in one single .csv file (where the separator will be “;”). Of particular import is that you can only run this script if you keep the default variable choices in the “aerisweather_keys.py” file (at least for now).
When running the script, you will be prompted with the message you got when you started the “aerisweather_run.py” script (i.e., you must choose which type of data you want to merge).
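To give you a rough idea of what this merging step does, here is a generic sketch (it is not the “aerisweather_statscomputing.py” script; the folder name and output filename are illustrative, and the resulting columns depend on the structure of the API’s responses):

```python
# Generic sketch: flatten every .json file in "results/" and merge them into
# one ";"-separated .csv file.
import glob
import json

import pandas as pd

rows = []
for path in glob.glob("results/*.json"):
    with open(path, encoding="utf-8") as f:
        payload = json.load(f)
    # json_normalize flattens nested JSON into a one-row (or few-row) DataFrame.
    rows.append(pd.json_normalize(payload))

if rows:
    merged = pd.concat(rows, ignore_index=True)
    merged.to_csv("merged_weather_data.csv", sep=";", index=False)
```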
Below you can also find the codebook of the resulting .csv file, for each type of data:
Specified date data (filename format is “analytic_data_averages_YEAR.csv”)
This was the last post of this series on data scraping.
In the last post of this series, we gave you the remaining guidelines on how to use the scripts Bastien programmed to collect your own weather data on Aerisweather.
Finally, if our posts on data scraping sparked your interest in this skill, we can only encourage you to jump in. Knowing how to scrape data is certainly a valuable skill at a time when the amount of data available online is increasing exponentially. As long as you feel comfortable with a programming language that allows you to perform web requests and you understand the basic principles of data scraping, you shouldn’t need any specific training to code your first data scraping scripts.
This post was written by Bastien Paris and Hans IJzerman
In the previous post of this series, we defined the concept of data scraping and we introduced you to its key principles. If you haven’t already done so, we suggest you read this previous post as it may help you better understand the content of this one.
In the second post of this series, we will introduce the API we chose to perform data scraping, we will describe the key part of the data scraping scripts one of us (Bastien) programmed, and provide the link to the GitHub Repo where the scripts are available.
AerisWeather API
AerisWeather provides an API that satisfied all our needs for our research projects:
Their API provides historical weather data from 2011 onwards for various weather variables (e.g., temperature, wind, sky coverage, or humidity).
Their API supports most locations across the world.
Importantly, their pricing is one of the most competitive: Our $245 subscription plan allowed us to collect weather data for three different research projects (for a total of more than 75,000 participants and more than 10,000,000 datapoints), and we were still far from the use limit set by this plan. Note that it is very important to evaluate pricing before programming, as there can be wide ranges in the prices suppliers set.
Once we found the API, the next step was to program a script that would communicate with the API to collect the data we needed. In order to facilitate this task, API providers always share some documentation on how to use their API (e.g., https://www.aerisweather.com/support/docs/api/).
Below we describe the key part of the data scraping scripts Bastien programmed to collect data from AerisWeather.
Collecting the data and saving them
Below is a sample of the dataset for which we want to collect weather data:
You can see that each participant is associated with a date (start_date variable, which is the time at which the participant started the study) and geographic coordinates (geo_coordinates variable, which is the latitude and longitude combination of the place where the person participated in the study).
Let’s say that, for each participant, our goal is to retrieve the weather data of the day during which the participant completed the study, for the location to which the participant is associated.
Our script will have to repeat one key step for each participant: collect and save the weather data for the time/location combination associated with the participant.
To collect these data, a brief look at the API documentation suggests the use of the following URL structure:
Below is the URL that will return the weather data of interest for participant_1 (of course, you could have picked different variables; these are the ones we determined of interest for our projects. For the list of variables and what the abbreviations below mean, see here):
Below is a sample of the data for our participant from Cardiff on January 27th, 2021 when accessing the URL above:
AerisWeather API’s documentation describes how the data are structured. In short, the data is returned in what is called a “JSON format”, which is a standard format to store data in a structured fashion (like XML).
Once the data are returned to you, you just need to save them on your computer with a filename format that your script applies to each file (so that you can easily find the data you’re interested in). In this case, saving them with the filename format “PARTICIPANTID_TIMING.json” (e.g., “participant1_DayOfDataCollection.json”) would seem appropriate.
Now that we have the method to collect and save the weather data for one participant, we can just apply this method to all participants by using loops.
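As a rough illustration of that loop (this is not Bastien’s script: the endpoint, the query parameter names, and the “participant_id” column are placeholders, and the real URL structure is the one documented in the AerisWeather API docs linked above), the logic could look like this:

```python
# Simplified per-participant loop: build one request per row of the dataset
# and save the returned data under "PARTICIPANTID_TIMING.json".
import json

import pandas as pd
import requests

ARCHIVE_URL = "https://api.aerisapi.com/"  # placeholder; see the AerisWeather docs
CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_secret_key"

data = pd.read_csv("my_dataset.csv", sep=";")

for _, row in data.iterrows():
    params = {
        # Placeholder parameter names; replace them with those from the docs.
        "location": row["geo_coordinates"],  # "latitude,longitude"
        "datetime": row["start_date"],       # date/time of interest
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }
    response = requests.get(ARCHIVE_URL, params=params)
    response.raise_for_status()

    filename = f"{row['participant_id']}_DayOfDataCollection.json"
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(response.json(), f)
```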
Concluding notes
In this post, we introduced you to the API we chose to perform data scraping and we described to you the key part of the data scraping scripts Bastien programmed. If you want to download these scripts, you can do so from our GitHub repo. When you have downloaded them, you are ready to read the final, third post where we will show you how to use these scripts with only minimal programming knowledge.
This post was written by Bastien Paris and Hans IJzerman
One of the main research topics in our lab is social thermoregulation. Therefore, much of our research involves the collection of temperature data in various forms (like the participant’s core or peripheral body temperature or the ambient temperature in the lab).
For one of the projects we are conducting this year, we focused on a slightly different kind of temperature data: historical weather data from over 26,000 locations between the years 2012 and 2020. We also collected weather data for other projects, including projects by the Psychological Science Accelerator. Overall, we needed to collect weather data amounting to more than 10,000,000 datapoints.
Unless you are willing to retrieve all these data manually, some programming skills are necessary to complete this task. This series of three posts will introduce you to the massive extraction of online data, a practice that can be labeled with the term “web scraping”.
In the first post of this series, we will define the concept of web scraping and introduce you to its key principles. In the second post, we will introduce the API (defined below) that we chose to perform data scraping and discuss the key data scraping scripts (as well as make available a public version of our data scraping scripts), and then in the third post, we will show how to use the scripts even for those minimally familiar with programming.
What is web scraping?
The Wikipedia definition of web scraping is “data scraping used for extracting data from websites”. In other words, web scraping consists of extracting data stored somewhere on a website.
Here is a concrete example: when you go to the site https://www.weather.com, you will get the current temperature and weather conditions for a location. Here is what we got when we went to this site:
Scraping the temperature and weather conditions displayed on this page would consist in extracting the following data: 27°C and Très nuageux (= very cloudy in English).
Extracting these data manually is easy and doesn’t take much effort: you just need to access the URL above and read the page. But now imagine you have to do this 10,000,000 times for very different locations. That would be a royal pain and simply not worth the effort. Data scraping helps you automate what would have been a colossal task if completed manually.
How can we automate data extraction from a website?
The data you see on a web page are all stored in the source code of that web page (in a more or less transparent manner). Here is a sample of the source code of the weather.com page that we previously accessed (you can see how to access the source code of a web page in Google Chrome here):
As you can see, the temperature and weather conditions data we were interested in (27°C and Très nuageux) are documented in the source code of that web page. Because the information is available in the source code, you can program a script that extracts the data by “parsing” the source code of the web page. Programming a web scraping script requires knowledge of a programming language that allows you to perform Internet requests (like Python or R).
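As a minimal sketch of what such a parsing script does (assuming the requests and beautifulsoup4 packages are installed; the URL, tag, and class name below are made up, since a real page’s structure has to be inspected first):

```python
# Fetch a page and "parse" its source code to extract one piece of information.
import requests
from bs4 import BeautifulSoup  # third-party package: beautifulsoup4

html = requests.get("https://www.example.com").text
soup = BeautifulSoup(html, "html.parser")

# Look for a hypothetical element that would hold the displayed temperature.
temperature_tag = soup.find("span", class_="temperature")
if temperature_tag is not None:
    print(temperature_tag.get_text())
else:
    print("No temperature element found on this page.")
```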
Extracting data from the source code of a web page is possible because the source code tends to have the same structure across multiple accesses. Compare it to a house: if we built several clones of your house, you would not need a map to find your way to the bathroom in any of the clones, meaning that you don’t have to worry in case you have an emergency.
You can try this for yourself by searching for weather data for another place in the search bar of https://www.weather.com: you’ll find that the temperature and weather conditions are documented in the source code of the web page in very much the same way across accesses (again, see here how to find the source code of a web page in Google Chrome).
The role of an API
You may have gotten the impression that web scraping is not always an easy task. We phrased it that way because most online services are not exactly thrilled to share their data without financial compensation, which leads most of them to block data scraping attempts.
Here are a couple of roadblocks you can encounter when scraping the content of web pages:
Data you see on a web page may not be documented in the source code of that web page (or at least not directly, if it has been generated dynamically through JavaScript, for instance).
The structure of the source code can vary from access to access, with variable names being generated dynamically.
The website can detect and block your scraping attempts by capping your number of accesses per minute (if not blocking all accesses).
In our case, extracting massive historical weather data required us to subscribe to an Application Programming Interface (better known as API).
To put it briefly, an API is an interface between you (counterpart A) and the service from which you want to extract data (counterpart B). This interface is built to ease communication between the two counterparts, with standardized means for both requesting and receiving the data.
Different APIs exist for the same information (like the weather). Their prices vary depending on multiple factors such as the quality of the data provided or the number of accesses granted within a given time period.
Concluding notes
In this post, we described the basic principles of web scraping and how to perform it.
In the second post of this series, we will introduce you to the API we chose to perform data scraping and we will describe the key part of the data scraping scripts one of us (Bastien) programmed.
This post was written by Bastien Paris and Hans IJzerman
Implicit measures – the results of indirect computerized reaction-time attitude measurements such as the Implicit Association Test (the “IAT”, Greenwald, McGhee, & Schwartz, 1998) – are often described as reflecting “attitudes that people are unable or unwilling to report” (e.g., www.projectimplicit.org). This suggests that people are either entirely unaware of their “unconscious” attitudes or unwilling to reveal them honestly. I will present a series of projects that question the utility and veracity of both of those explanations and show that under many circumstances people are both willing and able to report on the evaluations reflected in implicit measures. Instead, I will present a model that can explain divergences between implicit bias and explicit attitudes in terms of other psychological (attention, failures in affective forecasting) and methodological (stimulus effects, calibration) factors. These findings have implications for cognitive and applied social psychology. Theoretically, they can lead to a better understanding of automaticity and consciousness in social cognition, suggesting a distinction between three types of conscious awareness of one’s own cognitions. From an applied perspective, a better understanding of the underlying cognitions can lead applied researchers to design more effective interventions targeted at the acknowledgement of racial biases. Discussion will focus on how simplified explanations of implicit bias may not only be inaccurate but may also stand in the way of social-cognitive research having an impact on important societal issues.
You can download the referenced training materials (the videos, slides and embedded audio files, and scripts for each video) from our OSF Tutorial Videos page: https://osf.io/8akz5/.
The CREP training videos are also directly available on YouTube:
There is increasing recognition of the need for greater inclusion of researchers from the global south (Arnett, 2008; Dutra, 2020; Kalinga, 2019; Tiokhin et al., 2019). One way to achieve inclusion is to provide access to (or create) training materials for researchers to use readily available open science tools. We are trying to tackle this challenge by engaging African researchers in the Collaborative Replications and Education Project (CREP). This process involves many steps, and we first needed to provide materials for African researchers and platforms for knowledge exchange while minding the research cultures and practices in these countries.
In this post, we describe one part of this process of supporting African researchers using the CREP model: the creation of training materials through a workshop and hackathon at the 2021 annual meeting of the Society for Personality and Social Psychology (SPSP). Our efforts were supported by an International Bridge-Building grant awarded by SPSP, which provided Internet access for one year, a free one-year membership, and free conference registration to 15 African researchers, allowing them to participate in the workshop. We will describe the process we followed for our workshop and summarize the feedback we received. This experience taught us that although the CREP model is adoptable in Africa, there is a need for greater support and more training to propagate the use of CREP amongst African researchers.
What is the Collaborative Replications and Education Project?
The Collaborative Replications and Education Project is a collaborative research platform for instructors and students of tertiary institutions worldwide. The initiative was created to improve research credibility, participation, and inclusion in research globally by providing a collaborative platform for conducting crowdsourced research replication projects (Wagge et al., 2019). It was founded in 2013 by Jon Grahe, Mark Brandt, and Hans IJzerman “to provide training, support, and professional growth opportunities for students and instructors completing replication projects, while also addressing the need for direct and direct+ replications of highly-cited studies in the field” (Grahe et al., 2020). The CREP goal of training and creating opportunities while adhering to open science practices thus strongly aligns with our goal of supporting African researchers in adopting open science practices.
The general CREP model
Our identification with the CREP is not a coincidence, as its model was designed to conduct crowdsourced replication studies with structured procedures and feedback moments that encourage open science practices. The CREP is flexible enough to accommodate users’ realities, but does not compromise on rigor. The CREP model also has some built-in teaching procedures to help users conduct these replication studies. For example, users must create an OSF page, document the data collection procedures, and share their research methods, procedures, and data on the project’s OSF page so that the project is reproducible. Once completed, the page goes through a review by more experienced CREP collaborators. For more information on how to conduct a CREP study, see their step-by-step guidelines. Researchers currently interested in participating in CREP projects can visit the CREP OSF page for their ongoing studies.
Creating the CREP training videos
As part of our larger mission to involve more African researchers in collaborative research, we are looking to work with African researchers on CREP crowdsourced replication studies, the CREP Project Africa. This includes two replication studies: African researchers conduct the first study, while African students conduct the second. Given that most of our African collaborators are not yet familiar with some of these open science practices, and that using the CREP requires some training, we decided to create training videos. Here is what we have done so far:
Stage 1: Creating CREP training videos. Dr. Jordan Wagge, the current Executive Director of CREP, created a series of four CREP training videos. During the Psychological Science Accelerator’s 2020 virtual conference, we organized a hackathon, where researchers from Malawi and Nigeria provided feedback on the videos Dr. Wagge created. The attendees gave feedback on the content, quality of delivery, and the usefulness of the videos. The feedback was used to finetune these videos; Dr. Wagge made changes to the videos and then posted them on YouTube.
Stage 2: Subtitling the CREP training videos. The original videos were recorded in English. However, in order to reach a wider audience, particularly for a multilingual population like the African one, it is imperative to communicate the content of these videos in languages spoken in Africa. We sought our African collaborators’ help in translating the scripts of these videos. We subtitled these videos in six languages (Arabic, Chichewa, Igbo, Portuguese, Swahili, and Yoruba) commonly spoken in several African countries. These videos can be accessed via the links below:
Researchers interested in using these videos to train non-English-speaking users can access the subtitles by clicking on “Settings” to pop up a list containing “Subtitles/CC.” Click on “Subtitles/CC” to reveal the list of subtitle languages and select the preferred language.
Training African researchers: the SPSP 2021 workshop and hackathon
To effectively deliver training on open science and the CREP model, we proposed to the Society for Personality and Social Psychology (SPSP, https://www.spsp.org/), an organization of social and personality psychologists, to organize a workshop and hackathon on open science and to create syllabi for teaching CREP as a research methods course. Our proposal was approved for inclusion in their 2021 virtual convention, and the event was funded by SPSP’s International Bridge-Building Award.
First, as our goal is to give African researchers the tools to understand the CREP model so that they can pass these insights on to their students and collaborators, we provided African researchers with syllabi that use the CREP model, so that the model can be integrated into research methods courses in Africa. These syllabi outline how to structure the course and which key topics to teach, and they recommend resources for teaching the CREP model. For example, Dr. Wagge prepared a CREP Sample syllabus and Dr. IJzerman, the Course Syllabus Introduction à la Recherche.
Second, the workshop we organized started with 45 minutes of two introductory talks on open science and CREP. First, Dr. Patrick Forscher described how open science movements increase access to research articles, assets, and collaborators, and how the CREP fits into these movements. Dr. Jordan Wagge then described the CREP model for selecting CREP projects, the user guidelines for conducting CREP studies, the data analysis process, and the benefits for CREP users. After these talks, we organized a 45-minute hackathon in which Dr. Wagge led the discussion on adopting the CREP model as a research method in African tertiary institutions. To capture individual participants’ thoughts about the CREP model, we then broke out into two discussion rooms with guiding questions: 1) Where would the CREP model fit in the curriculum? 2) What are the barriers to adopting CREP? and 3) What are the benefits to students and supervisors? The recorded video of the workshop is available in the Whova SPSP 2021 event video gallery.
Feedback on the SPSP workshop from African researchers
Perhaps the CREP model is not as useful as we thought it would be? To further understand the level of interest from African researchers and students in using the CREP model, and the feasibility of teaching the CREP model in African institutions, we surveyed 16 African researchers (from Cameroon, Malawi, Nigeria, and Tanzania). These respondents either attended our SPSP workshop or went through the training materials. We present participants’ responses in the chart below.
The African researchers in our workshop thought the workshop and hackathon were helpful (panel a). While all respondents agreed that open science and open science practices should be taught in African institutions (panel c), about 70% expected that African institutions would be very interested in teaching open science (panel b).
When asked about the willingness of African students and researchers to learn and to teach open science, respectively, about 80% of respondents had no doubt that African students would be open to learning open science. However, only about 30% of respondents were certain that African researchers would be willing to teach open science.
Regarding the question about adopting the CREP model as a research method, 68.8% of the respondents were certain that the CREP model is adaptable for their research projects (panel g). About 70% of the respondents indicated that adopting the CREP model for African students’ research work is very feasible (panel h). However, they were not very optimistic about African institutions adopting the CREP model for students’ research work, with just 25% being definite about the idea (panel f). Meanwhile, respondents to the survey indicated that several barriers to adopting the CREP model in Africa exist, such as 1) poor infrastructure and resources, most importantly for African students to participate in CREP, 2) African researchers’ low familiarity with the CREP model and open science tools, and 3) the (in)availability of and (in)accessibility to CREP materials and tools.
Takeaways from the SPSP training workshop
Overall, we have been working with 47 researchers for CREP Africa. Out of those 47, 19 expressed interest in participating in the convention and workshop (we don’t know why others could not or did not want to attend). There was thus quite a reasonable interest from our colleagues in participating in international events. However, many African researchers are underresourced (e.g., funding), and research infrastructures in these countries are often lacking (e.g., unstable internet connection). Researchers and research organizations in developed countries can support these researchers by creating initiatives and providing small grants (for membership and conference registration fee waivers) to allow African researchers to participate in these global events regularly.
Specifically, in our work with African collaborators on this workshop, we noted that 1) some apps and other online processes might have been relatively new to African researchers, and 2) the resources needed to keep up with the discussions might not have been accessible to them. What really helped our collaborators was brief engagement prior to the start of the event: taking a moment to walk through detailed instructions on the conference registration procedures, navigating the event’s apps, and accessing the convention sessions. Concerning the accessibility of resources, one can share the resources (like links, tools, and training materials) needed before the event, so that participants are familiar with the concepts to be discussed.
More specifically, for our workshop, we observed that adopting the CREP model in Africa seems feasible, drawing from the feedback we got from our collaborators. However, we noted that the realities of resources (such as unstable internet access and computer literacy) and the limited awareness of the CREP model might impact African researchers’ involvement.
To tackle these barriers, African collaborators suggested regular training programs, making resources available and accessible, providing grants for underresourced researchers, and propagating the CREP model in African institutions by involving African researchers and students more in CREP projects. In conclusion, organizing training workshops for African researchers can be beneficial. Indeed, it provides necessary experience for both the trainers and the trainees. Specifically, it can help build African researchers’ research capacity, give African researchers much-needed resources and international conference experience, and improve participation and collaboration between African and non-African researchers.
Acknowledgements
We are very grateful to the SPSP 2021 convention organizers for their organizational and financial commitment that allowed 15 African researchers to participate in the convention and more specifically the diversity committee that supported the SPSP International Bridge-Building Award. Further, we are really grateful to Travis Clark for all his help and the extensive administrative assistance. Finally, we want to express our gratitude to our African collaborators who helped translate the CREP videos (see list of translators).
This blog post was written by Adeyemi Adetula, Dana Basnight-Brown, Jordan Wagge, Patrick Forscher, and Hans IJzerman.
Olivier Dujols and Hans (Rocha) IJzerman gave a joint talk for the virtual International Conference on Environmental Ergonomics on its first birthday. Olivier talked about his STRAEQ-2 project and Hans talked about his book Heartwarming. A handout for the talk can be found here and the video of the talk can be found below. There are a ton of fascinating talks on thermoregulation on vICEE's website. Go check them out when you have a chance!
I find the Psychological Science Accelerator one of the more exciting initiatives in psychological science. The PSA can potentially solve many known, complicated challenges within our discipline. From complex problems pertaining to replicability, generalizability, strategy selection, inferential reproducibility, and computational reproducibility, the PSA's Big Team Science approach has the potential to tackle them all. And yet, despite its many obvious strengths and its promise, I fear for the PSA's future if the next few years are not dedicated to developing an organization with long-term relevance and sustainability. Below I outline what I think should be the PSA's main priorities, but please read my final paragraph.
The PSA should become more globally diverse
It is my firm belief that without better representation from across the globe, the PSA has little of value to contribute. We know that skewed representation in our field is a problem: the Society for Personality and Social Psychology hands ~80% of its awards to US-based researchers. Of the authors published in the top 5 developmental psychology journals between 2006 and 2010, less than 3% came from countries outside North America and Europe. In the PSA's first published report, most contributing labs were from North America and Europe (e.g., only three African countries participated). Our recent study capacity report (Paris, Forscher, & IJzerman, 2020) showed that North Americans were overrepresented in leadership (12/19), while Western Europeans were overrepresented in terms of membership (41.44%). By building better PSA infrastructure across the globe, the PSA will be better positioned to generate theories that more accurately reflect the human psyche.
The PSA should focus on financial sustainability
Organizations with goals as lofty as the PSA's can run on volunteerism only for so long. In Chris Chartier's 2020-2022 strategic plan, he estimated that the PSA currently runs on 26,930 hours of volunteer time. Relying on so many volunteer hours, on top of our normal jobs, is unsustainable. What's more, relying on volunteer hours means that only researchers in more comfortable positions (i.e., those in richer countries, with lower teaching loads, and in situations where they don't have to worry about their immediate physical safety) can be candidates for leadership roles.
Without financial compensation for key positions, we run the risk of imploding the PSA. We run the risk of burning out our contributors, we run the risk of mismanaging the research process, we run the risk of major interpersonal conflicts, and we run the risk of making major mistakes in the research we conduct. Of particular importance, without such compensation we also cannot attain the diversity goals from the previous section.
In order to meaningfully contribute to societal questions, financial sustainability should be a second key priority. As Patrick Forscher and I have outlined in a recent blog post, various strategies to obtain funding exist, each with a different balance of risk versus reward (e.g., grant writing, membership fees, support from industry). The upcoming year should be dedicated to finding an optimal mix of the winning strategies. Which strategies we favor should be discussed through surveys and panel discussions with our membership, so as not to estrange those we serve.
The PSA should reflect on its research priorities
If I honestly reflect on the PSA's ability to solve problems, I am pessimistic. For a crisis like the one we are in at the moment, do we have the ability to develop an equivalent of a vaccine? My answer is probably no. When I reviewed the PSA's past research projects, I found us conservative in terms of content (albeit not in methodology), only modestly invested in theoretical development, and quite US-focused. I personally have come to favor quantitative exploratory research methods to help develop formal theories. Other PSA members favor problem-focused research. Yet other PSA members focus on qualitative research. Still other PSA members have suggested that the PSA should have regional priorities, rather than research priorities determined by a single proposer from a single country. How can we ensure that the PSA delivers on its promise to help psychology develop formal theories, as should be the norm within a mature science? I believe we should engage in a period of reflection to determine these priorities, driven again by discussion among members through surveys and panel discussions.
Such a period of reflection can let us ask important questions about how the PSA should function, such as:
Should we have various quotas for different types of research (e.g., exploratory, confirmatory, replication) and/or for research that is currently underserved (e.g., qualitative or domains other than social psychology)?
Should we focus on regional initiatives (as we could) or keep a centralized system (as we have now)?
Should our research primarily be problem- as opposed to theory-focused?
These are but a few of the questions we can ask ourselves. I believe asking them is necessary if the PSA is to fulfill its promise to solve major societal problems and/or theoretical questions, and the answers will ultimately also determine the PSA's chances of receiving funding.
Why should you vote for me?
I believe you shouldn't, and I endorse Sandy Onie in my stead. When I nominated myself for associate director, I did so with the goal of lifting up researchers outside of North America and Europe to become part of PSA leadership. In the days following my nomination, I reflected on what I would do if a capable researcher from outside NA/EU were nominated for the same position. When I learnt that Sandy had decided to run, the choice wasn't difficult: as a European researcher with strong North American ties, the rationale behind my self-nomination disappeared. His research expertise and his experience outside EU/NA are extremely important for the PSA, and I also believe his general skill and his expertise in clinical psychology make him an excellent candidate to further develop the PSA.
I hope that if you had considered voting for me, you will give Sandy your vote.
In Spike W. S. Lee and Norbert Schwarz' recently published BBS target article, "Grounded procedures: A proximate mechanism for the psychology of cleansing and other physical actions" (2020), the authors outline proximal mechanisms underlying so-called cleansing effects. In this blog post, we present a rejoinder to their rejoinder: we first briefly discuss the portion of the target article we commented on, then the disagreement we voiced in our commentary, then Lee and Schwarz' rejoinder, and we finish with our response. Before anything else, we want to voice our appreciation for Lee and Schwarz' meta-analytic assessment of the literature and their rejoinder to our criticism. Such disagreements are vital for identifying stronger versus weaker theories in our science. In this rejoinder to their rejoinder, we make explicit the differences between our appraisal of the evidence and theirs and briefly discuss why authors cannot make their set of inferences a "moving target". All in all, it is clear to us that the empirical foundations for cleansing effects, as Lee and Schwarz present them in their BBS target article, are extremely shaky.
Context of the target article we commented on
In their target article and elsewhere, Lee and Schwarz acknowledge that there are more than 200 experiments on cleansing effects yielding more than 500 effects (Lee et al., 2020). Although Lee and Schwarz acknowledge numerous replications of cleansing effects that failed to find an effect, they argue that several successful replications make it difficult to dismiss cleansing effects offhand. In rebuffing our critique, Lee and Schwarz made it appear that our selection criteria were unclear or that we cherry-picked evidence (see their RA.3); because of this, we repeat here the effects we included, which were entirely based on their own presentation.
Because they did not give us access to the data underlying their in-progress meta-analysis (see also Explanation 1 below), we identified all publicly available empirical evidence that Lee and Schwarz verbally presented to rebut replicability concerns. Their unwillingness to share the data that formed the basis of the claims in their target article left us in the dark as to whether those claims were backed up by robust evidence. As a result, we simply took all the studies that they identified as successful replications. Unfortunately, their conceptual definition of "replication" was vague. For example, when they try to address replicability concerns, they write: "For example, regarding Schnall et al. (2008), one paper (Johnson et al., 2014b) reported direct replications using American samples (as opposed to the original British samples) and found effect sizes (Cohen's ds) of .009 and -.016 (as opposed to the original .606 and .852). Another paper (J. L. Huang, 2014) reported extended replications of Schnall et al.'s Experiment 1 by moving the setting from lab to online and by adding a measure (Experiment 1) or a manipulation (Experiments 2 & 2a) of participants' response effort" and "Yet another paper reported a conceptual replication of the original Experiment 3 by having participants first complete 183 ratings of their own conscientiousness and nominating others to rate their personality (Fayard et al., 2009). This paper also reported a conceptual replication of the original Experiment 4 by changing the design from one-factor (wipe vs. no wipe) to 2 (wipe vs. no wipe) x 2 (scent vs. no scent) x 2 (rubbing vs. no rubbing). Relevant effect sizes were .112 and .230, as opposed to the original .887 and .777." Thus, to rebut replicability concerns, they included both conceptual and extended replications, and we followed their lead by including these in the p-curve. In one specific instance, this vague conceptual definition meant that our selection led to the inclusion of a different effect than theirs. Specifically, for the following sentences, where they claim a replication: "This finding was replicated with a German sample (Marotta & Bohner, 2013). A conceptual replication with an American sample showed the same pattern and also found that it was moderated by individual differences (De Los Reyes et al., 2012)", we picked the effect that reported a replication with a US sample, whereas they included the p-value for a different, moderation effect. We are unsure why they would prefer the moderation by individual differences over the closer replication.
This body of evidence on the replicability of publicly available cleansing effects, selected by Lee and Schwarz, was therefore the focus of our inference as we also stated in our commentary.
Disagreement voiced in our commentary
In our commentary, we thus examined the empirical evidence behind the replication studies that Lee and Schwarz cite as evidence for their claims. Based on an assessment of evidential value using the p-curve technique (Simonsohn et al., 2014), as well as a data simulation, we concluded the following: based on the evidence Lee and Schwarz lay out in the target article, there is a lack of robust evidence for the replicability of cleansing effects, and the pattern of data underlying the successful replications is improbable and most consistent with selective reporting.
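For readers unfamiliar with the technique, the sketch below shows the core computation behind p-curve's evidential-value (right-skew) test. It is a simplified illustration with made-up p-values, not the full p-curve app and not a reanalysis of the cleansing literature.

```python
# Minimal sketch of the Stouffer-style "right-skew" test underlying p-curve
# (Simonsohn, Nelson, & Simmons, 2014). Simplified: continuous two-tailed
# p-values only, no half-curve or flatness tests. The input p-values are
# hypothetical.
from scipy.stats import norm

def pcurve_right_skew(p_values, alpha=0.05):
    """Test whether significant p-values are right-skewed (evidential value).

    Under the null of no true effect, a significant p-value is uniform on
    (0, alpha), so pp = p / alpha is uniform on (0, 1). Converting each pp
    to a z-score and pooling with Stouffer's method gives an overall Z; a
    strongly negative Z indicates right skew, i.e., evidential value.
    """
    sig = [p for p in p_values if p < alpha]   # p-curve uses only significant results
    pp = [p / alpha for p in sig]              # conditional p given significance, uniform under H0
    z = [norm.ppf(x) for x in pp]              # small pp (very small p) -> very negative z
    z_stouffer = sum(z) / len(z) ** 0.5
    return z_stouffer, norm.cdf(z_stouffer)    # one-sided p for right skew

# Hypothetical example: a handful of "just significant" p-values, the
# pattern one would expect under selective reporting rather than a true effect.
ps = [0.041, 0.048, 0.037, 0.049, 0.044, 0.021, 0.030]
z, p = pcurve_right_skew(ps)
print(f"Stouffer Z = {z:.2f}, right-skew p = {p:.3f}")  # large p here -> no sign of evidential value
```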
The p-curve that we generated based on their own focus on rebutting replicability concerns looks like this:
Lee and Schwarz’ Rejoinder
Lee and Schwarz wrote a rejoinder to our commentary as well as to the other commentaries. We invite you to read their well-crafted response in full (rejoinder, supplementary material). (As an aside: in Footnote 2 of their Response Supplement, Lee and Schwarz identified a mistake on our part concerning Camerer et al.'s (2018) failed replication. We included these effects as independent, which they should not have been; however, as the results were non-significant, they did not end up in any of our analyses anyway. Our dataset also incorrectly contained a note saying that Arbesfeld et al. (2014) and Besman et al. (2013) did not disclose the use of a one-tailed test. We are sorry about both of these slips and thank Lee and Schwarz for pointing them out; both leave the p-curve identical.) They had a couple of points of criticism of our approach:
“[Ropovik et al.] draw [their] conclusion [that there is lack of robust evidence for the replicability of cleansing effects] on the basis of a p-curve analysis of a small subset of the entire body of experimental research on the psychological consequences and antecedents of physical cleansing (namely, seven out of several hundred effects)”
“…, which included only some of the replication studies and excluded all of the original studies.”
“The procedures they applied to the selected studies did not follow core steps of best practice recommendations (Simonsohn, Nelson, & Simmons, 2014b, 2015).”
“[They] included p-values that should be excluded”
“[They] excluded p-values that should be included.”
As they suggest we made mistakes that ostensibly completely invalidate our conclusion, they conducted a new p-curve analysis, which looks like this:
These two p-curves show completely opposite patterns: while our p-curve shows evidence of selective reporting, theirs shows evidential value. How can this be?
Rejoinder to Lee and Schwarz’ Rejoinder
Their critique 3 is easily rebuffed, as we simply took what they saw as replications of cleansing effects (and we did provide a disclosure table; see also Explanation 2). Beyond that, we thought it might be helpful to make the assumptions and interpretational consequences of both approaches explicit and to list the changes to the p-curve data needed to get from our p-curve to theirs, which addresses their concerns 1, 2, 4, and 5. We will also articulate why we see our approach as a more adequate way to appraise the merits of a set of published claims – one that is also much more in accord with the substantive inferences drawn in the original studies and in Lee and Schwarz' target article. Here is a summary of the changes that Lee and Schwarz made, between the target article and their rejoinder, to arrive at their more favorable p-curve:
In their original article, they describe three effects as successful replications: "[this effect was] successfully replicated in two other direct replications (Arbesfeld et al., 2014; Besman et al., 2013)" and "This finding was replicated with a German sample (Marotta & Bohner, 2013)". Marotta and Bohner (2013) reported a significant effect (at p = .05), which was treated as a successful replication both by the original authors and by Lee and Schwarz in their target article. Yet, for the rejoinder, Lee and Schwarz recomputed the p-value and treated it as no longer significant. This is problematic for two reasons. First, they changed their interpretation between the target article and the rejoinder. Second, based on the available information, it is unclear whether the p-value was truly above or below .05 (as these studies are not published as full papers, there is very little information about the design and analysis). Similarly, Arbesfeld et al. (2014) and Besman et al. (2013) each formulated one-tailed hypotheses, which Lee and Schwarz inappropriately transformed into two-tailed hypotheses.
Another mistake is that they selected a different effect than the one appropriate for the target of inference – evidence on the replicability of cleansing effects. In their p-curve, they selected a three-way interaction instead of the replication effect from De Los Reyes et al. (2012). For the three-way interaction (which added individual differences that were not part of the original concept of the cleansing effect), a p-value of .021 was reported; for what they cite in their target article as a replication, the replicated two-way interaction, the p-value was .048.
They also included three p-values from a single, publicly unavailable conference poster (it was linked in the reference list, but the link produced an error when we visited it). As it was not publicly available and Lee and Schwarz were unwilling to share their data, it did not form part of our inference, as we clearly stated in our commentary. Nevertheless, examining their dataset shows an N of 10 per cell, all yielding small, significant p-values with extraordinarily – and unbelievably – large effect sizes, equivalent to d = 1.55, 1.49, and 1.84. Furthermore, for their rejoinder, they decided to add Experiment 1 to their p-curve, while in the target article they only considered Experiment 2 a conceptual replication.
For the full list of L&S’ changes to our p-curve set, see Table 1.
Table 1. L&S’ changes to our p-curve data
Note. Grey color = effects presented in Lee and Schwarz' target article as successfully replicated and used in our p-curve analysis. Orange color = changes to our p-curve set by L&S. Green color = effects added by Lee and Schwarz. P-values in bold represent the effect set used by L&S. Italicized p-values were common to both analyses.
But let's try to accept their transformations from target article to rejoinder. If, per Lee and Schwarz, there are two published articles and one conference poster, yielding a mere 7 effects evidencing a successful replication with a median N of 12 per cell (the median N for the non-significant replication effects happens to be 12 times higher, N = 144), we honestly don't see why L&S call our argument ("lack of robust evidence for the replicability of cleansing effects") "strong" or even controversial. In fact, even if their p-curve demonstrates an effect, such extremely modest sample sizes with implausibly large effect sizes in only a few studies should prompt anyone to investigate more carefully and to question the efficacy of the p-curve under such conditions. The meta-analytic techniques we suggested in our commentary allow them to do just that. In our commentary, we critiqued the analytical methods they describe in their target article, because "both their bias tackling workhorses, fail-safe N and trim-and-fill, are known to rest on untenable assumptions and are long considered outdated". We reasoned that the authors should instead apply state-of-the-art correction methods, such as regression-based methods (Stanley & Doucouliagos, 2014) and especially multiple-parameter selection models (e.g., McShane et al., 2016), by default to examine their claims. Such methods can help detect extremely shaky evidence, such as a mere 7 effects with a median N of 12 per cell.
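To make concrete why samples of this size warrant caution, here is a small illustrative power calculation (using statsmodels). The effect sizes below are conventional benchmarks, not estimates of any cleansing effect.

```python
# Hypothetical illustration: power of a two-sample t-test with n = 12 per
# cell, for a range of illustrative effect sizes. The point is that only
# enormous true effects are reliably detectable with such samples, so a
# run of significant results at this N is itself informative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8, 1.5):
    power = analysis.power(effect_size=d, nobs1=12, alpha=0.05, ratio=1.0)
    print(f"d = {d:.1f}: power = {power:.2f}")

# Conversely, the smallest effect detectable with 80% power at n = 12 per cell:
d_min = analysis.solve_power(nobs1=12, alpha=0.05, power=0.80, ratio=1.0)
print(f"d needed for 80% power: {d_min:.2f}")  # roughly 1.2
```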
Conclusion
All in all, we have shown here again why there is a lack of evidence for the replicability of cleansing effects based on the evidence Lee and Schwarz present, and why the successful replications are in fact consistent with the non-successful ones. The answer to our challenge of the data is not to apply an analysis approach that changes the inference criteria they themselves set forth. Instead, close, pre-registered replications will provide a better answer.
Finally, we want to comment on these constantly shifting inference criteria. Lee and Schwarz likely did not consider the discrepancies between their article and their re-computation, or the addition of a study, important enough to warrant a mention in their response. So the interpretation from the target article – that these effects are evidence in favor of the replicability of cleansing effects – remains unchallenged and may continue to misguide readers. What this process illustrates is one of the symptoms of an all-too-common problem: a loose derivation chain from theoretical premises, to the statistical instantiation of those premises, to substantive inferences. In such instances, the same evidence can be used as a rhetorical device to support exactly opposite stances. Such moving targets create weak theories and rebuff solid critiques of one's work. In the end, it all comes down to what one considers adequate empirical evidence for a scientific claim.
Further explanations
Explanation 1.
As an additional response to their first critique (“[Ropovik et al.] draw [their] conclusion [that there is lack of robust evidence for the replicability of cleansing effects] on the basis of a p-curve analysis of a small subset of the entire body of experimental research on the psychological consequences and antecedents of physical cleansing (namely, seven out of several hundred effects)”), we wanted to complement our critique by discussing the history of our conversation with Lee and Schwarz.
After reading their target article and before writing our commentary, we asked Lee and Schwarz to share the data underlying their recent meta-analysis, whose conclusions they had incorporated in the target article. We strongly believed that their evidence, based on what they described, was not as strong as they claimed it to be. As the meta-analysis was one of the core components of their target article, we deemed independent verification to be of crucial importance. They declined our invitation for independent verification because "the meta-analytic review was still being written up and any quantitative presentation of its results would prevent them from submitting the manuscript to Psychological Bulletin".
We accepted their refusal to share the data. As we believed their bias-tackling workhorses rest on untenable assumptions and are outdated, we chose instead to assess the evidential value of the replication evidence as Lee and Schwarz present it. (In their rejoinder to our commentary, they indicated that our observation that trim-and-fill and fail-safe N are long considered outdated reflects our sentiments more than the standards of the field, because recent meta-analyses published in Psychological Bulletin still employ these methods. We thought this was a pretty funny way of arguing a point. Perhaps the authors missed this part of our commentary, but Becker (2005), Ferguson and Heene (2012), and Stanley and Doucouliagos (2014) have clearly shown these methods to be outdated, and we prefer to rely on science over engaging in argumentum ad populum. It is, however, true, as they state, that psychology sometimes uses outdated methods. For example, while McDonald's Omega should be used instead of Cronbach's Alpha in most instances (Dunn et al., 2014; Revelle & Zinbarg, 2009; Sijtsma, 2009), some researchers stubbornly resist updating their methodology (e.g., Hauser & Schwarz, 2020). Or consider the fact that it has been known for years that sufficiently powering one's studies is necessary to reduce the chance of obtaining a false positive; researchers still persist in underpowering their research, even years after the Bem (2011) and Simmons et al. (2011) articles (e.g., Lee & Schwarz, 2014).) There are very few replications of cleansing effects (with only a minority showing success). Being the leading experts in their field, Lee and Schwarz either reported all the successful ones or chose to present a subset that we reasonably thought would be the best ones.
Maybe there are other replication studies with feeble evidence or problematic designs. Maybe there are far more failed replications. We don't know, as we did not receive the data from the authors and simply analyzed their "qualitative insights" (communication with original authors, 2020). On the one hand, that makes it a non-standard way of synthesizing the evidence. On the other hand, we regard it as a steel-man way to appraise the merits of precisely the evidence that Lee and Schwarz themselves hand-picked as prominent examples from the literature to support the vital auxiliary assumption of their theory – replicability. (That said, we fully agree that we drew the conclusion about the evidence for the replicability of cleansing effects based on a small, rather tiny, subset of the relevant literature. We regard it as self-evident that if the target of inference is evidence of replicability, original studies are to be excluded. Why didn't we search for all conducted replications? Because the target of our inference was the replication evidence that Lee and Schwarz presented as such, and because a sizeable proportion of studies were not part of the public record. Apparently only a handful of studies set out to replicate an experiment on cleansing effects, and the only ones that seemed successful were severely underpowered.)
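As an aside for readers unfamiliar with the regression-based corrections mentioned above (Stanley & Doucouliagos, 2014), here is a toy sketch in the spirit of PET, run on simulated, entirely hypothetical data rather than on any cleansing effects. The idea it illustrates: when only significant results survive, observed effects track their standard errors, and the weighted regression intercept (the predicted effect at SE = 0) is typically far less inflated than a naive average.

```python
# Toy, simulated illustration of a PET-style regression correction
# (Stanley & Doucouliagos, 2014): regress observed effect sizes on their
# standard errors with inverse-variance weights; the intercept estimates
# the effect at SE = 0. Here the true effect is zero and only positive,
# "significant" studies survive, mimicking selective reporting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
true_d = 0.0
effects, ses = [], []
while len(effects) < 40:                       # collect 40 "published" studies
    n = rng.integers(10, 60)                   # per-group sample size
    se = np.sqrt(2 / n)                        # approximate SE of Cohen's d near zero
    d_obs = rng.normal(true_d, se)             # observed effect with sampling error
    if d_obs > 0 and d_obs / se > 1.96:        # only positive, "significant" results survive
        effects.append(d_obs)
        ses.append(se)

effects, ses = np.array(effects), np.array(ses)
naive = np.average(effects, weights=1 / ses**2)

X = sm.add_constant(ses)                        # PET: effect ~ intercept + slope * SE
pet = sm.WLS(effects, X, weights=1 / ses**2).fit()

print(f"Naive inverse-variance average: {naive:.2f}")
print(f"PET intercept (bias-adjusted):  {pet.params[0]:.2f}")
```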
Explanation 2.
Lee and Schwarz claimed that we didn't follow best practice because (1) we hadn't put together a p-curve disclosure table and (2) we did not re-compute the p-values that serve as input to the p-curve. The first claim is simply false. Still, this point distracts from the main point of disagreement. The point of a disclosure table is to identify the target effect in a study and to ensure that the synthesized effect was the focal effect of the study. In this case, Lee and Schwarz, not us, were the ones who identified the focal effects in their target article; we just followed their lead. For every individual effect, our table clearly identifies the paper and study it comes from, quotes the text string where the effect is reported, and records the effect size, reported p-value, N for the given test, and the authors' inference about whether the effect was found. We also coded numerous other data about the measurement properties of the dependent measure.
Regarding the second objection, this is a more interesting disagreement. Of course, we understand the importance of re-computing effect sizes or test statistics for any ordinary evidence synthesis, as we have done elsewhere. Refraining from re-computing the focal results of the studies listed by Lee and Schwarz and taking the reported evidence at face value was, however, a conscious choice, made for two reasons. As we explicitly stated, our goal in this very specific analysis was to appraise the merits of a finite set of empirical evidence, as used by Lee and Schwarz to support their proposed theory. There was no goal to infer beyond that finite set or to estimate some true underlying effect size. In such a case, it makes most sense to take the relevant evidence as it stands.
First and foremost, the biasing selection process is not guided by re-computed p-values. Second, few practitioners or members of the public re-compute p-values when they read the conclusions of a study and adjust their reading accordingly; nor do many colleagues deciding which hypotheses to pursue next or building theories (like the one concerning "Grounded Procedures"). In their target article, Lee and Schwarz seemed to be no exception (though reading their rejoinder, we sometimes wonder whether the target article and the rejoinder were written by different sets of people).
What's more, a sizable proportion of the significant effects that both Lee and Schwarz (in the target article) and the replication authors presented as successful replications turn non-significant after their re-computation. Lee and Schwarz likely did not consider the discrepancy between their article and their re-computation important enough to warrant a mention in their response. So the interpretation from the target article – that these effects are evidence in favor of the replicability of cleansing effects – remains unchallenged and may continue to misguide readers. What this process illustrates is one of the symptoms of an all-too-common problem: a loose derivation chain from theoretical premises, to the statistical instantiation of those premises, to substantive inferences. In such instances, the same evidence can be used as a rhetorical device to support exactly opposite stances.
Further, any evidence synthesis requires at least basic information about the study design and analytical approach. In this case, half of the successfully replicated effects came from unpublished studies with no full-fledged empirical paper available. Re-computing p-values would have required a leap of faith because critical pieces of information were frequently missing. For instance, the replication authors may not have assumed equal group variances in a t-test (as the re-computation assumes) and, instead of reporting the df for Welch's test (not a whole number), may have simply reported N – 2 as the df. The analytic sample size may not have been equal to, e.g., df + 2 in a two-sample t-test. The replication authors might also have excluded some participants for legitimate reasons.
Lee and Schwarz' re-computation also assumed that rounding of the reported test statistics had no effect on the p-value. They further presumed that a two-tailed test was always the proper statistical translation of the substantive hypothesis. In some instances, it was not clear what exact statistical model was used and whether it was parametric at all. Lastly, as is obvious from the p-curve, our conclusion was not contingent upon the decision to re-compute the p-values: we would have arrived at the exact same conclusion even if we had re-computed them – a lack of robust evidence for the replicability of cleansing effects.
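A hypothetical example illustrates the rounding point; the numbers below are invented for illustration and are not taken from any of the replication studies.

```python
# Hypothetical illustration of how rounding a reported test statistic makes
# an "exact" re-computed p-value ambiguous. A value reported as t(28) = 2.05
# could be anything in [2.045, 2.055), which straddles the two-tailed .05
# threshold for df = 28 (critical t ≈ 2.048).
from scipy.stats import t

df = 28
for t_true in (2.045, 2.050, 2.055):
    p_two_tailed = 2 * t.sf(t_true, df)
    print(f"t({df}) = {t_true:.3f} -> p = {p_two_tailed:.4f}")
# The reported, rounded statistic is consistent with p-values on both sides
# of .05, so a mechanical re-computation can flip the original inference
# without any new information.
```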
We think it is only fair that we gave the authors of the replication studies (and Lee and Schwarz) the benefit of the doubt and took the reported results of the inferential tests at face value. So, if an exact p-value was available, in agreement with the authors' inferences at the given alpha level, and in agreement with the substantive inference made by Lee and Schwarz (as leading experts in the field) in their target article, we took it at face value – just like the integrity of the replication evidence itself.
Explanation 3.
All of the above culminates in our final explanation – the critique that we included p-values that should be excluded and excluded p-values that should be included. Despite the apparent eloquence of Lee and Schwarz' critique, we thought it might be helpful for the reader to see a transparent and more detailed presentation of the changes L&S made to the set of p-values.
Arbesfeld et al. (2014) and Besman et al. (2013) both tested a directional hypothesis for which they found support (p = .030 and .039). They claim that the effect replicated; so do Lee and Schwarz. However, by re-computing the p-values, Lee and Schwarz effectively ignored the fact that the replication authors regarded a one-tailed test as the proper instantiation of their substantive hypothesis. As the biasing selection process functions at a different alpha level for directional hypotheses, the application of the selection model should not force an irrelevant publication threshold. In this case, by forcing a two-tailed test, these effects dropped out of the p-curve set, as the method only includes significant effects. Of course, there are sometimes issues with the use of one-tailed tests in general and in p-curve in particular (these include a general bias towards evidential value and a different density in the upper part of the p-value distribution under the alternative hypothesis, which is, however, irrelevant in this context), and one can discuss how to deal with them. But more importantly, Lee and Schwarz did not find it sufficiently noteworthy to notify the reader of the disconnect between what is claimed in the target article ("successfully replicated") and the implication of their reanalysis ("these two effects ceased to be successful replications").
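The arithmetic at stake is straightforward; here is a sketch using the two reported one-tailed p-values and the usual doubling rule for a symmetric test statistic.

```python
# The two reported one-tailed p-values (.030 and .039, as cited above).
# Under the usual conversion for a symmetric test statistic, forcing a
# two-tailed reading doubles them, pushing both past the .05 inclusion
# threshold that p-curve uses, so they drop out of the p-curve set.
reported_one_tailed = [0.030, 0.039]

for p_one in reported_one_tailed:
    p_two = min(2 * p_one, 1.0)
    print(f"one-tailed p = {p_one:.3f} -> two-tailed p = {p_two:.3f}, "
          f"kept by p-curve: {p_two < 0.05}")
```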
Marotta and Bohner (2013) is not part of the public record; the result is publicly reported only in several of the lead author's (Spike Lee's) papers. In Lee and Schwarz' (2018) NHB paper, they report this effect as being associated with p = .054; in the present table, the re-computed p-value equals .0575. However, in their 2018 mini meta-analysis, as well as in some other papers (Dong & Lee, 2017; Schwarz & Lee, 2018), it was explicitly stated that the result replicated the original finding. Because the situation was unclear, and because .054 and, e.g., .04999999 are statistically the same effect (Gelman & Stern, 2006), we once again consistently applied the benefit-of-the-doubt principle and regarded it as a significant effect. It is the substantive inference that matters far more in practice than minuscule differences at the third decimal place. Regardless of whether the reader sees this decision as substantiated or not, it is unfortunate that Lee and Schwarz claim a successful replication when it suits them.
For the De Los Reyes et al. (2012) study, they synthesized the wrong effect (F(1, 44) = 5.77, p = .021; page 5), when in fact the results of the replication study are reported on p. 4, in the section "Replicating Lee and Schwarz's (2010) Clean Slate Effects", where the (attenuation) interaction effect F(1, 46) = 4.14 should have been selected. The former was the moderation by an individual difference variable; the latter was the ostensible replication. This focal replication effect is, however, associated with a much higher p-value of .048.
The ultimate game-changer was, however, the inclusion of publicly unavailable data from another conference poster (Moscatiello & Nagel, 2014). Again, the target of our inference was publicly available information, so we did not include it. Nevertheless, let's look at these experiments in more detail. In their target article, they only considered Experiment 2 a conceptual replication; thus, Experiment 1 should not have been included. Nevertheless, they included both Experiments 1 and 2 (where it is not even clear whether the samples were independent), which yielded 4 p-values (.0061, .3613, .0036, and .0006).
Given that all of these p-values were based on an N of 10 per cell, the effect sizes had to be very large for the three significant effects, equivalent to d = 1.55, 1.49, and 1.84 (assuming a between-subjects design). As an additional note, the latter two are the main effects for a focal reversal 2×2 interaction with an effect size so large as to strain belief, d = 1.66 (ηp² = .434). We leave it to the reader to judge the merits of such a study and the probability of observing three such uncommonly large effect sizes with N = 10 per cell in this research domain.
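For readers who want to check this kind of back-of-the-envelope conversion, the sketch below translates a reported two-tailed p-value and a per-cell n into an equivalent Cohen's d under the between-subjects, equal-n assumption stated above. Because the poster's design details are unavailable, the outputs are approximations rather than exact reconstructions of the values quoted here.

```python
# Back-converting a reported two-tailed p-value and per-cell n into an
# equivalent Cohen's d, assuming an independent-groups t-test with equal
# cell sizes (the between-subjects assumption stated above). Design details
# of the poster are unavailable, so these are approximations.
from math import sqrt
from scipy.stats import t

def p_to_d(p_two_tailed, n_per_cell):
    df = 2 * n_per_cell - 2
    t_val = t.isf(p_two_tailed / 2, df)        # |t| implied by the p-value
    return t_val * sqrt(2 / n_per_cell)        # d = t * sqrt(1/n1 + 1/n2)

# Significant p-values from the poster, with n = 10 per cell:
for p in (0.0061, 0.0036, 0.0006):
    print(f"p = {p}: equivalent d ≈ {p_to_d(p, 10):.2f}")
# For context, with 10 participants per cell even a conventionally "large"
# effect (d = 0.8) would reach significance less than half the time;
# p-values this small therefore imply effects well above d = 1.
```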
Before publishing our blog post, we gave Lee and Schwarz 1.5 weeks to address our concerns. After posting, they published a response on PsyArxiv (available here and in the comments below). We think that at this point, the reader has sufficient information to judge the replicability of cleansing effects and we will write no further reply. We only note that Lee and Schwarz have now concluded twice that they don’t consider Arbesfeld et al. (2014), Besman et al. (2013), and Marotta and Bohner (2013) as significant, while they considered them successful replications in their BBS target article. We think that at the very least this warrants a correction of their BBS article, as Lee and Schwarz no longer consider them successful replications.
One of the things we will miss possibly the most this pandemic winter in the Northern Hemisphere is gezelligheid [ɣəˈzɛləxɛit]. No real English equivalent of gezelligheid exists; the closest word in the English vernacular – coziness – still doesn’t capture the same feeling of intimacy and belonging. What does communicate a similar sentiment and is more familiar to US ears is the Danish concept hygge [hʊɡə] and the Swedish concept of lagom. Wikipedia describes gezelligheid as “’conviviality’, ‘coziness’, ‘fun’” or “just the general togetherness that gives people a warm feeling”. Perhaps the best description of gezelligheid is the sense of belonging when sitting around a warm fire during a cold Christmas. But gezelligheid undeniably extends beyond the nuclear family; when you are outside in the shops where strangers gather and you experience that conviviality so typical of the current time of the year it is also “gezellig”. None of this will be something we will encounter much this winter.
Gezelligheid, as it may be, is one way to socially cope with a cold and dark winter. Regulation of temperature is a driving force behind feeling close, being intimate, and feeling loved. Across the animal kingdom, the regulation of temperature is crucial to survival. Not being able to regulate one's temperature leads to certain death. When temperatures drop, animals (including humans) use more energy to warm themselves up. The cost of temperature regulation is often countered by distributing it across kin, a phenomenon called social thermoregulation. Take penguins, for instance. When dealing with the harsh winters of Antarctica, they get together and "huddle" in a circle. Even if the ambient temperature is -40 degrees, the temperature at the center of the huddle can rise to a whopping 99.5 degrees Fahrenheit (37.5 °C).
But penguins are not humans and humans are not penguins. Humans deal with fluctuating temperatures in many more sophisticated ways than penguins do (e.g., relying on central heating or warm clothes). This has not been true for most of human history, however, so social thermoregulation has definitely left its imprint on human culture. Different languages, for instance, have different numbers of temperature terms: some distinguish between cold, lukewarm, and warm; others have only two temperature terms, like cold and warm. Languages also differ in whether they use metaphors that combine warmth and affection. In Dutch, for example, you can say of someone you are not particularly fond of "Zij laat me koud" (literally, "she leaves me cold"), while in English one can say that you have a fond and warm memory of someone. Of the 84 languages the linguist Masha Koptjevskaja-Tamm sampled in her research, 52 use metaphors combining warmth and affection – the others do not.
Yet the research we are engaged in cannot (nor should it) rival the research currently being conducted on vaccines, where BioNTech received a €375 million grant from the German government to develop a vaccine against Covid-19 and help solve one of the worst crises in recent human history. Compared to that, our research runs on a shoestring budget. That means the work we have done does not reach what researchers in other disciplines would call a high Technology Readiness Level; put differently, we cannot have the same confidence in our work as one can have in a Phase III trial of the Moderna or Pfizer/BioNTech vaccines. Providing concrete advice on how to resolve the lack of gezelligheid is not psychology's forte, but insight into the mechanisms behind social thermoregulation will make you all the wiser in making educated guesses to deal with the lack of gezelligheid.
During this pandemic winter, many of us will be away from the people we love most. The absence of the physical presence of loved ones deprives us of hugs, physical touch, and feelings of physical and psychological warmth that no amount of Skype or Zoom ever seems to fully replace.
In his forthcoming book, Heartwarming, one of the authors of this editorial (Rocha IJzerman) explores the science of why this is, a field called social thermoregulation. This science points to promising technologies like the Embr Wave, a smartphone-controlled warmth-producing bracelet, that seem to provide ways to compensate for the physical and psychological warmth that Skype and Zoom lack. Yet the book stops short of recommending these technologies, or indeed offering any concrete advice at all. Why is this?
One reason stems from the sordid tale of Brian Wansink. A former marketing researcher at Cornell, Wansink excelled at conducting clever-sounding studies on the psychology of eating and packaging them into bite-size, actionable pieces of advice tailor-made for public consumption. He would then pitch this advice in outlets ranging from O Magazine to the Today Show. His work even served as the basis for federal policy in the form of the Smarter Lunchrooms Movement. Yet proper scrutiny revealed a shocking level of sloppiness in Wansink's research. Although Wansink had spun his research into advice for millions of people and a federal program funded to the tune of $22 million, all these products were built on foundations of sand.
Nor is Wansink's work the only example. Take research on implicit bias. Defined as a set of fast, relatively automatic mental pairings between social groups and other concepts, implicit bias was popularized largely through the efforts of Anthony Greenwald and Mahzarin Banaji. These two researchers developed a test, the Implicit Association Test, and demonstrated it at a press conference in 1998, where they claimed to have data showing that 90 to 95 percent of people harbor "the unconscious roots of prejudice." The test became an immediate sensation, receiving favorable coverage from the wildly popular science journalist Malcolm Gladwell, NPR correspondent Shankar Vedantam, and, more recently, even then-presidential candidate Hillary Clinton in the rarified halls of a US presidential debate.
Yet a closer look at the evidence on implicit bias reveals a field that is woefully unready for application. The Implicit Association Test is plagued by measurement problems (as are, for that matter, other alternative measures of implicit bias). For example, the same person taking the test multiple times too often gets a different result, probably because the test is contaminated by large amounts of measurement error. The relationship between people's scores on the IAT and their actual behavior is also quite weak. Finally, a closer look at the studies that ought to be most relevant to policy – those that try to change implicit bias – reveals that this field relies very heavily on student samples, rarely assesses whether the changes persist, and contains little to no evidence that changes in implicit bias lead to changes in behavior.
In Heartwarming, Rocha IJzerman wanted to avoid the mistakes of Brian Wansink, Tony Greenwald, and Mahzarin Banaji. We believe psychology can learn from other research fields, like rocket science and drug development, that excel at translating research findings into safe and effective applications. Rocket science uses a framework called the Technology Readiness Levels to evaluate the state of the evidence behind an application and to guide the research program. Drug development uses a series of phased trials to evaluate both the effectiveness and the safety of a novel drug. As we have seen from the accomplishments of these fields — from putting people on the moon to creating a safe and effective Covid-19 vaccine in record-breaking time — this systematic process works.
Heartwarming judges the science of social thermoregulation using standards closer to those of rocket science and drug development rather than those of Brian Wansink. According to those standards, social thermoregulation, though promising, is not yet ready to serve as the basis of advice and other applications. Our unpublished quantitative review suggests the available evidence on social thermoregulation can support the broad conclusion of Heartwarming: there are intriguing links between the regulation of temperature and social relationships that promise to provide unique insights into the roots of human sociality. Yet social thermoregulation research still largely relies on samples of participants that are very peculiar — college students from Europe and the United States. We also know little about the effectiveness of different dosages of warmth for changing psychology. Nor, as would be routine in drug development, have the technologies based on social thermoregulation research been evaluated in household settings for things like unintended side effects.
Thus, while Heartwarming does present science that is promising enough that it could in the future serve as the basis for advice and application, it does so with caveats. The book therefore notes where social thermoregulation research relies too heavily on unusual and non-representative samples, and it also notes where its findings have not been subjected to the type of big-team, large-scale testing that you might get in drug development.
Psychology may yet initiate the reforms necessary to make it robust enough for application. Psychology researchers (ourselves included) have developed our own frameworks to help evaluate just when psychology evidence is ready for application. Moreover, psychology researchers have created organizations and proposed broad-based reforms to make the big, team-style research more commonplace.
In the meantime, beware psychologists bearing overconfident advice. You are bound to be disappointed.
Progress in psychology has been frustrated by challenges concerning replicability, generalizability, strategy selection, inferential reproducibility, and computational reproducibility. Although often discussed separately, I argue that these five challenges share a common cause: insufficient investment of resources into the typical psychology study. I further suggest that big team science can help address these challenges by allowing researchers to pool their resources to efficiently and drastically increase the amount of resources available for a single study. However, the current incentives, infrastructure, and institutions in academic science have all developed under the assumption that science is conducted by solo Principal Investigators and their dependent trainees. These barriers must be overcome if big team science is to be sustainable. Big team science likely also carries unique risks, such as the potential for big team science institutions to monopolize power, become overly conservative, make mistakes at a grand scale, or fail entirely due to mismanagement and a lack of financial sustainability. I illustrate the promise, barriers, and risks of big team science with the experiences of the Psychological Science Accelerator, a global research network of over 1400 members from 70+ countries.
"There is nothing so practical as a good theory." This quote is often attributed to the person hailed as the "father of social psychology", Kurt Lewin. In social psychology textbooks, the quote is used to justify a mode of social psychology whereby theories that are developed and tested in the lab "naturally" and "inherently" lead to useful social applications (Billig, 2015).
Here I will argue that theory is not always helpful for solving practical social problems. This is because fixating overmuch on theory can lead to what I call theory blindness, a singular focus on the aspects of the problem that are relevant to the theory at the expense of other aspects that are just as important when the problem is considered in its entirety. I review a case study that illustrates the dangers of theory blindness and close with an argument for a focus on the practical problem as it exists in its original context rather than a focus on any particular theory.
Theory application as a model for practical impact
The dominant model of how social psychology comes to have a useful impact on society, at least as portrayed in social psychology textbooks (Billig, 2015), is what I will call the theory application model. In this model, theory development may be somewhat informed by a particular social problem, but theory testing happens largely in tightly controlled environments, such as the laboratory or an online survey platform. The process proceeds something like this:
The social psychologist develops a theory. The theory can come from many sources: introspection, personal experience, the literature, or perhaps as a response to current events. Although this is rare in social psychology, the theory may even follow a process of formal theory development. These theories may be somewhat informed by a social problem, but not necessarily the problem in its full, original context.
The social psychologist tests hypotheses derived from the theory in tightly controlled environments. The theory is then used to derive hypotheses that, in social psychology at least, are usually tested using experimental methods, most often in the laboratory or a tightly controlled online platform like Qualtrics. The results of these experiments are used to support the theory, refine it, and add boundary conditions. The usual justification for the high degrees of control in this step is that this control is necessary to isolate and manipulate the psychological processes that are relevant to the theory (Billig, 2015).
The social psychologist applies the theory to a practical problem as a way to test its generalizability. Once theory-derived hypotheses have been tested in tightly controlled environments, the social psychologist may choose to conduct a study (for example, a field experiment) specifically designed to address some practical social problem. Usually, this exercise is framed as an application of theoretical principles already established in the previous two steps; the implication is that the study cannot directly refute the theory but merely demonstrate how its principles generalize to new, somewhat uncontrolled and messy settings.
Theory highlights some things and de-emphasizes others
The theory application model of practical impact illustrates how theory might, at least in principle, help solve social problems. Psychological theories provide working models of how psychological processes produce behavior. Insofar as individual behaviors “add up” to produce a social problem, psychological theories can highlight the psychological processes that produce those individual behaviors. Theories therefore also provide ready explanations for social problems and, due to their usefulness in identifying causal mechanisms behind individual behaviors, a guidebook for intervention via disrupting those mechanisms.
For example, the appraisal theory of emotion argues that emotions are produced by a person’s interpretations of events – in other words, their appraisals. This theory highlights these appraisals as a cause of emotion. The implication is that, to solve problems related to widespread negative emotions, such as the emotional fallout of a natural disaster, one should look to identify and change people’s interpretations of the natural disaster. This can lead scientists to develop and deploy specialized measures of people’s appraisals of the natural disaster and apply interventions that target those appraisals. Under the spotlight of appraisal theory, aspects of a social problem related to appraisals become valid observations while other aspects of the problem become de-emphasized.
Herein lies the danger of the theory application model of practical impact: if the theory does not fully capture the problem’s original context, it can leave important aspects of the problem outside its theoretical spotlight. For example, the lens of appraisal theory might cause well-meaning scientists who wish to provide mental health assistance in the wake of a natural disaster to neglect the real mental burdens caused by the economic and social fallout of the disaster. Focusing too much on people’s appraisals of a natural disaster as a cause of poor mental health is a demeaning way to help people who have just lost their homes and loved ones. In addition to leading to misguided forms of help, theory blindness can have real material costs if lives depend on the help’s effectiveness. In the most extreme situations, these costs can include lost lives and livelihoods.
The curious blindness caused by theory is part of a phenomenon that philosophers of science call the theory-dependence of observation. Theories define what counts as a valid observation. Theories can lead people to develop entire instruments dedicated to the measurement of the processes posited by theory (see Greenwald, 2012, for examples). In the context of a social problem, everything outside the measurement paradigm becomes “noise” that is outside the theory’s scope. When the theory-derived measures are deployed to solve a social problem, these “extraneous” factors can therefore be interpreted as irrelevant to the target problem.
I am not the first to note the peculiar dangers of theory for application. Daniel Kahneman called this phenomenon “theory-induced blindness” and linked it to the mental heuristics and biases that dominated his decision-making research. Curiously, Kurt Lewin himself may not have endorsed theory as a route to application (see Lewin, 1931) – at least in the way social psychologists currently go about it. (Even curiouser, Lewin was also not the originator of the quote that leads this blog.)
Despite this history of similar critique, I believe the dangers of theory for application are not widely recognized. I therefore illustrate these dangers with a concrete case study.
Case study on implicit bias
The concept of implicit bias was developed to explain a particular contradiction in the United States: in large, national surveys racial attitudes appeared to consistently improve from the 1960s through the early 1990s (Schuman, Steeh, Bobo, & Krysan, 1997). Yet, despite these improvements, racial disparities remained more stable than many scholars would like (Lee, 2002). These stable patterns in disparities were reflected in the laboratory, where even participants who claimed to value equality seemed to act unfairly when assessed on “unobtrusive” measures of bias (i.e., measures where, from the participant’s perspective, no particular response could be clearly labeled as “prejudice” or “discrimination”; Crosby, Bromley, & Saxe, 1980). Solving the puzzle of why people’s self-reports contradict their laboratory behavior provided an enticing means of making a practical impact.
Dual process models, such as the “prejudice habit model” (Devine, 1989), arose as an explanation for this contradiction. In these models, people’s beliefs had changed throughout the 1960s up until the 1990s, but a fast, relatively automatic mental process had not. When national surveys asked people how they felt about Black people, those surveys measured beliefs. However, the studies of “unobtrusive” bias measured behaviors that were more influenced by the automatic process.
At first, research in the dual process tradition was stymied by the fact that there was no independent measure of the automatic process. This changed with the creation of “implicit measures”, especially the Implicit Association Test. These measures were important for two reasons. First, they gave researchers a tool that purported to quickly, easily, and directly assess the automatic process (which came to be known as “implicit bias”). Second, they created a “palpable experience” (Monteith, Voils, & Ashburn-Nardo, 2005) that, when made widely available through websites like Project Implicit, greatly enhanced the standing of implicit bias in the public imagination.
The result was an explosion of research on implicit bias and a steady increase in the reach of the concepts into the public sphere. This research included observational studies designed to validate the new class of implicit measures and demonstrate the potential consequences of implicit bias (Cunningham, Preacher, & Banaji, 2001). The research also included manipulations designed to investigate the procedures that could change implicit bias (Dasgupta & Greenwald, 2001).
A funny thing happened over the course of this research on implicit bias: the implicit measures, but especially the IAT, became a target of change in their own right. In effect, the IAT became a stand-in for the problems related to social disparities. Thus, the existence of bias on the IAT became a convenient way of representing those disparities, and demonstrating the presence or absence of change on the IAT became a way of demonstrating which procedures might have promise for changing social disparities.
The result was a cottage industry of interventions and trainings all developed around the concept of implicit bias, especially as represented by the IAT. This has come to a head in the public sphere, where large companies like Starbucks and Delta Airlines have rolled out company-wide training programs focused on implicit or unconscious bias. Governmental agencies have also taken an interest; the New York Police Department has instituted its own training, as has the UK civil service (this program was recently scrapped).
Meanwhile, the evidence is muddled at best that implicit bias does indeed cause real-world disparities, and in some settings there is strong evidence of the importance of non-psychological factors. Take racial disparities in who is subject to police misconduct. In many US jurisdictions, firing problematic officers is extremely difficult because weaker disciplinary oversight is one of the main concessions that police unions have extracted through collective bargaining. This concession makes it difficult to remove officers who are acting out of line, with a disproportionate impact on communities of color. The implication is that reforming systems of collective bargaining might be an effective means of reducing police misconduct, especially in communities of color.
This insight comes not from a particular theory, but from a careful examination of the historical and policy environments of US policing. However, an observer who approaches problems in US policing wearing the blinders of dual process theories might miss this important insight, pursuing instead reforms such as mandatory implicit bias training that do not effectively alleviate the suffering caused by police misconduct.
The problem-solving model as an alternative to the theory application model
The police union example illustrates an alternative means of achieving practical impact that differs from the theory application model: a focus on a particular, substantive problem as it exists in its original context. The process that I suggest bears some similarities to action research – a research method initially developed by, ironically, none other than Kurt Lewin (Lewin, 1946). My suggested process goes like so:
Analyze the target setting to develop a definition of the target problem. The first step is developing a careful analysis of the problem setting. This analysis should, at a minimum, draw on a consultation with stakeholders, but it can also draw on historical and policy analysis and pre-existing administrative data (if available). Any and every tool that provides insight into the problem is on the table; this stage may therefore require tools from many disciplines and levels of analysis. At this stage, the problem-to-be-solved may not yet be specifically defined. A major goal of this stage is to integrate the various sources of information to come to a more firm definition.
Select measures of the problem and define success and failure using those measures. A definition of the problem is little help if there are no systems for assessing progress in solving the problem. This second stage involves defining the problem in terms of a measure that can be readily deployed in the problem setting. In some cases, no pre-existing monitoring system exists; in those cases, this second stage may involve devising and implementing such a monitoring system.
Define the universe of interventions you can implement. Not all interventions are within the realm of possibility in all target settings; what is feasible will be constrained by pre-existing policy, law, and available resources. This third stage involves surveying the range of possible interventions you can deploy given these practical constraints. The interventions themselves can come from anywhere, whether from psychological theory or from the careful analysis of the target setting developed in the first step.
Implement one or more interventions with a plan for progress monitoring. The final step is to implement one or more of the interventions identified in the third step, with a plan for progress monitoring using the measures selected in the second step.
As in action research, these four steps can form a feedback loop: if the intervention tested in the fourth step is unsuccessful, that can form the basis for a new analysis of the target problem in the first step.
Theories may indeed have a role to play at different stages of the problem-solving model. However, and unlike the case with the theory application model, the orientation of the problem-solving model is on a particular problem in its original setting rather than on a theory and its degree of empirical support. The problem setting therefore serves not just as a venue for demonstrating a particular theory’s generality but as the entire focus of the research process. Moreover, the model embraces the “messiness” of the problem setting’s history, politics, and resource constraints rather than attempting to control the messiness away, as understanding this messiness is critical to developing interventions that work.
Conclusion: Theories are not always helpful for generating practical impact
Personally, I do not dispute the usefulness of theory – if the primary goal is indeed knowledge advancement. If the primary goal is pragmatic, I worry that processes like theory blindness may interfere with a clear-eyed view of the target social problem. For this reason, I believe that a problem-focused approach is a more effective means of achieving pragmatic goals.
Notes:
These ideas were conceived at the Relationship Preconference at the 2021 meeting of the Society for Personality and Social Psychology. Thanks to Farid Anvari, Hans IJzerman, Daniël Lakens, Duygu Uygun-Tunç, Peder Isager and Leonid Tiokhin for their helpful comments on previous drafts of this blog.
During our journal club, we discuss a variety of articles. Some of them are focused on specific topics, but many of them are focused on broader methods (for a full list, see the featured image).
During last meeting (January 22, 2021), we discussed the value of computational modeling. We used Wander Jager’s article “Enhancing the Realism of Simulation (EROS): On Implementing and Developing Psychological Theory in Social Simulation” as the base for our discussion.
Through an example of social simulations, the author asked how we can use modeling in psychological research. The idea behind artificial societies, social simulations and multi-agent systems is simple – building a virtual system where many agents with certain characteristics (such as preferences, behaviors, and psychological traits) interact with each other. The difficulty of implementing the idea increases with the complexity of individual agents.
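To make this concrete, here is a minimal sketch of such a simulation in R (our own illustration, not taken from Jager's article; the agent traits and parameter values are invented): agents hold a continuous "opinion" and, on each encounter, move a small step toward the opinion of a randomly chosen partner.
# minimal, illustrative agent-based simulation: each agent has an opinion and
# an "openness" trait that governs how strongly it adjusts toward others
set.seed(42)
n_agents <- 100                       # number of simulated agents (arbitrary)
n_steps <- 500                        # number of interaction rounds (arbitrary)
openness <- runif(n_agents, 0, 0.5)   # per-agent psychological trait
opinion <- runif(n_agents, -1, 1)     # initial opinions
for (step in seq_len(n_steps)) {
  i <- sample(n_agents, 1)                        # a randomly selected agent...
  j <- sample(setdiff(seq_len(n_agents), i), 1)   # ...meets a random partner
  opinion[i] <- opinion[i] + openness[i] * (opinion[j] - opinion[i])
}
hist(opinion, main = "Opinions after simulated interactions", xlab = "Opinion")
Even this toy example shows where the difficulty lies: everything interesting about the agents' psychology is compressed into a single "openness" parameter.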
We reflected on these issues, and while we agreed that the tool has great potential, there are still many obstacles to overcome before we can use simulated agents in psychological research. For example, because psychological theories are often still too vague, we are far from achieving "psychological realism" in agents: ideas are frequently not sufficiently well operationalized and/or measured, may only apply to single situations (and not generalize to others), and are often only tested in a subset of the population. There are also problems related to questionable research and/or measurement practices.
Another obstacle we discussed is the fact that psychologists lack sufficient mathematical and computational training to be able to translate theories into a language a computer can understand.
Finally, we agreed that simulation and modeling are necessary steps to take in the future, but adding another tool to the toolkit won’t solve the other problems we are already facing in psychology. There’s still plenty of work to do before we can reliably use artificial societies in psychological research.
We also identified a few additional resources for those who want to expand their knowledge on the subject:
In our previous post, we argued that the PSA has a grand vision and a budget that cannot easily support it. If the PSA is to fulfill its aspirations, the PSA must increase its funding so that it can support an administrative staff.
In this post, we will assume that the PSA wants to fulfill its grand vision. We will therefore explore ways that the PSA could create the funding streams that are necessary to achieve that vision. We will review four options:
Grant funding
Charitable giving
Fees-for-service
Membership dues
Most of our energy to date has focused on grant funding and charitable giving. A sustainable funding model will require a mix of all the options so that we can balance their strengths and weaknesses. We can think of the PSA’s funding as a sort of investment portfolio where we devote a percentage of effort to each potential income stream based on the PSA’s financial strategy and risk tolerance.
In the remainder of this post, we will evaluate the strengths and weaknesses of all four funding streams. For each funding stream, we will also review the activities that we have already attempted and what else we can do in that category.
The post is thorough and therefore long. Here are some big-picture takeaways:
We have already tried, quite vigorously, to secure grant funding and charitable giving, with modest success
Large-dollar grant funding is high-risk, high-reward and likely cannot be counted on to secure the PSA’s financial future; small-dollar grant funding cannot easily support staff
Charitable giving mostly yields small amounts of money, though donated research assistants could help provide labor for the PSA
Fees-for-service offer some potentially interesting funding streams, including conference organization (continuing the success of PSACON2020) and translation
Membership dues arguably fit the structure of the PSA best out of all funding streams, though we should think carefully about how to implement these without compromising our value of diversity and inclusion
We will argue that the PSA should pursue fees-for-service and membership dues, small-dollar grants, and charitable giving. Large-dollar grants should only be pursued in periods when the PSA can tolerate risky funding strategies. Finally, the PSA will need a bookkeeper and Chief Financial Officer to formalize its finances.
Grant-Writing
Grant-writing involves writing a specific proposal to a formal agency. The agency judges the proposal on its merits and its fit with the agency’s mission and, if the proposal is sufficiently meritorious, issues an award. This is the conventional way that scientists try to fund their work.
We argue that grant-writing is high effort, high risk, potentially high yield, but highly volatile. Grant-writing is thus a poor basis for long-term planning.
Strengths
Grants can be very big. The big granting institutions have a lot of money. For example, one of the grants we wrote in the past three years was for €10 million. This is not always true — many grants are for amounts closer to €1,000 — but when grants do pay off, they can pay off big.
Many scientists have grant-writing expertise. Because grants are the traditional means through which science is funded, grant-writing expertise is easy to find among the scientific community. This makes grant proposals easier to get off the ground than many other funding activities.
Weaknesses
Grants usually fund projects, not infrastructures. Funding an infrastructure like the PSA requires an ongoing investment, whereas projects complete within a few years. For this reason, granting agencies view projects as lower-risk funding priorities. Most grant mechanisms are therefore project-focused rather than infrastructure-focused.
Grant-writing is high risk. At the US National Institutes of Health, the US's largest funder, the overall success rate for its most common grant is 23%. This means that, if the PSA relies exclusively on grant funding, we ought to expect roughly 3 out of every 4 grant proposals to yield no money (see the short sketch after this list). It is hard to build a staffing plan for an organization under such risky conditions.
Grant-writing is inefficient. Grant-writing requires so much time and energy with such a low probability of paying off that the expected return may not justify the expected time investment. One computational model suggests that these inefficiencies are inherent to any funding scheme structured as a competition between proposals because scientists waste as much or more effort on the proposals as they would gain from the science that the proposals fund.
The money is not flexible. The PSA wants to use its money for many things: grants to under-resourced labs, paying staff, funding competitions to make scientific discoveries, and many other things. Grants have strict accounting rules that rule out many or most of these activities.
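To make the planning problem concrete, here is a rough back-of-the-envelope sketch (our own illustration, assuming each proposal independently has the 23% success rate cited above):
# chance that at least one of k proposals is funded, assuming each has an
# independent 23% success rate (a simplifying assumption)
p_success <- 0.23
k <- 1:6
data.frame(proposals = k,
           p_at_least_one_award = round(1 - (1 - p_success)^k, 2))
# even with 3 proposals, the chance of securing at least one award is only ~54%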
What have we done already?
One proposal to the European Research Council’s Synergy mechanism. This proposal was project-based and very large (€10 million). Some of the money would have gone directly to the PSA, but making the PSA funding work within the constraints of ERC accounting rules was challenging. Writing this grant took years of preparation, including one month of very intense effort by five PSA members. The proposal was rejected.
Two proposals to the US National Science Foundation. Both were somewhat large (~$200K) and may have been able to support PSA staff. Both were rejected.
Two proposals to the John Templeton Foundation. Both proposals were project-based and somewhat large (~$200K). One proposal was rejected and one is in limbo due to Covid-19.
Two proposals to the European Research Council’s Marie Curie mechanism. Both proposals were project-based and would have supported postdocs (~€120K). Although the postdocs would not have worked directly for the PSA, some of the proposed work would have indirectly benefited the PSA. Both proposals were rejected.
One proposal to Université Grenoble Alpes. This proposal was designed to fund grant-writing for both the PSA and UGA. This proposal was funded and has supported a postdoc at a cost of ~€120K. Although the postdoc officially works for UGA, around half their time is donated to the PSA.
One proposal to the Association for Psychological Science. This proposal was for $10,000 USD. All the money had to go toward participant recruitment for the PSACR project. This proposal was funded.
One proposal to Amazon. This proposal was small ($1,352 USD) and strictly dedicated to paying for server hosting for PSACR. This proposal was funded.
One proposal to the Society for Personality and Social Psychology. This proposal was small ($1,000 USD) and could only be used to support participant recruitment for PSA005. This proposal was funded.
One proposal to the Society for Improvement of Psychological Science. This proposal was small ($900 USD) and devoted to offsetting the costs of applying for nonprofit status. This proposal was funded.
The above grants translate to around $12,862,252 in funds sought and $163,252 received. This demonstrates the risk and volatility in grant funding.
In addition to the above activities, PSA leadership has submitted a large number of letters of inquiry and less formal communications seeking funding opportunities. These did not reach the stage of formal proposals.
What else can we do?
The PSA has explored most of the available options related to grant funding. Thus far, it has had the most success writing small, focused grants. These can support specific activities, such as participant recruitment, but cannot easily support staff.
Summary
Grant-writing for large grants is a high-risk, high-reward activity. Grant-writing for small grants still requires an investment of resources and also cannot easily fund PSA staff. Grant-writing is also the activity on which the PSA has spent most of its fundraising energy. The PSA may want to focus its grant-writing activities on smaller, project-focused grants.
Charitable giving
Charitable giving involves contacting individuals to solicit money and other resources. It is similar to grant-writing in that some entity gives money to the PSA of its own free will. It differs from grant-writing in that the giver is typically not a structured organization that announces funding guidelines and priorities.
We argue that charitable giving is moderate effort, low risk, low to medium yield, and medium volatility. Charitable giving therefore allows for somewhat better long-term planning than grant-writing does.
Strengths
Allows direct outreach to supportive members of the community. The PSA has generated a lot of support and goodwill. Direct outreach allows members of the community to express this goodwill in the form of direct financial support.
Unconstrained by deadlines. Many other financial streams, such as grants, come on tight deadlines. Solicitations of charitable giving can happen at any time.
The money is (usually) flexible. Grant funding comes with strict accounting rules. Charitable giving sometimes comes with strings attached, but less often.
Weaknesses
Donation amounts are often small. The cost for staff salary runs in the tens of thousands of dollars. Most gifts run in the tens of dollars.
The overhead for these gifts is often large. This limitation is linked to the first. In contrast to grants, where a payment from a granting agency only needs to be processed once, in a donation drive, payments need to be processed as many times as you have donors. Sometimes these payment processes need to go through a third party, such as PayPal. This can create substantial overhead in both labor and finances.
What have we done already?
Set up a Patreon. At the time of writing, this Patreon collects €200 per month. We have used this money to help fund special projects, such as the PSA001 Secondary Analysis Challenge, data collection for PSA001, PSA006, and PSACR, and other miscellaneous expenses.
Funding drives. We have conducted at least four funding drives in the past three years. These are usually structured around funding a specific project or activity, such as PSACR, and are often announced via the PSA blog. However, we also conducted a funding drive during the first PSA conference, PSACON2020. It is difficult to estimate precisely how much money these drives have generated, but the sum is likely less than $1000.
Direct outreach. We have also done direct outreach to specific people who we have thought might be willing and able to donate to the PSA. Usually we have relied on personal networks for these direct solicitations. This process is smoother now that the PSA is an official nonprofit and can therefore accept direct tax-deductible donations.
Speaking fee donations. A few members of PSA leadership have donated fees they’ve received to speak about multi-site research. A typical speaking fee is around $200. This source of income generates money, but is hard to scale to the point where it would support full-time staff.
Received donations of research assistant time. One special form of charitable giving is staff time that an organization donates to the PSA. This system allowed the CO-RE Lab, for example, to donate the time of one of its interns to the PSA. This donation enabled, among other things, the creation of the first Study Capacity Report. This system works best when the research assistant or intern can be directly embedded within a PSA supervisory structure.
What else can we do?
The PSA’s new status as a pending nonprofit gives it more options that fit under charitable giving. The option to accept donated time from research assistants is especially attractive — especially if the donation can be formalized in an official agreement between the PSA and the donating organization.
The PSA can also do more direct outreach to members, both through the PSA mailing list and at the time of registration with the member website (an option that bears some similarities to “membership fees”, discussed below).
Summary
Charitable giving is highly flexible and has generated some money for the PSA, though often the specific monetary amounts are small. One attractive option for the PSA now that it is a nonprofit is to accept more donations of time from member lab research assistants. This option works best when the agreement is formalized through an official agreement and when the research assistant can be embedded within a PSA supervisory structure.
Fees-for-service
A fee-for-service venture involves directly selling something, rather as a for-profit company would. Some of the proceeds from sales go toward maintaining the fee-for-service venture; the rest go toward supporting other activities within the organization.
One particularly interesting service the PSA can provide is translation. The PSA has extensive experience translating psychological tasks and measures into multiple languages. Moreover, the PSA has experience implementing the translations in software. Finally, if the PSA can help researchers recruit participants from more countries and cultures, that furthers its mission of ensuring that psychological science is more representative of humanity writ large. We will have more to say on this subject below.
Fees-for-service ventures are difficult to classify in terms of their effort, risk, yield, and volatility; much depends on the specific venture under consideration. Nevertheless, we believe they are underexplored and could usefully complement the PSA’s other efforts.
Strengths
Can leverage already-existing strengths of the organization. An organization that is already skilled at a specific task can build a business around that skill, leveraging its existing strengths to generate a sustainable income. For example, to conduct a multi-site study the PSA must accomplish many things, including developing a large, global community of scientists and researchers around a common mission, coordinating agreements across many institutions, developing materials that are appropriate across many cultures, translating those materials into many languages, creating and maintaining large databases about its members, and disseminating knowledge about multi-site studies. Many of these skills can be leveraged into an income-generating enterprise.
Builds a sustainable income stream. Once established, a fee-for-service income stream can sustain itself over long periods of time, buffering against the risks associated with other income streams (for example, a grant running out after three years).
The money is flexible. Because the money that comes from a fee-for-service venture is earned rather than donated by a third party, there is no donor who can attach strings to how the money is used. Thus, this money is very flexible and can be used to support staff salaries and any other activities the PSA desires.
Weaknesses
Startup risks. Startups require up-front investments and there is no guarantee that a market will materialize. Thus, any investment made into a fee-for-service venture may, in the worst case, be lost.
Mission capture. On the other hand, if a fee-for-service venture is very successful, that venture may capture the mission of the non-profit as a whole. Thus, fee-for-service ventures work best when they are well in line with the primary aims of the non-profit.
Conflicts of interest. Some fees-for-service ventures may create conflicts of interest for PSA members who work on them, as these ventures may create incentives that conflict with the PSA’s core mission.
What have we done already?
A PSA conference. This year, the PSA hosted its first conference, PSACON2020. The PSA charged a $60 registration fee with a generous waiver policy — the conference was planned on the assumption that ⅔ of the participants would receive waivers. The conference had 62 paying attendees (plus many more who received waivers) and brought in $3,720. This money went towards supporting the salary of the PSA’s junior administrator (and the main organizer of the conference), Savannah Lewis.
What else can we do?
Fees-for-service are a largely untapped opportunity for the PSA. Conducting multi-site studies is difficult and the PSA has developed a broad array of institutional knowledge to complete these studies. Much of this institutional knowledge can be leveraged into marketable services.
The success of PSACON2020 suggests that we could continue this conference in the future. If attendance remains stable or grows, this could become a small but consistent funding stream for the PSA.
A second interesting option is starting a dedicated PSA journal. This option entails some risk: projects published there cannot also appear in journals that are currently very prestigious (such as Nature Human Behaviour), so authors take on some personal risk by choosing to publish in the PSA journal instead. It is also unclear whether university libraries would be willing to pay subscription fees for a PSA journal. On the other hand, the success of society journals suggests that once this revenue source is established, it could be a large and reliable one.
One last interesting option is translation. Translation of psychological measures and tasks is difficult, yet the PSA has lots of experience pulling off this task. PSA member Adeyemi Adetula has proposed a PSA-affiliated translation service with a first focus on Africa. This blog contains two surveys, one for people interested in being paid translators and one for people who want measures and tasks translated. If you are interested in this idea as a means of funding the PSA, head over to the blog, read the proposal, and complete one or both of the surveys.
Summary
Fees-for-service involves leveraging the PSA’s strengths to sell a service, generating a new income stream. The PSA has started some limited versions of fee-for-service funding, but this is still a largely untapped opportunity.
In addition to continuing the success of PSACON, the PSA could explore starting a service to provide translations of psychological tasks and measures. Both these activities are in line with the PSA’s core mission — the conference because it builds community and supports members, translation because it provides the translated measures and tasks necessary to recruit participants whose native language is not English.
Membership dues
Membership dues are a special case of fee-for-service funding, but the model fits the structure of the PSA so well that it is worth discussing in its own section. The membership dues model involves charging some amount of money for access to some or all of the benefits of PSA membership. This is the model followed by scientific societies, which often gate benefits such as grant funding and conference participation behind annual dues of, say, $247 (APA) or $237 (APS).
Membership dues also have some overlap with some forms of charitable giving. This is because one way to ask for a donation is at the time of registration through the PSA member website. The major difference between this sort of solicitation and a membership fee is whether the default membership is free or paid. Membership dues do have one major, somewhat hidden, advantage over a solicitation at the time of member registration: due to grant accounting rules, many PSA members who have their own grants cannot spend this money on donations. On the other hand, grant money can typically be spent on a membership fee. Instituting membership dues may therefore unlock grant money that is currently inaccessible to the PSA.
Some implementations of membership dues may conflict with a core value of the PSA, namely diversity and inclusion. If dues are required of all members, they risk pushing out prospective PSA members with fewer resources. There may be implementations of membership dues (such as a dues structure that allows for fee waivers) that can help mitigate this risk.
We believe that membership dues are low effort, moderate risk, medium yield, and low volatility. They therefore allow for better long-term planning than many other funding streams. However, we also believe that the specific kinds of risks that membership dues incur require feedback from PSA members before we rush into any decisions. We have created a survey to help seek this feedback; we lay out more context for why we see such promise in dues below.
Strengths
Many people have pots of money that cannot be given as gifts. Under the Grant-writing section, we noted that grant funds are inflexible. One consequence of this lack of flexibility is that PSA boosters who have grant funds are often not allowed to use them in charitable gifts because the expense cannot be justified to the granting agency. Membership dues provide a way for these people to justify giving money to the PSA to granting agencies, thus unlocking a resource for the PSA that would otherwise go untapped.
Leverages the PSA’s best resource, its community. The great strength of the PSA is that it has created a large international community (1400 members and counting) that believes in its mission. Membership dues allow the PSA to leverage this strength to better secure its long-term sustainability.
Dues are linked to PSA member benefits. The PSA provides many benefits to members, including a connection with a large international community of colleagues and collaborators, exposure to cutting-edge methods, and access to high-impact publications that can be added to one's CV. Membership dues would emphasize that the PSA is only able to provide these things by maintaining a large infrastructure.
Precedent in scientific societies. The dues funding model is the same one used by scientific societies, so dues-granted membership should be a concept that is familiar to PSA members.
Wide variety of potential implementations. Dues can be charged on a yearly basis or could be tied to joining a study (the cost of joining would be an administrative fee). The PSA could also consider a variety of waiver policies.
The money is flexible. The money is earned, so it could go toward supporting everything from staff salaries to ad hoc projects.
Builds a stable income stream. Once a membership dues program is in place, the income stream from the program should be relatively stable year-to-year.
Weaknesses
Risk of lowering membership. Gating membership by adding a joining fee could lower PSA membership, draining the PSA of its most valuable resource. Some policies could help manage this risk — for example, the membership dues could be suggested rather than required.
Risk of driving away members with fewer resources. The PSA has a particular interest in increasing its membership in countries that provide less support for the sciences, such as those in the African continent. Imposing membership dues could put joining the PSA out of reach for these prospective members. Once again, policy could help manage this risk — for example, the PSA could provide generous waivers, just as it did for PSACON.
What have we done already?
Nothing.
What else can we do?
Membership dues come with some real risks, so we want to carefully investigate this option before we take any hasty action. We also want to ensure that whatever action we take has the broad support of the PSA membership.
We have prepared a survey to investigate what PSA members think about different models of membership dues. By taking this survey you can help us evaluate the feasibility of using this as an income stream to promote the sustainability of the PSA.
Summary
Membership dues are arguably the funding model that best fits the structure of the PSA. However, some implementations of dues may conflict with the PSA value of diversity and inclusion. Membership dues have great potential as a funding stream, but we should think carefully about whether and how to implement them.
Conclusion
We strongly believe that the PSA has the potential to do enormous good for psychological science, and perhaps the social sciences at large. However, fulfilling this potential will require dedicated funding, and to obtain this funding the PSA may have to look beyond its current focus on grant-writing and charitable giving. Of course, the PSA need not abandon these income streams; it can simply balance these activities with efforts in developing funding streams through, say, fees-for-service ventures and membership dues.
If the PSA does develop a more balanced financial strategy, it will need staff to manage this strategy. This probably means appointing a Chief Financial Officer and a bookkeeper. The Center for Open Science, for example, has staff to manage its financial strategy and its books.
The PSA is at a crossroads. We think it has a real opportunity to grow into a more formal and mature organization that carries the big team science torch for psychology at large.
In one of my PhD projects, I – Olivier – am working on developing and validating a scale (the Social Thermoregulation, Risk Avoidance, and Eating Questionnaire – 2, or the STRAEQ-2). In the first phase of this project, we asked people from different countries to generate items. We did so to better represent behaviors from people across the globe, rather than just from the EU/US. This item-generation stage involved 152 authors from 115 universities. In this blog post we present this first phase and share the code we used for it.
Is attachment for coping with environmental threats?
Based on the premise that environmental threats shape personality (Buss, 2010; Wei et al., 2017), the goal of this project is to measure individual differences in the way people cope with the environment and to discover if these individual differences are linked to attachment. Attachment theory postulates that people seek proximity with reliable others to meet their needs (Bowlby, 1969), and the psychological literature suggests that distributing threats onto others is metabolically more efficient (Beckes & Coan, 2011). In a previous project (the STRAQ-1), Vergara et al. (2019) showed something consistent with this: individual differences in the way people cope with environmental threats (temperature regulation and risk avoidance) were linked to individual differences in attachment. But the reliability of the scale created and validated in that project was somewhat inconsistent across the countries in which data were collected. We therefore decided to extend the STRAQ-1 to make the scale more reliable across countries, now focusing on:
Fluctuations in temperature: generating a thermoregulation need (one's need to maintain internal temperature within a comfortable range in order to survive),
Physical threats: inducing risk avoidance (one's need to avoid predators or people who intend harm in order to survive),
Lack of food: requiring food intake (one's need to prevent starvation).
We are also dividing each dimension into 4 subdimensions to make the scale more consistent with the attachment literature: sensitivity to the need, solitary regulation of the need, social regulation of the need, and confidence that others will help to cope with the need.
What are the behaviors that account for coping with environmental threats?
Our first step was to generate the items, and we wanted to avoid the mistakes of the past. Do people in Peru, China, Nigeria, or Sweden deal with temperature the same way? Probably not. So, to obtain scale items that reflect a diverse range of coping behaviors, we asked our collaborators: we designed a Qualtrics survey in which they read a description of each construct, saw example items, and then generated their own items.
In total, 737 items were generated by 53 laboratories from 32 countries. To automate the procedure and to avoid copy-and-paste errors, we created an R script to import all the generated items from Qualtrics into a Google Document. We first created a text document from the Qualtrics file, containing the subscale name followed by every text entry (item) for each subdimension, alongside the name of the country that generated the item. Here is the piece of code that we used:
# this code selects the temperature sensitivity subscale (plus the respective countries) from the data frame 'items' (the Qualtrics file containing all the generated items) and writes the items to a .txt file called 'Items.txt':
library(tidyverse)
items %>%
select(thermo_sens, country) %>%
write_csv("Items.txt", col_names = TRUE, append = TRUE)
# we repeat this for all the subscales, appending them to the same file (Items.txt). For example, here we add the solitary thermoregulation subscale:
items %>%
select(thermo_soli, country) %>%
write_csv("Items.txt", col_names = TRUE, append = TRUE)
The lead team then imported the list of items (.txt) into an easily shareable document that tracks changes (a Google Doc) to correct misspellings, reformulate items, and remove duplicates from the list (all modifications are available here, and a clean version is here).
How to select relevant and diverse behaviors (items) from a large list?
Because it would be too fatiguing for our participants to answer all 737 generated items in one sitting (on top of the other questionnaires we offer them), we reduced the number of items to be included in the main survey. We created a diverse advisory committee (including 9 researchers from Chile, Brazil, Morocco, Nigeria, China, the Netherlands, and France); via an online survey, this advisory committee rated the degree to which they thought the items were representative of their respective constructs. Then, we computed the mean and standard deviation for each item. We selected, per subscale, the 10 items with the highest means and lowest standard deviations, and we replaced closely related items (~5 per subscale) to include a wider range of behaviors in the scale. To facilitate the procedure, we created dynamic tables per subscale in an R Markdown document. These tables allow you to sort by score (mean or SD), filter by country, or search for specific words in the items. Here is the code that we used to generate the tables:
# from a data frame called 'df' that contains all the items and all the subscales, we compute the mean and standard deviation of the expert ratings (assuming the rating columns are named expert_1 through expert_9):
library(matrixStats) # provides rowSds()
df <- df %>%
  mutate(mean = rowMeans(across(expert_1:expert_9), na.rm = TRUE),
         sd = rowSds(as.matrix(across(expert_1:expert_9)), na.rm = TRUE),
         mean = round(mean, digits = 2),
         sd = round(sd, digits = 2))
# then we tidy the data (that is, we switch from wide format to long format) in order to have one item per row:
df_tidy <- df %>%
  gather("expert", "rating", -item, -subscale,
         -mean, -sd, -country_list, -country)
# we create a new data frame including only one subscale (here, sensitivity to temperature) and arrange it so that the items with the highest means and lowest standard deviations come first:
temp_sens <- df %>%
  filter(grepl('temp_sens', subscale)) %>% # keep rows whose subscale column contains "temp_sens"
  arrange(desc(mean), sd) %>% # highest means first; ties broken by lowest sd
  select(item, mean, sd, country)
# and finally we print an interactive table, which displays the first 10 rows by default (this can be changed):
library(DT)
datatable(temp_sens)
We also plotted the distribution of the item ratings (overall, per expert, and per subscale) to detect whether there were issues with specific subscales. Here is the ggplot code that we used to do that:
# first we create a color palette that will be used for the plot:
palette <- c("#85D4E3", "#F4B5BD", "#9C964A", "#C94A7D", "#CDC08C",
"#FAD77B", "#7294D4", "#DC863B", "#972D15")
# and here we plot the ratings of the experts per subscale:
expert_plot <- df_tidy %>%
  ggplot(aes(rating, fill = expert)) +
  geom_bar(show.legend = FALSE) +
  # subscale.labs is assumed to be a named vector (defined earlier) that maps subscale codes to readable labels
  facet_grid(expert ~ subscale, labeller = labeller(subscale = subscale.labs)) +
  scale_fill_manual(values = palette)
expert_plot
We then created a world map of the countries that generated the final list of the 120 STRAEQ-2 items, to observe how diverse our list of items was (we ended up doing some replacements for the final project, to increase the diversity of the scale).
# first, import the world.cities database from the 'maps' package in order to get the latitude and longitude of cities around the globe:
library(maps)
data(world.cities)
# then you may want to rename some countries in your data frame (here the data frame contains the 120 selected items and is called 'items_120') so that they match the country names used in the world.cities data frame; in our case we needed to rename three countries:
items_120$country[items_120$country == "United Kingdom"] <- "UK"
items_120$country[items_120$country == "United States"] <- "USA"
items_120$country[items_120$country == "Serbia"] <- "Serbia and Montenegro"
# the next step is to merge the desired columns from the world.cities data frame with your data frame by country, in order to get the latitude and longitude next to your country names:
items_120 <- world.cities %>%
filter(capital == 1) %>%
select(country = country.etc, lat, lng = long) %>%
left_join(items_120, ., by = "country")
# we compute the number of items per country; this is needed to scale the size of the dots on the final map:
df_country_count <- items_120 %>%
  group_by(country) %>%
  summarise(n = n())
# we create the data frame to generate the map:
items_120 <- left_join(items_120, df_country_count)
# we create the map (depending on your data, you may want to change the formula passed to the radius argument to adjust the differences in dot size, as well as the value of the fillOpacity argument):
library(leaflet)
m <- leaflet(items_120) %>%
addTiles() %>%
addCircles(
lng = ~lng,
lat = ~lat,
weight = 1,
radius = ~ log(n + 5) * 100000,
popup = ~paste(country, ":", item, "(", subscale, ")"),
stroke = T,
opacity = 1,
fill = T,
color = "#a500a5",
fillOpacity = 0.09
)
# finally we can print the map:
m
This is where the project stands so far. We are now at the stage of translating the project's scales into various languages so that we can collect data for the final project.
Interested in participating in the project?
It is still possible for you to join the project. We will ask you to:
Submit an IRB application at your site (if necessary). In order to make this step easier for you, we have written an IRB submission pack, which is available on the OSF page of the project,
Translate the scales that are included in the main project via a forward-translation and back-translation method.
Administer an online questionnaire to at least 100 participants at your site (more is always possible).
Please note that to account for authorship we are using a tiered author list combined with the CRediT taxonomy. We also expect to publish a data paper from the project that will facilitate future reuse of the data. We plan to collect data from approximately 11,000 participants across the globe (we currently have 118 sites in 48 countries), to validate the STRAEQ-2 scale across countries, to measure individual differences in the way people cope with environmental threats, and to explore the links between these differences and individual differences in attachment.
This blog post was written by Olivier Dujols and Hans IJzerman.
In 2017, Chris Chartier shared a blog post that revealed a grand vision for psychology research: psychologists could build a “CERN for psychology” that does for psychology what particle accelerators have done for physics. This “CERN for psychology” would be an organization that harnessed, organized, and coordinated the joint efforts of psychology labs throughout the world to take on large, nationally diverse big team science projects.
The months after Chris’s blog went live revealed that enough people believed in this vision to start building this “CERN for psychology”. These early efforts would evolve into the Psychological Science Accelerator, a network that, according to our recent study capacity report, now spans 1400 members from 71 countries. In these early months, the PSA also collaboratively developed a set of five guiding principles, namely diversity and inclusion, decentralized authority, transparency, rigor, and openness to criticism, that form a coherent vision for the type of science we want psychology to be. We want to help transform psychology to become more rigorous, transparent, collaborative, and more representative of humanity writ large.
Now, three years after its founding, the PSA stands at a crossroads. This crossroads relates to our broad vision of what the PSA is and should be and the means through which we achieve that broad vision. This post will cover the first issue. As we will describe, we believe our early documents point to a vision of the PSA as active, rather than passive, but that a lack of funding streams constrains our ability to achieve that mission.
Minimal and maximal visions of the PSA
Although the PSA was established to coordinate the activities of research labs, there is a wide range of options for how this coordination could be implemented. The specific implementations anchor two radically different visions of the PSA: a minimal vision and a maximal vision.
Imagine a PSA that is radically different from the one we have now: the PSA as a mailing list.
This mailing list contains the contact information of people who are willing, in principle, to participate in team science projects. To use the mailing list, people design a team science study and email a description of the study to the list. People who receive a study invitation through the mailing list can freely reply to the study proposer. The mailing list itself is unregulated, so there is no vetting process for any of the emails people send over it, nor is there any support for the people who send invitations through the mailing list. This vision of the PSA is highly minimal in the sense that the PSA plays very little role in coordinating or implementing team science projects. However, this vision is also very low cost, as mailing lists are cheap to set up and almost free to maintain. In a sense, this minimalist version of the PSA already exists in the form of StudySwap — a useful tool, but not a transformative one.
Now imagine a PSA that is a bit more similar to the one we have now: the PSA as an active implementer of big team science projects.
In this vision, the PSA completes all stages of the team science research process. This means that the PSA takes partial ownership over its projects and participates in project decisions. This includes selecting projects to undertake through a rigorous review process, assisting with the theoretical development of studies, and improving on the design of studies by soliciting multiple rounds of feedback from relevant experts. In this vision, the PSA also actively coordinates and manages study teams to ensure that relevant administrative procedures (such as ethics review) are followed and to ensure that the various stages of the project occur on a reasonable schedule. The PSA also takes an active role in communicating completed projects to the world, perhaps by managing its own journal (the Proceedings of the Psychological Science Accelerator) and through its own dedicated press team. Finally, in this vision, the PSA has a variety of procedures to proactively improve its processes, including novel methods research, team science best practice research, project retrospectives, and exit interviews of PSA staffers who decide to leave the organization. This is the deluxe vision of the PSA, a PSA that is active but that requires lots of money and staff to maintain.
These two visions of the PSA — the minimal and maximal — also anchor an entire universe of in-between visions that are not so extreme. However, what is arguably true is that the vision of the PSA laid out in Chris's blog post, as well as the one implied by the PSA's five guiding principles, are both much closer to the maximal vision of the PSA than to the minimal one.
Money is necessary to implement a maximal vision of the PSA
If we accept that fulfilling the PSA’s mission requires something closer to a maximal vision of the PSA, we need to find ways to build the PSA into this more maximal vision. At a minimum, building and maintaining a maximal vision of the PSA requires people who can do the activities involved in this maximal vision. These people need to be recruited, managed, and retained, otherwise they will work at cross purposes, get into interpersonal conflicts, and burn out. In short, in addition to the people who carry out the PSA’s vision, the PSA needs an administrative structure to help these people carry out their work.
The PSA does, in fact, have a defined administrative structure. We have, for example, defined a set of committees to govern its activities and a set of roles that should be filled for each project. These roles outline an aspiration for the PSA to proactively conduct team science research — further suggesting that the PSA has a maximalist vision for itself.
These roles are many. Our recent evaluation of the PSA's administrative capacity identifies fully 115 of them. If we assume that each position requires 5 hours of work per week to complete the associated responsibilities, the 115 roles require 29,900 hours per year to staff (115 roles × 5 hours × 52 weeks). Unfortunately, maintaining this level of labor has, at times, been challenging because of our reliance on volunteers who have other daily commitments (such as jobs that pay them).
We can rely on volunteers to carry this load for a time, but doing so carries some real risks. For example, the heavy load can risk burning out the most active volunteers who take on the most labor and lead to large, costly mistakes if the labor requirements for large projects are not met. This load can also provoke interpersonal conflict if active volunteers feel that their labor is not properly recognized or credited. Finally, there are risks associated with who is able to be a volunteer: in a volunteer model, only those who can afford to will donate their time and efforts to the PSA. This squeezes out the voices and talents of some of the very people the PSA wants to elevate, such as those from Africa, Middle America, and South America. Our 2020-2021 study capacity report estimates that 90% of all people involved in administrative roles are from North America and Western Europe. The majority of these, 63% of all administrative roles, are located in North America.
Other growing organizations have managed the transition from an exclusively volunteer organization to one that is funded by some sort of income stream. We can learn from their history, which shows that the organizations that became sustainable did so by leveraging their already-existing strengths.
The first step down the path of creating a sustainably funded organization involves acknowledging that many of the positions outlined in PSA policies are best served, not as volunteer positions, but as paid positions. We must also acknowledge that the costs of paying for this labor may be high. If we assume that all 29,900 hours are paid, and paid at even a very low wage ($7.25/hour, or US minimum wage), we still get a labor cost of at least $216,775 per year. This does not consider taxes, vacation, or other overhead.
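As a quick back-of-the-envelope check of these figures (using only the assumptions stated in this post):
# back-of-the-envelope staffing cost, using the assumptions from this post
roles <- 115            # administrative roles identified in our capacity evaluation
hours_per_week <- 5     # assumed workload per role
weeks_per_year <- 52
hourly_wage <- 7.25     # US minimum wage, in USD
hours_per_year <- roles * hours_per_week * weeks_per_year   # 29,900 hours
labor_cost <- hours_per_year * hourly_wage                  # 216,775 USD per year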
Of course, it may not be necessary to pay for all 29,900 hours, either because our staffing estimates are inaccurate, or because our labor becomes more efficient when we switch to a paid model. Yet the mere act of thinking through these considerations requires recognizing that the PSA’s maximalist vision has a financial cost.
What we can be is constrained by our ability to obtain resources
The vision of the PSA outlined in its founding documents is grand. If implemented successfully, this vision could have an impact on psychology that is transformative, creating a science that is more inclusive, more collaborative, more transparent, and more robust.
Yet the PSA cannot realize this vision of itself for free. Currently, the PSA attempts to be a maximalist institution on a minimalist budget. That has worked during the PSA’s early years, but such a model may not be sustainable long-term. If we wish to implement a maximal vision for the PSA, we will need to focus dedicated energy into obtaining the funding needed for this implementation. As we will describe in a follow-up post, this will likely require developing funding streams outside of the traditional grant mechanisms to which scientists are accustomed.
— Funding Note: Patrick Forscher is paid via a French National Research Agency “Investissements d’avenir” program grant (ANR-15-IDEX-02) at Université Grenoble Alpes awarded to Hans IJzerman.
Patrick S. Forscher, Bastien Paris, and Hans IJzerman
How many resources does the PSA possess? This is a question that affects many activities within the PSA — prime among them the annual decision of how many studies the PSA is able to accept. Here are a few other examples:
People involved in the selection of studies must decide whether the PSA can feasibly support studies with special requirements, such as a proposal to conduct a multi-site EEG project or a project involving a low-prevalence (and therefore difficult to recruit) population.
People involved in writing PSA-focused grants must be able to accurately describe the size and scale of the network to make their grant arguments and planning concrete.
People involved in managing the PSA’s finances need to know the people and projects that have the highest financial need.
People involved in regional recruitment need to know how many members are currently located in a specific world region and the number of participants those members can muster for a typical PSA study.
In its first three years, we have had to rely on ad hoc sources to answer questions about PSA resources. Today, with the release of the PSA’s first study capacity report, we now have a source that is more systematic. This blog describes the logic that underlies the report, gives some of its top-level findings, and outlines what we plan to do with the report now that we have it.
How to think about and report on PSA resources
The PSA’s most basic activity is running multi-site studies, and one of the most fundamental resource-dependent decisions PSA leadership must make is how many proposals for these multi-site studies the PSA will accept. Thus, a single multi-site study provides a useful yardstick for measuring and thinking about PSA resources.
The PSA’s newly-ratified resource capacity policy takes just such an approach. It considers PSA resources from the perspective of helping PSA leadership decide how many studies they should accept in a given year. From this perspective, the most basic unit of analysis is the study submission slot, a promise by the PSA to take on a new multi-site study. Study submission slots are limited by at least two types of resources:
Data collection capacity. This is the PSA’s ability to recruit participants for multi-site studies. Data collection capacity is mainly governed by the number of PSA members located in psychology labs throughout the world. However, money can also expand the PSA’s data collection capacity; the PSA has occasionally contracted with panel-provider firms to recruit participants on its behalf.
Administrative capacity. This is the PSA’s ability to perform the administrative tasks required to support multi-site studies. Administrative capacity is mainly governed by the availability of labor, whether that labor be paid or volunteer.
The resource capacity policy also allows for the possibility of study slots that add on special requirements or evaluation criteria. These special submission slots might require, for example, that any studies submitted for consideration to that slot involve EEG equipment. Alternatively, the slots might require that the submitted studies involve investigating the psychological aspects of the Covid-19 pandemic. We will go into more detail about how we think about these special submission slots in a later post. For the time being, we simply note that assessing our resource capacity will allow us to understand the sorts of special submission slots we can accommodate.
The PSA’s data collection and administrative capacities are both in flux. The PSA’s ability to accommodate more specialized types of studies also fluctuates on a yearly basis. Moreover, the PSA is committed to the cultural and national diversity of psychology research — activities that are dependent on its reach in under-resourced countries. Accurate assessment of all these capacities therefore requires ongoing documentation of its members, member characteristics (including country of origin), and its yearly activities. Currently, our documentation happens in a shared Google Drive, Slack, the PSA’s OSF project and the subprojects for each of its studies, and the recently-created PSA member website.
According to policy, these various sources of documentation are consulted in a comprehensive way to form a complete picture of the PSA’s resources. This consultation results in an annual study capacity report, which can inform decisions and activities involving the PSA’s resources.
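To make this logic concrete, here is a minimal, purely illustrative sketch in R of how submission slots might be derived from the two capacities described above. The numbers and the simple "take the scarcer resource" rule are our own assumptions for illustration, not figures from the report or the actual policy.

```r
# Illustrative only: hypothetical numbers, not taken from the capacity report.
participant_capacity <- 20000   # participants the network could plausibly recruit in a year
admin_hours          <- 1500    # paid and volunteer administrative hours available

participants_per_study <- 5000  # assumed average sample size of one multi-site study
admin_hours_per_study  <- 400   # assumed administrative overhead per study

# A study slot is only usable if both bottlenecks allow it,
# so the number of slots is limited by the scarcer resource.
slots_by_data  <- floor(participant_capacity / participants_per_study)
slots_by_admin <- floor(admin_hours / admin_hours_per_study)

study_submission_slots <- min(slots_by_data, slots_by_admin)
study_submission_slots  # 3 in this toy example: administrative capacity is the binding constraint
```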
Findings from the first study capacity report
The first PSA study capacity report is large and comprehensive. Here are some big-picture findings:
The PSA currently has 1,400+ members from 71 countries.
Six of the PSA’s seven studies are still collecting data.
Based on our past data collection capacity, we have the ability to recruit a minimum of 20,000 participants over the upcoming scholarly year for new PSA projects.
Two out of three PSA members come from North America (24%) and Western Europe (41%).
We do not have sufficient information to accurately estimate the number of administrative hours available for each PSA role.
However, these big-picture findings hide a lot of detail that may be important for PSA decision-making. For example, here are a few additional tidbits that come out of the report:
The number of PSA member registrations almost tripled as a result of the COVID-Rapid project.
At the time of the report’s writing, and excluding PSA007, the PSA will need to recruit 30,000 participants to complete its active roster of projects.
About 20% of the PSA’s membership have a social psychology focus area.
About 85% of people in active PSA administrative roles are located in North America (63%) and Western Europe (21%).
If you’re interested in digging into more of these details, you can find the full report here.
What’s next?
As outlined by policy, the main purpose of the report is to inform decisions about how many studies the PSA can accept in the next wave of study submissions. Thus, an important next step for this report is for the upper-level leadership to use the report to come to a decision about study submission slots.
However, the study capacity report has already catalyzed a number of ongoing conversations about what the PSA is, what it should be in the future, and how the PSA should go about meeting its aspirations for itself. Some of these conversations have resulted in their own dedicated blog posts, which will be posted to the PSA blog in the next few days.
In the meantime, we welcome your thoughts about the PSA’s study capacity and related issues. We believe that compiling this report has been a useful exercise precisely because the process has inspired so many productive conversations about the PSA’s direction and goals. This reinforces our commitment to maintaining this reporting structure in future years.
— Funding Note: The study capacity report was made possible via the work of Bastien Paris; his internship at Université Grenoble Alpes is funded by a grant provided by the Psychological Science Accelerator. Patrick S. Forscher is paid via a French National Research Agency “Investissements d’avenir” program grant (ANR-15-IDEX-02) awarded to Hans IJzerman.
Note: We need feedback from potential translators and researchers who need translation services. If this describes you, fill out one of the two surveys below!
Psychological science is dominated by researchers from North America and Europe. The situation in Africa exemplifies this problem. In 2014, just 6 of 450 samples (1.4% of the total) in the journal Psychological Science were African. In Africa, language issues exacerbate the more general problem of underrepresentation; only 130 million out of 1.3 billion Africans are proficient in English, despite 24 out of the 54 countries having English as their official language.
We propose a paid translation service that can help overcome this problem. Our service will translate across many languages, but we will specialize in translations between English and African languages. Such a service can both help local African researchers access English-speaking people as research participants and allow English-speaking researchers to access over one billion Africans (~12% of the world population) as participants.
Furthermore, a paid translation service can help provide resources to translators who may lack them, such as those in Africa. We expect to draw many of our translators from among the ranks of African researchers whose universities cannot provide them with sufficient salary or research money. Because the Psychological Science Accelerator has both a need for translation and experience with the full translation process, we propose that this paid translation service become a formal part of the Psychological Science Accelerator. In this blog post, we link to surveys for users (typically researchers) and service providers (typically translators) to investigate whether sufficient demand and supply exist for such a service. Because we plan to focus on African languages, the remainder of this post focuses on Africa. However, we invite researchers and translators who focus on other languages to complete the two surveys at the top of the post, as we will also offer general translation services.
Why is there a need for a paid translation service?
A paid translation service can help build a network of translators proficient in African languages and coordinate the translation process. Particularly for the underrepresented and under-resourced African continent, a paid translation service can improve the efficiency of translation, quickly identify and resolve roadblocks in the translation process, and ensure timely delivery. A paid translation service resembles a business venture, much like existing commercial translation companies, and can bring financial, social, and educational benefits.
The Envisioned Partnership with the Psychological Science Accelerator
To ensure the highest-quality content, we envision this paid translation service being integrated into the Psychological Science Accelerator (PSA) as a service provider. The PSA is the largest network of researchers in the field of psychological science. At present, the PSA consists of 1,021 member researchers in 73 countries. With this huge network of researchers, we envisage a steady demand for the paid translation service. Of course, researchers who are not members of the PSA should be able to use the service as well. It is unlikely that this initiative can be started without a financial investment. Any business venture requires some risk-taking, and thus investment, particularly if we want to ensure the involvement of African researchers.
From a business point of view, a paid translation service can be a source of revenue for the PSA. Additional benefits may include expanding the network to underrepresented populations in Africa (thereby promoting diversity and inclusion), creating a pool of experienced translators with which the PSA can work, paying (fair) wages to support African researchers, and improving the generalizability of psychological science. Indeed, a paid translation service can help connect researchers from richer countries (such as those in Europe and North America) with African researchers to conduct research in currently understudied areas.
The process: users and service providers.
We can think of translation service operations in terms of the supply of and demand for translation services. On the demand side are the users: the buyers of the service provided by translators. They could include researchers, research institutes, companies, and any other entity interested in accessing translation services. They initiate and manage the research projects and provide the funds for the translation service. On the supply side are the service providers: the organizations that run the translation operations. This is the main supply unit, which manages the networks of translators and provides expertise, logistics, and financing.
What’s in it for users?
Users interested in solving generalizability problems in psychology can rely on the PSA’s expertise to conduct crowdsourced studies. For instance, the PSA has already developed a translation procedure employed for the PSACR studies. Conducting research of this magnitude required translating research materials, especially when trying to reach a multilingual population like Africa’s. Africa is vast and has many local languages; relying only on English will probably reach only the more elite and more highly educated populations. However, unless all translations are simply provided for free, this requires money to ensure fairness between users and service providers. With a paid translation service, researchers will benefit from access to a high-quality service from a network of translators at an affordable cost and with timely delivery. As a side benefit, such a translation service can help connect users with local researchers to help with data collection.
To understand the demand for this service, we have created a brief (< 5 minute) survey which can be found here.
What’s in it for service providers?
Despite the translation infrastructure at the PSA’s disposal, efforts to translate the PSACR research measures resulted in translations into only 2 (Yoruba and Arabic) of the thousands of indigenous languages in Africa. With a paid translation service, there are benefits for service providers. First of all, fair wages can be provided to African service providers. Second, those who are interested in participating beyond translation can become co-authors of high-quality research projects and receive training in the newest open-science practices and psychology concepts.
Envisioned procedure of the PSA Translation Service
How will the translation service work?
The translation service is an integral part of the supply system. It consists of groups of experts who work directly with the service provider and produce the translated copy of the original materials. This process will require translators, translation coordinators, and implementers. The primary tasks are to translate research materials, recruit and train contacts, and implement the translated materials. We also envision that translators can become involved in data collection if desired by both parties.
Translation will occur in the following steps:
The user provides an English version (or, if translators are available, another language such as French, German, or Dutch) of a study or measures in .qsf, .pdf, .doc, or another file format to the PSA.
The PSA, through its internal working mechanisms, identifies the translation needs and transmits the original document to the translation service.
The translation service translates this file into the target languages, and coordinates and implements the translated survey in the target software.
The translation would be financed by the users/customers. Payment would be made through the PSA to the translation service. Translation services are usually charged per word and can vary between $0.08 and $0.28 USD per word. Aside from time spent, the number of words to translate, and the number of languages to translate into, a translation service that deals with research materials can be costlier. To price these services, we have to consider a number of factors, such as validation (i.e., back translation), editing and proofreading, specialty, urgency of the job, scarcity of translators for a given language pair (e.g., translators from English to African languages are less scarce than translators from French to any African language), coordination and implementation, and payment to the PSA as the service provider.
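To give a rough sense of how these factors add up, here is a small, hypothetical costing sketch in R. The per-word range comes from the paragraph above; the word count, number of languages, and surcharge are made-up numbers for illustration only.

```r
# Hypothetical quote for translating one study's materials.
words         <- 3000                            # length of the survey and instructions
n_languages   <- 2                               # e.g., two target languages (illustrative)
rate_per_word <- c(low = 0.08, high = 0.28)      # USD per word, range mentioned above

base_cost <- words * n_languages * rate_per_word

# Assumed surcharge for research-specific work (back translation,
# proofreading, implementation in survey software, PSA coordination).
surcharge  <- 0.40                               # +40%, purely illustrative
total_cost <- base_cost * (1 + surcharge)

round(total_cost)  # roughly 670 to 2350 USD in this toy example
```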
Call for translators
We are gathering information for the translation service from translators who are proficient in English and at least one non-English language. We are especially interested in recruiting translators from the African continent, though we welcome the input of translators from other countries as well. This search for information includes the cost and availability of translators. We are looking for the opinions of academics, researchers, students, and other interested persons in the fields of the social sciences or linguistics with some experience in translation, as well as professional translators.
We invite people who are interested in translating for payment to fill in our survey here.
October 2, 2020, Eiko Fried gave a talk via Zoom for our department (laboratoire in French) LIP/PC2s at Université Grenoble Alpes, discussing how lack of theory building and testing impedes progress in the factor and network literature. The abstract of his talk is available here and you can see his talk below!
Today, September 16, 2020, we organized the yearly lab philosophy/workflow hackathon of the CORE Lab. To get all the lab members on one page and to reduce error as much as possible, we have a lab philosophy that is accompanied by various documents to facilitate our workflow. But research standards evolve and we also often notice that the way we conduct our research is not as optimal as we would like it to be. As new tools seem to be created daily, we can also not keep up with new developments without hurting our research. And it is important that those who do research daily provide input in the procedures they work with. That’s why we decided on a meeting once a year, where all lab members can have their say in how we are going to work that academic year. The new product is available here.
All lab members (Patrick Forscher, Bastien Paris, Adeyemi Adetula, Alessandro Sparacio, and Olivier Dujols) submitted to Hans in a direct message via our Slack what they like, what they don’t like, what they want to add, and what we should do better (in relation to what is already in the lab philosophy).
Hans gathered all the information, organized them by category (commented on some), and then posted them in a public channel in Slack again.
All lab members got a chance to discuss each other’s suggestions via Slack.
Hans created a Google Doc and organized the information by category.
This morning and after lunch, we discussed points that we were still not entirely in agreement on or that we thought were important enough to discuss in more detail (e.g., should we write a section on COVID-19? Should we include standard R Snippets on our OSF page? Do we think our Research Milestones Sheet is important, and, if it is, how do we ensure frequent updates? Do we want to keep our Box.com account? Do we want to start integrating GitHub for version control of our data and analysis scripts? Do we want to participate in the paper-a-day challenge or do we want to encourage regular reading in a different way? For all of these: how do we ensure adoption as well as possible?). We also went through a list of new tools that we had found via various sources (usually via Twitter, thanks Dan Quintana), and decided which ones we wanted to adopt.
Then lab members claimed tasks they wanted to complete: writing new sections, getting rid of obsolete ones, or adding information to our workspace. All lab members also reviewed our lab canon.
To get a sense of community that we usually miss, we ordered food and ate “together” via Skype.
Once the document was finished, Patrick and Hans reviewed the changes and made final decisions.
Here are some things we really liked (this is a non-exhaustive list):
Our code review section.
Our section on work-life balance.
Our clear communication of lab values.
Our test week data.
Our daily stand up sessions.
Here are some of our major changes (again, a non-exhaustive list):
We expanded our onboarding section.
We changed our Research Milestones Sheets (RMS) and added some tools. For example, by default, we will now use Statcheck to check p-values in our articles and use Connected Papers to help with our literature reviews. We also created a new way to ensure regular updating by integrating the RMS into the weekly meeting agenda of the PhD students (and emphasized that people can earn rewards four times a year if other lab members do not earn theirs).
We already had a journal club and a weekly writing block but had not described them in the document. They have now been added.
We lagged in regularly posting releases on our website. We will now do monthly releases of our products (e.g., articles, interviews, blog posts, and presentations) and have appointed lab members who are responsible for checking for updates. Bastien Paris will ask lab members for their updates, and Bastien and Olivier Dujols will post them. Hans will write blog posts to summarize what we release.
We removed information on running studies in Grenoble and moved that to a private component on the OSF.
For the coming year, we will try to integrate Zotero in our workflow. Patrick Forscher will give a brief workshop on how to create and share libraries. Each member of the team will then generate their own libraries, after which we will use that to create various joint libraries.
We decided to add a section on the TOP Factor. In the past, we had a section on favoring open access, but the new TOP Factor allows us to articulate our values more clearly (thanks, COS). Flexibility for non-PIs still exists to submit to journals with high JIF, but we felt that articulating our preference for journals with high TOP Factor communicates our values and hopefully changes matters in the long run.
We got rid of our “sprint planning”. While it sounded like a great idea to specify medium-term goals, in practice the sprints did not work well and did not add much beyond the daily standups and weekly one-on-one meetings.
We decided against the paper-a-day challenge, but instead we created a Google Sheet where we all report on one article we read per month (starting with those from the lab canon).
We re-emphasized the importance of using our presentation template and appointed Alessandro Sparacio to ask lab members to send him their presentation to post on our lab GitHub.
In the coming two weeks, we will also update our Research Milestones Sheet for our existing projects and use Tenzing to clearly specify contributor roles in our existing projects. We still struggle with quite a lot. How, for example, can we better discuss our longer-term goals outside of journal club and our daily standups? (We feel we miss the face-to-face contact where we can have discussions outside of work.)
Once a month, the CO-RE lab organizes a journal club. Before each journal club, all journal club members have the option to propose one or two articles for the group to read in advance. The articles may be about any topic related to the CO-RE lab’s shared interests of interpersonal relationships, meta-science, and research methods (see our lab philosophy for more details), but any science-related article that will generate a good discussion is permitted.
During the journal club itself, the journal club member who proposed the article gives a short summary and asks some discussion questions. The journal club’s discussions are conducted in English and moderated by CO-RE lab member Patrick S. Forscher.
Because we have moved our journal club online, we can invite two under-resourced researchers to be part of our journal club for the 2020/2021 academic year. To see the articles we have discussed in the past, you can consult our Journal Club Overview Sheet. As we understand that Internet access may be a challenge for under-resourced researchers, we will support the two selected researchers with 100 Euros to pay for Internet access (half paid up front, half paid at the end).
There are some conditions to being part of our journal club and receiving the 100 Euros:
You have no grants of your own.
You need the support to be part of international networks.
When selecting the two researchers, we will give priority to those working in countries with a lower GDP per capita (according to the World Bank’s data).
To apply to be part of the CO-RE Lab journal club and receive the funds for Internet access, we ask you to fill in this form. In the form, we will ask the following information:
One sentence why you want to be part of our journal club.
A commitment to being part of the journal club until the end of the next academic year (2020/2021 by the European Academic calendar).
An article that you would like to discuss during journal club (we want to know your interests!). We are very open to discussing articles outside the North American/European mainstream (see for example this article by Ojiji we discussed).
A cv that you can upload, so that we know your background.
If you participate in our journal club, we will grant you affiliate membership of the CO-RE lab, we will make ourselves available for advice (if you’d need it), and you get access to our Slack. If you have any questions regarding this opportunity, please contact Patrick Forscher (schnarrd@gmail.com) or Hans Rocha IJzerman (h.ijzerman@gmail.com). The offer is open to researchers at all levels (from undergraduate to professor).
To try to get everyone on one page in our lab, upon my arrival in Grenoble I wrote a “lab philosophy” (maybe “written” is too big a word: the first draft was heavily inspired by Davide Crepaldi’s lab guide, which was in turn inspired by Jonathan Peele’s lab guide). This lab philosophy is complemented by an OSF workspace that includes some useful R code, shared data (hidden from public view), the CRediT taxonomy to identify contributorship within our own lab, and a study protocol for social thermoregulation. When you want to do (open) science as well as possible, it is important to have some kind of shared understanding. That means, for example, creating templates so that master students, PhD students, and postdocs know what I have in mind when I want to do research. We have created, for example, a template for exploratory and one for confirmatory research (we know that this dichotomy is overly simplistic, but it at least helps students structure their research). We have also started using these templates in our teaching (for example, to help master students understand how to structure their project on the Open Science Framework). Creating the lab philosophy also allowed me to outline my approach to research and what students can expect from me.
However, it does not only allow me to communicate what I expect. It also allows the people I work with to correct the process or get rid of tasks that may seem overly burdensome, not very useful, or too bureaucratic (after all, I heard that the French don’t like bureaucracies…). Every year in September we get together to revise our lab philosophy, so the current draft is more a collaborative document than my lab philosophy. The goal of this post is therefore mostly to document that process. Here’s what we do:
Each member identifies around three things in the lab philosophy that they do not find useful/outdated.
Each member identifies around three things in the lab philosophy that they find very useful and absolutely want to keep.
Each member identifies around three things that they would like to add to the lab philosophy.
Following this process, the PI (me) collates all the information and puts it up to a vote amongst all the lab members. We then get together (usually with good coffee; “together” this year will probably be a Skype call) to update our lab philosophy and to commit to our new way of working for the coming year. This also means reminding ourselves of things that we didn’t do sufficiently yet (rereading last year’s update, I see that we did not adopt collaboration agreements, for example, and that we need to do a better job updating our Research Milestones Sheet) and of things that we think are going really well. This year, we will also write a post describing what we have done during our update.
Perhaps you have a lab philosophy yourself. Or, you have comments/critiques/compliments on our lab philosophy. Post your comments and the links to your lab philosophies here, so that we can collect them, so that we can read them, so that maybe we can steal some of your practices, and so that other people who read this post can find lab philosophies other than ours.
Psychological science should be a truly global discipline and psychologists should be poised to understand human behavior in any kind of context, whether it is urban or rural, developed or underdeveloped, WEIRD (Western, Educated, Industrialized, Rich, and Democratic) or non-WEIRD. To arrive there, we need to ensure that 1) researchers from those different contexts are included, and 2) researchers from those contexts adhere to the highest standards in scientific research. Do we, as African researchers, do enough for the credibility and acceptability of African psychology?
To answer this question, we will first analyze an article by the late Ochinya Ojiji – a well-known Nigerian scholar. We then argue that adopting open science initiatives can start to answer Ojiji’s call for greater rigor amongst Nigerian and, more broadly, African researchers. Such greater rigor can, in the end, help ensure that we have an equal and relevant voice in psychological science. This voice of Nigerian and other African researchers is essential for developing more mature psychological theories that generalize across various contexts.
The state of Nigerian psychology according to Ochinya Ojiji
In 2015, Ochinya Ojiji, a Nigerian social psychologist who worked for 28 years in Nigeria at the Universities of Jos and Uyo and at the Institute of Peace and Conflict Resolution in Abuja, reflected on how psychology as a science had fared in Nigeria over the past five decades. In the very beginning, Nigerian researchers were taught by Western psychologists. Though only very few psychology departments had been established during this era, Nigerian psychology was taught as a proper scientific field (and recognised as such), with the necessary facilities to conduct rigorous scientific research (like labs for experimental research). The considerable Western influence meant that early Nigerian psychologists maintained relatively high standards in conducting their research. In the two decades that followed, Western influence decreased. Ojiji remarked that the decline of Western influence meant a rise in various unwholesome practices and activities led by homegrown psychologists. These practices include the proliferation of substandard and profit-oriented local journals, little quality control of curricula at universities, and the under-par teaching by senior academics who “shuttle” between neighboring universities, which ultimately affects the quality of students’ training.
Ojiji mostly presents these as observations, and no data are reported on the frequency of the various practices. It is thus unknown to what extent they influenced Nigerian psychology. Despite these shortcomings, we feel comfortable relying on Ojiji’s experience as a one-time Editor-in-Chief of one of the most important journals in Nigeria, as an external examiner for a number of universities, and as someone who has taught in most parts of Nigeria, to observe and characterize Nigerian psychology as a “folktale psychology”. To counter the issues he observed, Ojiji called for the NPA, the Nigerian regulating body for psychology (which is, unfortunately, not backed by formal legislation), to improve quality-control standards. Yet at present, it is unclear from Ojiji’s writings how we can implement his suggestions to improve the quality of Nigerian (and perhaps African) psychology.
It should be clear that the unguided practice of psychology as a science in Nigeria has put us off track and is thus hurting the quality of our science. To improve the quality of our science and to develop a higher-standard indigenous psychology, we can look to our colleagues from North America and Europe and the recently emerging “open science” movement to help inspire some much-needed reform in African psychological science.
Open science: an opportunity for African psychological science
Although African countries and African research are quite heterogeneous with respect to their educational structures and local realities, institutions in individual African countries share some common challenges, such as a lack of international recognition, a lack of funding, limited resources and facilities, limited legislative backing, and so forth. However, it is important to note that some of the problems Ojiji pointed to have one thing in common: there seems to be a lack of verifiability and responsibility within Nigerian psychological science. This was a problem in North American and European psychology as well, and researchers there are starting to fix it.
One of the central tenets in the open-science movement is to increase verifiability and responsibility. The UK Royal Society’s motto illustrates this well: nullius in verba (take no one’s word for it). The open-science movement is quickly growing in Europe and North America and this movement presents unprecedented opportunities for Nigerian and African researchers.
Specifically, researchers in the open-science movement make research articles freely available on preprint servers, share their data and analysis scripts, and create helpful resources for learning how to improve one’s research. Also interesting for improving the quality of science is the emergence of Registered Reports, where researchers submit the methods of an article before data collection (and where publication does not depend on the significance of the results).
We believe that adopting open-science practices can answer some of Ojiji’s concerns and can vastly improve Nigerian, as well as African, psychological science. Participating in the open-science movement arguably presents the biggest potential to level the playing field between North American/European and African psychology. It also offers African researchers a global platform to practice credible science and to help shift the perception in at least some African countries that psychology is not a science. But what are some ways to start practicing open science?
How can African researchers engage in open science?
There are various initiatives that African researchers can turn to that we have outlined before and we will provide a (non-exhaustive) list here:
Open access
Preprint servers. There are now various preprint servers available (like AfricArXiv, PsyArXiv), which allow researchers across the world to freely access scientific research. In our lab, the CORE Lab, we submit a preprint to such a server upon article submission (see our lab philosophy here). Preprint servers allow researchers to share their newest work, without a long turnaround from a journal.
Sci-Hub. This website collects the login information from various universities to allow access to many articles that would otherwise be behind a paywall. Sci-Hub is not legal in most countries, so we would never dare to recommend its use, especially in African institutions that cannot always muster the high fees to pay for journal subscriptions.
(Free) tools
The Open Science Framework (OSF). The OSF is free and allows researchers to share their data, materials, and analysis scripts. It also allows researchers to “pre-register” their hypotheses. It further allows for easy collaboration between researchers across the world (researchers also make templates available for others to build on, like our lab does here).
R and RStudio. R is a programming language used for statistical computation and analysis; it is useful for writing your analysis scripts. It has many advantages over software like SPSS: it is free, and working with scripts allows for much better verification (see the brief example after this list). RStudio is a supporting program that greatly facilitates the use of R. Note that R is not easy, but there are some excellent resources online to learn it (like this course by Lisa DeBruine, which was translated into French and is available here).
GitHub is a free service that supports transparent and verifiable research practices. It allows you to publicly archive research materials, allows for much easier collaboration between researchers, and, importantly, permits good version control.
Code Ocean is a research collaboration platform that supports research from the beginning of a study through to publication. Code Ocean provides you with the necessary technology and tools for cloud computing and built-in best practices for reproducible studies. For example, if you run analyses on a different platform or with a newer version of an R package, results may vary; Code Ocean allows you to directly reproduce the results as originally planned.
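As promised in the R and RStudio entry above, here is a minimal example of the kind of small, shareable analysis script we have in mind. It uses only base R and one of R’s built-in datasets, so anyone with R installed can run it and verify the output for themselves.

```r
# A tiny, fully reproducible analysis script using R's built-in 'sleep' dataset.
# Anyone can run this and obtain the same numbers; no paywalled software is needed.

data(sleep)                      # effect of two soporific drugs on 10 patients
summary(sleep)

# Compare the increase in hours of sleep between the two drug groups.
t_result <- t.test(extra ~ group, data = sleep)
print(t_result)

# Save the output so it can be shared alongside the script (e.g., on the OSF).
capture.output(t_result, file = "sleep_t_test.txt")
```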
Global networks
The Collaborative Replication and Education Project (CREP) is a crowdsourced initiative that is focused on conducting replications by students. The CREP is a pretty unique learning opportunity for people interested in open science, as it has established templates and extensive quality control from researchers around the world. They would be very happy to support African researchers.
The Psychological Science Accelerator (PSA) is a network of over 500 labs from over 70 countries across 6 continents conducting research studies across the globe. With an emphasis on open science practices and different research roles (such as translating test instruments, collecting data, et cetera), the PSA currently presents arguably the most accessible opportunity for African researchers to participate in international studies, such as the ongoing COVID-19 rapid studies with some African collaborators (we wrote about the PSA before). Note that current participation from Africa is modest, so this is where there is a real opportunity for African researchers.
ReproducibiliTea. With over 90 institutions spread across 25 countries, this grassroots journal club initiative provides a unique and supportive community of members to help young researchers improve their skills and knowledge in reproducibility, open science, and research quality and practices. You can organise your own version of it.
How North American and European researchers can support African scientists
But without structural changes to the way science works, open science will not by itself level the playing field, as access for African scholars remains difficult. Some structural changes in the way psychological science currently operates will be necessary to support African researchers in becoming part of the process. We again provide a (non-exhaustive) list here:
Representation in formulating and implementing science policy. At present, only one person in an initiative like the PSA is located on the African continent. For global network initiatives such as the PSA, the representation of African researchers must be enshrined in their policies. African researchers should also become intimately involved in implementing these policies (and thus not only be involved in data collection for the PSA). This can start with African representation on the board and in various committees.
The waiving of fees, and communicating this in associations’ policies. Some organizations, such as the Psychological Science Accelerator, allow reductions or complete waivers of obligatory fees. Other organizations (like SIPS, SPSP, APS) should strongly consider offering free memberships to African researchers and make this explicit, as these fees may discourage African researchers who cannot afford them.
Recognition of the realities of third-world countries. When doing crowdsourced research or writing research collaboration documents, one must also factor in the realities of third world countries. African researchers are systematically disadvantaged in these endeavors due to their lack of access to the same level of infrastructure. For example, many African researchers do not have sufficient internet to co-write a manuscript on a fast schedule. Collaborating on constructing materials can also be a challenge. Providing African researchers with a few hundred Euros per year to pay for things like reliable internet can considerably reduce this systematic disadvantage.
The training of research collaborators. Due to lower levels of science infrastructure, African researchers do not have the same training opportunities as researchers in other regions. This manifests itself in their lower levels of experience with initiatives such as open science. Providing access to training materials, preferably in indigenous African languages, can go a long way toward reducing or eliminating this training gap.
Dissemination of research. AfricArXiv already exists. This is an excellent initiative, and continued support for this preprint server would mean a lot for African scientists. In the same vein, paid journals in psychology should allocate a number of free open-access articles per year for African researchers.
Funding. IRBs in Africa are often not cheap: they require a fee, which is often prohibitive for conducting research. Providing research grants to cover Institutional Review Board (IRB) fees, data collection expenses, et cetera would go a long way in facilitating the research process and overall success.
Facilitation of research visits to universities. By inviting African researchers to your institute, they can benefit from the facilities at your university and they can become an equal partner in your research process.
Journal audits. Journals should examine how many submissions they receive from Africa and how many articles are accepted, and, if the numbers are low, implement policy to counter that.
Conclusion
African and Nigerian psychology should become a normal part of the research process, if we are to understand humans the world over. Researchers in psychological science have pointed to the need for generalizability and for that to happen, we need to be there. However, in order to get there, Nigerian and African psychologists need to raise their standards and North American and European researchers can support us in achieving our goals. Now is the time to become a vital part of psychological science, as open science presents us with unprecedented opportunities.
There are likely additional open science resources that we have missed. Please add them in the comments or shoot me, Ade, an email (adeyemiadetula1@gmail.com). We will update this blog and credit you for your contributions. Additions will be especially helpful as we will translate this blog into various languages.
This blog post was written by Adeyemi Adetula, Soufian Azouaghe, Dana Basnight-Brown, Patrick Forscher, and Hans IJzerman
The coronavirus disease 2019 (COVID-19) outbreak has had a massive impact on our lives. The lockdown forced an abrupt change of habits by imposing severe limitations on personal freedoms. The measures taken against COVID-19, such as the lockdown, may well affect people’s mental health. A general population survey in the United Kingdom (with over a thousand people) revealed widespread concerns about the effects of the current situation on people’s levels of anxiety, depression, and stress (Holmes et al., 2020; Ipsos MORI, 2020).
If the lockdown affects you in a way that puts you at risk of developing mental health issues such as anxiety, depression, or excessive stress, you would probably want to find a strategy to regulate those states. One way to do so that seems especially suited to the current situation, as it does not require large spaces and can be practiced comfortably at home, is self-administered mindfulness. Self-administered mindfulness is a type of meditation that consists of increasing attention to and awareness of the present moment, with a non-judgmental attitude (Brown & Ryan, 2003). Most awareness exercises like self-administered mindfulness are based on the same idea: each time the mind wanders, the attention is gently brought back to one’s breath or bodily sensations.
Usually, mindfulness interventions are parts of large programs (which can last 8 weeks) requiring the presence of a qualified instructor. However, self-administered protocols can be engaged in via self-help books, smartphone apps, computer programmes; you don’t always need to be with others to learn and practice mindfulness. These interventions share features such as a non-judgmental attitude and an acceptance of inner experience with other mindfulness protocols. In contrast with other protocols, however, self-administered mindfulness does not require the presence of an instructor, is available 24/7 and is less costly (Spijkerman, Pots, & Bohlmeijer, 2016). Some studies suggest that self-administered mindfulness improved symptoms of perseverative thinking, stress, and depression for a group of students (compared to a passive control group; Cavanagh et al. 2018).
Despite the study by Cavanagh et al. (2018) yielding a positive result, it is still uncertain whether there is actually solid evidence supporting self-administered mindfulness. It is no secret that many sciences, including psychology, are affected by some “viruses” that infect the quality of our science in many ways: publication bias (the fact that positive results have a higher probability of getting published) and questionable research practices (a general term for various techniques used to obtain significant results that may not actually represent valid evidence). And this can be consequential. Fanelli (2010), for example, estimated that psychology’s and psychiatry’s published findings contain over 90% positive results, a statistical impossibility, as the literature is not sufficiently powered to detect findings at that rate. This means that the psychological literature is very likely to contain unreliable findings and that the findings that are stable are likely overestimated in their “effect sizes” (a statistical concept that reflects the magnitude of the phenomenon of interest).
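To see why a rate of more than 90% positive results is so implausible, here is a back-of-the-envelope calculation. The inputs are assumptions we chose for illustration (not estimates from Fanelli or anyone else), but even under these fairly generous assumptions the expected share of positive results is nowhere near 90%.

```r
# Back-of-the-envelope check; the inputs below are assumptions, not estimates from the literature.
p_true_hypothesis <- 0.50   # assumed share of tested hypotheses that are actually true
average_power     <- 0.35   # assumed average statistical power of published studies
alpha             <- 0.05   # conventional false-positive rate

# Expected proportion of significant ("positive") results if everything were reported:
expected_positive <- p_true_hypothesis * average_power +
                     (1 - p_true_hypothesis) * alpha
expected_positive  # 0.20, i.e. about 20% -- nowhere near the 90%+ that Fanelli reports
```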
The extent to which these viruses have penetrated the self-administered mindfulness domain is uncertain. We are far from saying that self-administered awareness interventions are useless, but some caution is needed before we can say that self-administered awareness has demonstrated efficacy in reducing people’s stress levels. A systematic review of the literature or a statistical summary of the findings (i.e., a meta-analysis) is necessary to assess the extent to which this strategy is affected by publication bias and to provide an estimate of the true effect behind this type of meditation. We have reasons to believe that existing meta-analyses (e.g., Khoury et al., 2013) did not do this correctly (see, e.g., this post by Uri Simonsohn if you are interested in why). Alternatively, a Registered Report (a type of research protocol in which the manuscript is pre-registered and receives peer review before data collection) could help establish the efficacy of self-administered mindfulness. A Registered Report is the best “vaccine” against the research problems we outlined before, for two reasons: 1) the editor commits to publishing even non-significant findings (discouraging the use of QRPs), and 2) if negative results are also published, it is possible to compute a reliable estimate of the effect size of interest. As of yet, however, a Registered Report that investigates the efficacy of mindfulness in regulating stress levels is missing.
That means that, at the moment, the answer to the title is: we don’t know. As an intervention, self-administered mindfulness currently has a low “Evidence Readiness Level”: we have no way of knowing whether self-administered mindfulness is a reliable intervention against any stress, depression, or anxiety that people may be experiencing as a result of the current situation. And even if we detect that self-administered mindfulness has worked in the populations that were tested, it is also pretty likely that we don’t know how this works across the world: the intervention has not yet been tested across many different populations (and I know of no research that tests it during a global pandemic).
To investigate this conundrum, part of my PhD project aims to shed light on the potential use of self-administered mindfulness for stress regulation and for the affective consequences of stress. We will employ a stringent analysis workflow (including multilevel regression-based models and permutation-based selection models) to test for publication bias in various ways. In other words, I will be able to let you know shortly whether self-administered mindfulness has the benefits for emotion regulation that it is currently claimed to have. Although I would have loved to give you an answer with greater certainty, it is simply too early to tell. As a scientist, I would be remiss to say otherwise.
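As a rough illustration of what regression-based and selection-model checks for publication bias can look like, here is a generic sketch using the metafor package and simulated data. This is not our actual analysis code or data; it simply shows the kind of tools such a workflow builds on.

```r
library(metafor)
set.seed(42)

# Simulate 40 hypothetical study results (this is NOT mindfulness data).
k  <- 40
vi <- runif(k, 0.01, 0.10)                          # sampling variances
yi <- rnorm(k, mean = 0.20, sd = sqrt(vi + 0.02))   # observed effects around d = .20

res <- rma(yi, vi)                                  # random-effects meta-analysis

regtest(res)                                        # Egger-type regression test for funnel asymmetry

selmodel(res, type = "stepfun", steps = 0.025)      # simple selection model with a cutpoint at p = .025
```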
This post was written by Alessandro Sparacio & Hans IJzerman
Starting in October 2019, I – Olivier – have gone to the Netherlands twice to record the peripheral temperature of partners in couple therapy. In a previous blog post, I explained the basic dynamics of romantic relationships and how couples can enhance their feelings of connection and safety through Emotionally Focused Therapy (EFT). In this blog post, I will discuss how and why we investigate temperature responsiveness during so-called Hold Me Tight weekends to further enhance connection and safety in relationships.
What is Hold Me Tight?
Let’s first explain the concept of Hold Me Tight (HMT). HMT programs are short versions, either online or in person, of the EFT protocol (which I described in the previous post). After HMT, couples typically proceed to a longer and somewhat more formal EFT protocol. In-person HMT is a program that lasts 3 days (over a weekend) and usually involves about 10 couples who are supported by therapists trained in EFT. The HMT weekend is standardized so that the program is similar across couples and across time. The HMT program starts with an introduction aimed at understanding love and attachment as they are understood in research. Then couples go through the following seven chapters:
Recognizing Demon Dialogues: During this conversation, partners identify the negative cycle they enter when arguing (e.g., one is blaming and the other is emotionally closed). Identifying the root of the problem is the first step that will help them figure out what each other is really trying to say.
Finding the Raw Spots: Next, partners learn to look beyond their immediate impulsive reactions. They discuss and exchange the negative thoughts (e.g., “When you say that, I think you are going to leave me”) that emerge during their negative cycle.
Revisiting a Rocky Moment: This conversation is about defusing conflict, and building emotional security. Partners analyze a specific conflicting situation using what they learn during the previous two conversations.
The Hold Me Tight Conversation: During this conversation partners practice how to be more responsive to each other. They learn how to be more emotionally accessible, more emotionally responsive, and more deeply engaged with each other. They talk about their attachment fears and try to name their needs, in a simple, concrete and brief manner. This is usually considered the main conversation of the weekend.
Forgiving Injury and Trusting Again: In order to offer forgiveness to each other, partners are then guided to integrate their injuries into the couple’s conversations. To do so, partners reminisce and discuss moments when they felt hurt.
Bonding Through Sex and Touch: Partners discover that emotional connection creates great sex, and that, in turn, a more fulfilled sexual life increases their emotional connection to each other. They discuss what makes them want to have sex, or not, and if they feel secure having sex, or not.
Keeping Your Love Alive: In this last discussion partners are looking into the future. After understanding that love is an ongoing process of losing and finding emotional connection, couples are asked to plan rituals in everyday life to deal with their negative cycle. They summarize what they did during the weekend, talk about their feelings, and discuss how they will implement in their daily lives what they have learned.
These chapters focus on how couples can consciously recognize their attachment dynamics. But what if there is more about attachment?
Recognizing our inner penguin dialogues: why temperature may matter for partner responsiveness
In the previous post, we talked a bit about the research John Gottman and his colleagues did on “coregulation”. We have taken a keen interest in this concept, but we depart from a radically different assumption: we try to understand if and how partners’ temperature regulation influences their feeling of safety. To investigate this, I regularly travel to the Netherlands to visit Berry Aarnoudse and Jos van der Loo who organize HMT weekends. During those weekends, I record both partners’ peripheral temperatures.
The immediate objective of this ongoing research is to link peripheral temperature recordings to participants’ answers to psychological questions about their emotions, their feelings of safety, and their perception of the dynamics in their relationships. We suspect that partners’ individual responses to the questionnaire are related to “signature” variations in peripheral temperature at the couple level.
The procedure for this study is as follows. Before I go to the Netherlands, I typically ask couples registered for the HMT whether they are willing to participate in a study investigating how attachment dynamics in couples relate to temperature fluctuations. I kindly ask interested couples to fill in (independently of each other) an online questionnaire that assesses relevant psychological and emotional variables, such as responsiveness to the partner, feelings of security in relationships (in relation to attachment theory), and willingness to be close to their partner. The answers are of course anonymous; part of this is emphasizing that the other partner will never have access to their partner’s responses (which can be a challenge in the age of open science!).
One of our lab members wearing the ISP131001 sensor with the app on the right.
To record peripheral temperature, we rely on a sensor (ISP131001) that former members of the lab validated in earlier work (Sarda et al., 2020). This small wireless sensor is placed on the tip of a person’s finger and linked via Bluetooth to a smartphone application developed by our lab that records and stores data on our server (this mobile application is being developed by the CO-RE Lab and the code is open source; you can find it on our GitHub repository here, and feel free to use it for your own project, but please keep us up to date if you develop a new version). The device is very light, which allows wearers to carry out their everyday activities almost as normal (see above). Since people wear the device all day long, we provide disposable gloves so that people can go to the bathroom while wearing it. Also, while our device seems to work pretty well for temperature measurement, the design is not really user friendly, so feedback from users and from partners during the HMT weekends is something we incorporate; for example, during previous HMT measurements, people’s feedback allowed us to improve comfort while avoiding as much as possible breaking our (very fragile) material. At the start of the weekend, I give each partner a smartphone (to be kept in their pocket) and I put one temperature sensor on each person’s finger. At the end of the day, I remove the sensor from people’s fingers. So, throughout the entire program we record people’s temperatures. The data of one couple look like this:
The aim of this project is to understand thermoregulation mechanisms in order to help therapists and couples improve receptivity between partners. But we cannot do that with “only” data from couples in therapy. This is why, simultaneously, we are recruiting couples from the general population of Grenoble, France. We know from previous studies conducted by our lab that these couples tend to report being very satisfied with their relationship. We don’t really know why, but having data on couples from HMT and from the general population will help us identify cues to develop interventions for couples that report lower-than-average relationship quality. Having this variety will allow us to understand – via deep learning – how peripheral temperature variations between partners are related to partners’ scores on the psychological variables (their answers to the anonymous questionnaire). Because we are focused on helping therapists and couples, we intend to develop an algorithm that manipulates peripheral temperature, which we hope will improve partners’ responsiveness. In the end, can we add another chapter to the Hold Me Tight weekend? Our data will tell.
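To give a flavor of the kind of signal we are after, here is a small, purely illustrative sketch: it simulates temperature traces for one hypothetical couple and computes their cross-correlation. Our actual modelling will be far more involved, and none of these numbers come from real participants.

```r
# Illustrative only: simulated temperature traces for one hypothetical couple.
set.seed(1)
minutes <- 1:480                                   # an 8-hour recording day, one sample per minute
shared  <- sin(minutes / 60) * 0.5                 # a slow shared rhythm

temp_partner_a <- 33   + shared + rnorm(480, sd = 0.3)
temp_partner_b <- 33.5 + c(rep(0, 10), head(shared, -10)) + rnorm(480, sd = 0.3)  # follows A with a ~10-minute delay

# Cross-correlation between the two traces: a peak away from lag 0 would suggest
# that one partner's temperature follows the other's with a delay.
sync <- ccf(temp_partner_a, temp_partner_b, lag.max = 30, plot = FALSE)
sync$lag[which.max(sync$acf)]   # lag (in minutes) at which the traces are most similar
max(sync$acf)                   # strength of that peak correlation
```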
Conclusion
Just before the beginning of the lockdown in France, we decided to stop collecting data in order to protect our participants’ health. But social isolation or being confined together makes the research even more relevant. The lockdown was recently lifted in France, but working from home remains the norm. Couples thus spend more time together at home than ever. Because we believe that the results of this study could help us to understand and improve intimate partner relationships, we are planning (adhering to the COVID-19 prevention measures in place) to resume the study data collection. For every 60 couples that participate, we raffle off some awards (e.g., an iPad). As we now send the sensors via postal service, people all across the European Union can participate. If you are interested in participating in the study, please contact us at corelab.grenoble@gmail.com. If you are a therapist and want to help measure temperature during HMT weekends, please shoot us an email as well.
This blog post was written by Olivier Dujols and Hans IJzerman.
I – Olivier – am a PhD student. My research is in social psychology. However, the end goal of my thesis is to improve how responsive couples are towards each other after they go through relationship therapy. Diving into relationship therapy is a big step for a research-focused social psychologist. To try to improve partner responsiveness, I try to identify the psychological and physiological mechanisms that constitute partner responsiveness. This is part of a series of two blog posts in which I explain my research. In this first blog post, I explain the basic dynamics of romantic relationships from the perspective of Emotionally Focused Therapy (EFT) and how therapists currently help improve them. In the next blog post, I will discuss how and why we investigate temperature responsiveness to reach our goal.
Attachment dynamics in romantic relationships
People’s attachment orientations are important for how they engage in and maintain their relationships. In early life, people “regulate” their relationships by screaming, crying, and hugging (Bowlby, 1969/82). Such attachment behaviors are ways to increase closeness to the caregiver and can help the infant signal threats (such as cold, starvation, or other risks) from which it seeks protection. When the caregiver provides that protection, the caregiver serves as a secure base from which the infant can explore its environment. Mary Ainsworth built on Bowlby’s (1969) work by showing that infants’ attachment styles differ from each other and by developing a method (the Strange Situation) that helps psychologists identify how an infant is attached. When they were first described, “attachment styles” were divided into three categories: A (Avoidant), B (Secure), and C (Insecure/Ambivalent). Another category, D (Disorganized), was later added to these three (Main & Solomon, 1986).
These attachment styles transfer, at least to some extent, from relationships with parents to relationships with romantic partners (Fraley, 2019). While for children, caregivers are the main source of security, this is often a romantic partner for adults. In the social psychological literature, we usually measure people’s attachments by asking them about their romantic relationships. The Experiences in Close Relationships (ECR) scale is currently the best validated measure of attachment in adulthood (Fraley, Heffernan, Vicary, & Brumbaugh, 2011) and relies on statements like “I prefer not to show my partner how I feel deep down”, and “I often worry that my partner doesn’t really care for me”. People indicate how well each statement applies to them on a scale ranging from 1 – strongly disagree to 7 – strongly agree.
People’s attachment in their romantic relationships is scored on two continua: from anxious to secure and from avoidant to secure. If you score low on both, you are pretty secure in your romantic relationships (if you want to test how secure you are, you can do so by going here). A person with a high score on anxiety will more frequently try to seek closeness with their partner, but will also often feel like they can lose them at any time. In contrast, a person with a high score on avoidance will less frequently try to seek closeness with their partner, and will prefer not to rely on their partner in stressful or threatening situations. People who are more avoidant are more likely to distance themselves from potential threats and disengage from their emotional reactions. In contrast, people who are more anxious tend to focus on stressful situations, which exacerbates their stress and increases their negative moods and anxious thoughts.
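To illustrate how the two dimensions are scored in practice, here is a small sketch. The item-to-subscale assignment and the reverse-keyed items below are made up for illustration; the real ECR has a fixed set of items and its own scoring key, so do not use this as an actual scoring script.

```r
# Illustrative scoring sketch; the item-to-subscale mapping below is made up,
# not the official ECR key (the real scale has fixed items and reverse-keyed items).
set.seed(7)
responses <- as.data.frame(matrix(sample(1:7, 5 * 12, replace = TRUE), nrow = 5))
names(responses) <- paste0("item", 1:12)

anxiety_items   <- paste0("item", 1:6)     # hypothetical anxiety items
avoidance_items <- paste0("item", 7:12)    # hypothetical avoidance items
reverse_keyed   <- c("item7", "item8")     # hypothetical reverse-keyed items

responses[reverse_keyed] <- 8 - responses[reverse_keyed]   # reverse 1-7 responses

anxiety   <- rowMeans(responses[anxiety_items])
avoidance <- rowMeans(responses[avoidance_items])

# Lower scores on both dimensions indicate a more secure attachment.
data.frame(anxiety, avoidance)
```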
How attachment theory is connected to therapy: Emotionally Focused Therapy.
Such attachment dynamics can certainly play a role in adult romantic relationships. Humans are, after all, social animals; we need connection and safety throughout our entire lives. But despite this necessity, we don’t always know how to connect in our relationships. This is why some couples seek therapy. Emotionally Focused Therapy (EFT) for couples is a brief, protocol-based therapy developed by Sue Johnson (2004) that combines principles from attachment theory with a humanistic and systemic approach. EFT focuses on how people experience their love relationships and on repairing adult attachment bonds (Johnson, 2004, 2013). Specifically, the goal of the therapy is to create positive cycles of interaction between partners, so that individuals are able to safely ask for and offer support to their partner. Knowing how to be responsive to one another in turn also facilitates the regulation of interpersonal emotions.
EFT is an empirically supported, 8- to 20-session therapy (Wiebe & Johnson, 2016). A meta-analysis shows some empirical support for EFT couples therapy (Johnson et al., 2006). Studies have, for example, shown that the EFT protocol can be effective for stress management in couples (Lebow et al., 2012) and for increasing couple satisfaction (Denton et al., 2000).[1]

[1] EFT is evidence-based and we like the idea of EFT: we believe in the principles of the theory and the therapy. But that the therapy is evidence-based does not mean we are not critical. The replication crisis has taught the entire field of psychology that many of our most precious findings are uncertain, because the results of empirical studies are not always replicable (that is, found again when the conditions under which the first evidence was obtained are reproduced identically), as many studies have shown (see e.g., Klein et al., 2018). This is partly due to small sample sizes in our studies, publication bias (the overrepresentation of positive, significant results), and a lack of pre-registration. It would surprise us if EFT had escaped the crisis, because doing high-quality research is incredibly hard! In our lab, we only consider research that has been done via Registered Reports to provide stronger evidence, and we are not yet aware of any EFT research that has been conducted via the Registered Report route. We therefore think that studies showing positive effects of EFT should be replicated before they convince us. Ideally, effectiveness findings should follow these Evidence Readiness Levels; we are not sure whether the EFT work is at ERL3, as we have not done a systematic review on EFT.
EFT follows three basic stages, in which the couple engages in conversations. During Stage 1, “De-escalation”, both partners mindfully observe their pattern of interaction during conversations. Sue Johnson calls the negative emotional cycle partners are caught in “the dance”. From an attachment point of view, they may discover that the negative cycle creates feelings of abandonment and rejection. During that phase, the purpose is to discover that these feelings are a common enemy and that the partners can help each other step out of the negative cycle. During Stage 2 (“Restructuring the bond”), arguably the most powerful conversations take place. These conversations are also called the “Hold Me Tight” conversations. Partners discover and share their attachment fears in ways that allow the other to offer reassurance and safety. Then partners express their need to create deeper emotional responsiveness. When they express that need, they are ready to move to Stage 3, the “Consolidation of treatment gains”. There, the couple examines the changes they have made and how they have broken the negative cycle. The therapist supports them in looking to the future and helps them reflect on how they achieved greater responsiveness.
During the therapy, and during each session, the first step for an EFT therapist is to help the couple focus on the present process by asking “what is happening right now?”. By letting the partners focus on the present, the therapist brings the emotions of both partners together so that they focus on their interaction. The second step for the therapist is to help deepen the emotions, for example by asking what happens when they see tears on their partner’s face. The focus on connection and on the partner’s needs can help create a new interaction that is truly based on the attachment dynamic. The next step is to have one partner express what happens (e.g., “I don’t trust him/her”) when facing the other. This helps the couple process the new step in the interaction. The therapist can then help identify what the expression of this sentiment does to the other (e.g., by asking, “What did it feel like to tell him/her that?”). The inability to express one’s primary emotions is often rooted in demand-withdraw dynamics in relationships. Acknowledging the fear of abandonment helps to break the negative cycle. At the end of each session, the therapist points out how well the couple went through this process. The therapist tries to repeat and recreate these dynamics during the session at various levels of intensity.
If you are interested in EFT, you can find training tapes, research on EFT, and where to do the basic training on their website.
Conclusion: patterns of responsiveness in mind, but also in body.
The EFT protocol helps us understand how important conversations in a relationship are for creating deeper emotional interactions. But telling your partner that you fear abandonment is not the only part of partner responsiveness. Indeed, John Gottman and his colleagues have spent several years understanding how partners are also physiologically tied to each other. He found that couples who regulate each other physiologically are more likely to stay together.[2]

[2] Note, again, that this is from the Era Pre-Replication Crisis (19 years EPRC), so we are unsure about the strength of the evidence.
Where Gottman often focused on heartbeat, we are focusing on temperature. This may not seem so intuitive for humans. A pretty easy way to understand it is to think of penguins. Penguins, when they get cold, huddle with each other (see a timelapse video here) to drastically increase the temperature inside the circle. Even if it is -20°C or below in the environment, inside the huddling circle the temperature can rise to 37.5°C (Ancel et al., 2015).
We (the Co-Re Lab) think humans do the same thing, although it is true that in modern times we often regulate temperature without each other. And yet, questions about people’s desire to warm up with one another correlate reliably with whether or not people want to share their emotions with their partner (Vergara et al., 2019). And we suspect such regulation of basic needs is at the core of partner responsiveness. In the next blog post, we will tell you how we investigate this during relationship therapy.
This blog post was written by Olivier Dujols and Hans IJzerman
With the Covid-19 crisis, our lives and habits have completely changed. For the foreseeable future, it will not be feasible to attend courses in person. As a result, many people are starting to move their courses online. If you want to publish your courses online, there are several ways to do it. In this post, we will show you a pretty simple solution via GitHub.
Step 1: Create a GitHub repository
In order to have your manual hosted by GitHub, you need to do two things: 1) create a GitHub account and 2) create a repository (a storage space where your files will be kept). If you are new to GitHub, you can find more detailed instructions on how to work with it here. As an example, for a course I (Alessandro) posted (a translation of an R manual), you can view this repository and this post (warning: it is in French). Some more detailed instructions on how to work with GitHub can be found in this presentation I did in our social cognition group (“the axe”). You can download it here.
Step 2: Convert your Google Doc to a Markdown file
In order to be rendered by GitHub, your files should be in Markdown format (if you use any other format, a preview of your files will not be available). Let’s assume that you are working with a Google Doc that you want to convert to Markdown (I can recommend this, as Google Docs has a built-in script editor you can use for the conversion).
You will first do the conversion by following these instructions:
Open a Google Doc from your Google drive that you want to convert into a Markdown file.
Click on Tools → Script Editor (you can only do this if you have the right to modify the document).
In the Code.gs window you will find the default placeholder code: a single empty function called myFunction (shown in the snippet after these steps).
Delete it, and copy and paste this code. Credit for this script goes to Mangini.
Save the new script.
Once you have saved the script, there will be a dropdown menu with the title “MyFunction” (as you can see in the image below).
From that dropdown, select the function “ConvertToMarkDown“.
Click the “Run” button (the first time you do that, you will need to give authorization)
The Google Doc has now been converted into .md (Markdown) format. It will be sent automatically to the email address associated with your Google account (with all the images from your Google Doc attached).
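For reference, the default placeholder in Code.gs is simply an empty function; this is the code you delete in the steps above before pasting in Mangini’s conversion script (the exact formatting may differ slightly depending on your version of the Apps Script editor):

function myFunction() {
  // empty placeholder created by Google Apps Script; replace it with the conversion script
}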
Step 3: Fixing the converted file
Conversion works well in some cases; in others, it does not. For example, some tables are converted well, others a little less. So check through your new file before you post it. In any case, if you have images, you will unfortunately have to add them by hand. To add images and fix the code, there are some valuable tools. I used two: Atom, which is a text editor, and Dillinger, which lets you preview the file in .md format. To add an image, use a line like the following:
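For example, using standard Markdown image syntax (the alt text “image alt text” is just a placeholder you can change):

![image alt text](image_0.png)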
This way, you add the image called image_0.png to the Markdown file. It is good practice to organize images in a specific folder, so that they are not spread out across your GitHub repository. To refer to images in a specific folder, you can use a line like the following in the Markdown file:
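For instance, assuming a folder named “images” containing a subfolder named “10” (as unpacked below), the line could look like this:

![image alt text](images/10/image_0.png)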
To unpack this:
image_0.png refers to the name of the picture
“10” is the subfolder of the folder named “images”.
It’s good practice to name the subfolder after the chapter number (i.e., 1 for chapter one, 2 for chapter two, et cetera). Thus, with this line of code, you display in the Markdown file the image image_0.png, located in the folder “10”, which in turn sits inside the folder named “images”. This is also how the example outlined above is organized in the GitHub repository.
The course I posted uses exactly this kind of line, and GitHub then displays the corresponding image at that point of the rendered chapter.
Sometimes during the conversion, some images are lost; that means you will not receive them from your Google Doc via email. In that case, you have to save the images yourself from the original document and put them in the folder you created. I suggest this documentation to learn how to better work with files in Markdown format.
Step 4: Push the converted file to GitHub
At this point, you have the files converted into Markdown and a folder with the images that will be displayed in each Markdown file. What you have to do now is go to the GitHub repository and push the Markdown files, together with the folder containing all your images.
You can upload the files directly from the GitHub website or work in your local repository. When that’s done, push the changes to the online repository.
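If you prefer the command line, a minimal sketch could look like this (the repository URL is a placeholder, and older repositories may use “master” instead of “main” as the default branch):

# clone your repository locally (placeholder URL)
git clone https://github.com/your-username/your-course-repo.git
cd your-course-repo
# copy your .md files and the images/ folder into this directory, then:
git add .
git commit -m "Add course chapters and images"
git push origin main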
Step 5: Embed the GitHub page into your blog
Now that you have updated your GitHub repository with the Markdown files, they will be rendered like this. What you would ideally do next is push them directly to WordPress (see e.g., this post). It can work, but while the text and links are copied from GitHub, the images are not. I have made several attempts, but I have not been able to figure out why the images are not copied.
The best solution for now is to embed the chapters into a sort of wiki on our blog, with links to the various chapters that are on GitHub. That’s what I did here.
There may be a better, more elegant solution, in which the course itself is pushed from GitHub to WordPress, but I have not been able to find it yet. If you know of an elegant solution, please let us know in the comments on this blog.
If you follow these steps you will be able to convert your own course into an online one just with Google Docs and a relatively simple workflow. You can embed your GitHub page into your blog, which could make your course available on your personal website. This can help you during these times in making your courses more easily available, and, in the long run, provide greater visibility to your work.
This post was written by Alessandro Sparacio & Hans IJzerman
Lisa is a Professor at the Institute of Neuroscience and Psychology of the University of Glasgow. Her empirical research focuses on kinship and on how the social perception of morphology affects social behavior. More specifically, she is interested in how humans use facial resemblance to identify who their kin are, and in how they respond to kinship cues in different circumstances.