CORE Lab 2020 Lab Philosophy/Workflow Hackathon

Today, September 16, 2020, we organized the yearly lab philosophy/workflow hackathon of the CORE Lab. To get all the lab members on the same page and to reduce error as much as possible, we have a lab philosophy that is accompanied by various documents to facilitate our workflow. But research standards evolve, and we often notice that the way we conduct our research is not as optimal as we would like it to be. As new tools seem to be created daily, we also cannot keep up with every new development without hurting our research. And it is important that those who do research daily provide input on the procedures they work with. That’s why we decided on a meeting once a year, where all lab members can have their say in how we are going to work that academic year. The new product is available here.

This year, the procedure was the following:
  • All lab members (Patrick Forscher, Bastien Paris, Adeyemi Adetula, Alessandro Sparacio, and Olivier Dujols) submitted to Hans in a direct message via our Slack what they like, what they don’t like, what they want to add, and what we should do better (in relation to what is already in the lab philosophy).
  • Hans gathered all the submissions, organized them by category (and commented on some), and then posted them in a public channel in Slack.
  • All lab members got a chance to discuss each other’s suggestions via Slack. 
  • Hans created a Google Doc and organized the information by category. 
  • This morning and after lunch, we discussed points that we were still not entirely in agreement on or that we thought were important enough to discuss in more detail (e.g., should we write a section on COVID-19? Should we include standard R Snippets on our OSF page? Do we think our Research Milestones Sheet is important, and, if it is, how do we ensure frequent updates? Do we want to keep our account? Do we want to start integrating GitHub for version control of our data and analysis scripts? Do we want to participate in the paper-a-day challenge or do we want to encourage regular reading in a different way? For all of these: how do we ensure adoption as well as possible?). We also went through a list of new tools that we had found via various sources (usually via Twitter, thanks Dan Quintana), and decided which ones we wanted to adopt.
  • Then lab members claimed tasks they wanted to complete: writing new sections, getting rid of obsolete ones, or adding information to our workspace. All lab members also reviewed our lab canon.
  • To get a sense of community that we usually miss, we ordered food and ate “together” via Skype. 
  • Once the document was finished, Patrick and Hans reviewed the changes and made final decisions.
Here are some things we really liked (this is a non-exhaustive list):
  • Our code review section. 
  • Our section on work-life balance.
  • Our clear communication of lab values. 
  • Our test week data. 
  • Our daily stand up sessions.
Here are some of our major changes (again, a non-exhaustive list):
  • We expanded our onboarding section. 
  • We changed our Research Milestones Sheets (RMS) and added some tools. For example, by default, we will now use Statcheck to check p-values in our articles and use Connected Papers to help with our literature reviews. We also created a new way to ensure regular updating by integrating the RMS into the weekly meeting agenda of the PhD students (and emphasized that people can earn rewards four times a year if other lab members do not earn theirs). 
  • We added a section on intellectual humility, encouraging lab members to write Constraints on Generality and to rely on evidence frameworks to not go beyond their data.
  • We added a section on COVID-19.
  • We already had a journal club and a weekly writing block but did not describe it in the document. This has now been added. 
  • We lagged in regularly posting releases on our website. We will now do monthly releases of our products (e.g., articles, interviews, blog posts, and presentations) and have appointed lab members who are responsible for checking for updates. Bastien Paris will ask lab members for their updates, and Bastien and Olivier Dujols will post them. Hans will write blog posts to summarize what we release.
  • We removed information on running studies in Grenoble and moved that to a private component on the OSF. 
  • For the coming year, we will try to integrate Zotero in our workflow. Patrick Forscher will give a brief workshop on how to create and share libraries. Each member of the team will then generate their own libraries, after which we will use that to create various joint libraries. 
  • In our new projects and wherever possible, we will start with collaboration agreements that we borrow from the Psychological Science Accelerator.
  • We decided to add a section on the TOP Factor. In the past, we had a section on favoring open access, but the new TOP Factor allows us to articulate our values more clearly (thanks, COS). Flexibility for non-PIs still exists to submit to journals with high JIF, but we felt that articulating our preference for journals with high TOP Factor communicates our values and hopefully changes matters in the long run. 
  • We got rid of our “sprint planning”. While it sounded like a great idea to specify medium-term goals, in practice the sprints did not work well and did not add much to the daily standups and weekly one-on-one meetings.
  • We decided against the paper-a-day challenge, but instead we created a Google Sheet where we all report on one article we read per month (starting with those from the lab canon). 
  • We re-emphasized the importance of using our presentation template and appointed Alessandro Sparacio to ask lab members to send him their presentations to post on our lab GitHub.
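As a small illustration of the Statcheck default mentioned above: statcheck is an R package that extracts APA-formatted test statistics from text and recomputes their p-values. Here is a minimal sketch (the example sentence and file name are hypothetical; our actual workflow may differ):

```r
# install.packages("statcheck")  # one-time install from CRAN
library(statcheck)

# statcheck() scans text for APA-style results (e.g., "t(28) = 2.20, p = .03"),
# recomputes the p-value from the test statistic and degrees of freedom,
# and flags inconsistencies between the reported and recomputed p-values.
txt <- "The effect was significant, t(28) = 2.20, p = .03."
res <- statcheck(txt)
print(res)

# checkPDF("manuscript.pdf") runs the same check on an entire PDF manuscript.
```

Running a check like this on a near-final draft catches reporting errors before submission.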

In the coming two weeks, we will also update our Research Milestones Sheet for our existing projects and use Tenzing to clearly specify contributor roles in those projects. We still struggle with quite a lot. How, for example, can we better discuss our longer-term goals outside of journal club and our daily standups? (We feel we miss the face-to-face contact where we can discuss things outside of work.)

We are ready for the new year!

But now…time for some relaxation.

The CO-RE Lab opens its doors

Once a month, the CO-RE lab organizes a journal club. Before each journal club, all journal club members have the option to propose one or two articles for the group to read in advance. The articles may be about any topic related to the CO-RE lab’s shared interests of interpersonal relationships, meta-science, and research methods (see our lab philosophy for more details), but any science-related article that will generate a good discussion is permitted.

During the journal club itself, the journal club member who proposed the article gives a short summary and asks some discussion questions. The journal club’s discussions are conducted in English and moderated by CO-RE lab member Patrick S. Forscher. 

Because we have moved our journal club online, we have the possibility to invite two underresourced researchers to be part of our journal club for the 2020/2021 academic year. To see the articles we have discussed in the past, you can see our Journal Club Overview Sheet. As we understand that Internet access may be a challenge for underresourced researchers, we will support the two selected researchers with 100 Euros to pay for Internet access (half paid up front, half paid at the end).

There are some conditions to being part of our journal club and receiving the 100 Euros:

  • You have no grants of your own. 
  • You need the support to be part of international networks.

When selecting the two researchers, we will give priority to those working in countries with a lower GDP per capita (according to the World Bank’s data). 

To apply to be part of the CO-RE Lab journal club and receive the funds for Internet access, we ask you to fill in this form. In the form, we will ask the following information:

  • One sentence on why you want to be part of our journal club.
  • A commitment to being part of the journal club until the end of the next academic year (2020/2021 by the European Academic calendar).
  • An article that you would like to discuss during journal club (we want to know your interests!). We are very open to discussing articles outside the North American/European mainstream (see for example this article by Ojiji we discussed). 
  • A CV that you can upload, so that we know your background.

If you participate in our journal club, we will grant you affiliate membership of the CO-RE lab, we will make ourselves available for advice (should you need it), and you will get access to our Slack. If you have any questions regarding this opportunity, please contact Patrick Forscher or Hans Rocha IJzerman. The offer is open to researchers at all levels (from undergraduate to professor).

Yearly lab philosophy update

To try to get everyone on the same page in our lab, upon my arrival in Grenoble I wrote a “lab philosophy” (maybe “written” is too big a word: the first draft was heavily inspired by Davide Crepaldi’s lab guide, which was in turn inspired by Jonathan Peele’s lab guide). This lab philosophy is complemented by an OSF workspace that includes some useful R code, shared data (hidden from public view), the CRediT taxonomy to identify contributorship within our own lab, and a study protocol for social thermoregulation. When you want to do (open) science as well as possible, it is important to have some kind of shared understanding. That means, for example, creating templates so that master students, PhD students, and postdocs know what I have in mind when I want to do research. We have created, for example, a template for exploratory and one for confirmatory research (we know that this dichotomy is overly simplistic, but it at least helps students structure their research). We have also started using these templates in our teaching (for example, to help master students understand how to structure their project on the Open Science Framework). Creating the lab philosophy also allowed me to outline my approach to research and what students can expect from me.

However, it does not only allow me to communicate what I expect. It also allows the people I work with to correct the process or get rid of tasks that may seem overly burdensome, not very useful, or too bureaucratic (after all, I heard that the French don’t like bureaucracies…). Every year in September we get together to revise our lab philosophy, so the current draft is more a collaborative document than my lab philosophy. The goal of this post is therefore mostly to document that process. Here’s what we do:

  • Each member identifies around three things in the lab philosophy that they do not find useful/outdated.
  • Each member identifies around three things in the lab philosophy that they find very useful and absolutely want to keep.
  • Each member identifies around three things that they would like to add to the lab philosophy.

Following this process, the PI (me) collates all the information and puts it up for a vote amongst all the lab members. We then get together (usually with good coffee; “together” this year will probably be a Skype call) to update our lab philosophy and to commit to our new way of working for the coming year. This also means reminding ourselves of things that we didn’t do sufficiently yet (I was rereading last year’s update and see that we did not adopt collaboration agreements, for example, and we need to do a better job updating our Research Milestones Sheet) and things that we think are going really well. This year, we will also write a post describing what we have done during our update.

Perhaps you have a lab philosophy yourself. Or, you have comments/critiques/compliments on our lab philosophy. Post your comments and the links to your lab philosophies here, so that we can collect them, so that we can read them, so that maybe we can steal some of your practices, and so that other people who read this post can find lab philosophies other than ours.

This blog post was written by Hans IJzerman.

How open science can advance African psychology: Lessons from the inside

Image credit Martin Sanchez

Psychological science should be a truly global discipline, and psychologists should be poised to understand human behavior in any kind of context, whether it is urban or rural, developed or underdeveloped, WEIRD (Western, Educated, Industrialized, Rich, and Democratic) or non-WEIRD. To arrive there, we need to ensure that 1) researchers from those different contexts are included, and 2) researchers from those contexts adhere to the highest standards in scientific research. Do we, as African researchers, do enough for the credibility and acceptability of African psychology? 

To answer this question, we will first analyze an article by the late Ochinya Ojiji – a well-known Nigerian scholar. We then argue how the adoption of open science initiatives can start to answer Ojiji’s call for greater rigor amongst Nigerian and, more broadly, African researchers. Such greater rigor, in the end, can help ensure we will have an equal and quite relevant voice in psychological science. This voice by Nigerian and other African researchers is essential for the development of more mature psychological theories that are more generalizable across various contexts.

The state of Nigerian psychology according to Ochinya Ojiji

In 2015, Ochinya Ojiji, a Nigerian social psychologist who worked for 28 years in Nigeria at the Universities of Jos and Uyo and at the Institute of Peace and Conflict Resolution in Abuja, reflected on how psychology as a science had fared in Nigeria over the past five decades. In the very beginning, Nigerian researchers were taught by Western psychologists. Though only very few psychology departments had been established during this era, Nigerian psychology was taught as a proper scientific field (and recognised as such), with the necessary facilities to conduct rigorous scientific research (like labs for experimental research). The considerable Western influence meant that early Nigerian psychologists maintained relatively high standards in conducting their research. In the two decades that followed, Western influence decreased. Ojiji remarked that the decline of Western influence meant a rise in various unwholesome practices and activities led by homegrown psychologists. These practices include the proliferation of substandard and profit-oriented local journals, little quality control of curricula at universities, and the below-par teaching of senior academics who “shuttle” between neighboring universities, which ultimately affects the quality of training of their students.

Ojiji mostly presents these as observations, and no data is reported on the frequency of the various occurrences. It is thus unknown to what extent these practices influenced Nigerian psychology. Despite these shortcomings, we feel comfortable relying on Ojiji’s experience as a one-time Editor-in-Chief of one of the most important journals in Nigeria, as an external examiner for a number of universities, and as someone who has taught in most parts of Nigeria, when he characterizes Nigerian psychology as a “folktale psychology”. To counter the issues he observed, Ojiji called for the NPA, the Nigerian regulating body for psychology (that is, unfortunately, not backed by formal legislation), to improve quality-control standards. Yet at present, from Ojiji’s writings it is unclear how we can implement his suggestions to improve the quality of Nigerian (and perhaps African) psychology.

It should be clear that the unguided practice of psychology as a science in Nigeria has put us off track and is thus hurting the quality of our science. To ensure we again improve the quality of our science and to develop a higher standard indigenous psychology, we can again look to our colleagues from North America and Europe and the recently emerging “open science” movement to help inspire some much needed reform in African psychological science.

Open science: an opportunity for African psychological science 

Although African countries and African research are quite heterogeneous with respect to their educational structures and local realities, institutions in individual African countries share some common challenges, such as a lack of international recognition, a lack of funding, limited resources and facilities, limited legislative backing, and so forth. However, it is important to note that some of the problems Ojiji pointed to have one thing in common: there seems to be a lack of verifiability and responsibility within Nigerian psychological science. This was a problem in North American and European psychology as well, and researchers there are starting to fix it.

One of the central tenets in the open-science movement is to increase verifiability and responsibility. The UK Royal Society’s motto illustrates this well: nullius in verba (take no one’s word for it). The open-science movement is quickly growing in Europe and North America and this movement presents unprecedented opportunities for Nigerian and African researchers.

Specifically, researchers in the open-science movement make research articles freely available on preprint servers and share their data and analysis scripts, while also creating helpful resources for learning how to improve one’s research. Also interesting for improving the quality of science is the emergence of Registered Reports, where researchers submit the method of an article before data collection (and where the report is published without regard to significance levels).

We believe that adopting open-science practices can answer some of Ojiji’s concerns and can vastly improve Nigerian, as well as African, psychological science. Participating in the open-science movement arguably presents the biggest potential to level the playing field between North American/European and African psychology. It also offers African researchers a global platform to practice credible science and to help shift the perception in at least some African countries that psychology is not a science. But what are some ways to start practicing open science?

How can African researchers engage in open science?

There are various initiatives that African researchers can turn to that we have outlined before and we will provide a (non-exhaustive) list here:

Open access
  • Preprint servers. There are now various preprint servers available (like AfricArXiv, PsyArXiv), which allow researchers across the world to freely access scientific research. In our lab, the CORE Lab, we submit a preprint to such a server upon article submission (see our lab philosophy here). Preprint servers allow researchers to share their newest work, without a long turnaround from a journal. 
  • Sci-Hub. This website collects the login information from various universities to allow access to many articles that would otherwise be behind a paywall. Sci-Hub is not legal in most countries, so we would never dare to recommend its use, especially in African institutions that cannot always muster the high fees to pay for journal subscriptions.
(Free) tools
  • The Open Science Framework (OSF). The OSF is free and allows researchers to share their data, materials, and analysis scripts. It also allows researchers to “pre-register” their hypotheses. It further allows for easy collaboration between collaborators across the world (researchers also make available templates for others to build on, like our lab does here). 
  • R and RStudio. R is a programming language used for statistical computation and analysis. It is useful for writing your analysis scripts. It has many advantages over software like SPSS: it is free, and the way it works allows for much better verification. RStudio is a supporting program that greatly facilitates the use of R. Note that R is not easy, but there are some excellent resources online to learn it (like this course by Lisa DeBruine, which was translated into French and is available here). 
  • GitHub is a free service that supports transparent and verifiable research practices. It allows you to publicly archive research materials, allows for much easier collaboration between researchers, and, importantly, permits good version control. 
  • Code Ocean is a research collaboration platform that supports research from its beginning to when studies are published. Code Ocean provides you with the necessary technology and tools for cloud computing and built-in best practices for reproducible studies. For example, if you run analyses on a different platform or with a newer R package, results may vary; Code Ocean allows you to directly reproduce the results as planned.
Global networks
  • The Collaborative Replication and Education Project (CREP) is a crowdsourced initiative that is focused on conducting replications by students. The CREP is a pretty unique learning opportunity for people interested in open science, as it has established templates and extensive quality control from researchers around the world. They would be very happy to support African researchers.  
  • Psychological Science Accelerator (PSA) is a network of over 500 labs from over 70 countries across 6 continents conducting research studies across the globe. With an emphasis on open-science practices and different research roles (such as instrument translation, data collection, et cetera), the PSA currently presents arguably the most accessible opportunity for African researchers to participate in international studies, such as the ongoing COVID-19 rapid studies with some African collaborators (we wrote about the PSA before). Note that current participation from Africa is modest, so this is where there is a real opportunity for African researchers. 
  • ReproducibiliTea. With over 90 institutions spread across 25 countries, this grassroots journal club initiative provides a unique and supportive community of members to help young researchers improve their skills and knowledge in reproducibility, open science, and research quality and practices. You can organise your own version of it.

There are also tons of other initiatives that we have not mentioned yet, like the Two Psychologists Four Beers podcast or the Everything Hertz podcast. There are thus tons of free opportunities to learn. 

How North American and European researchers can support African scientists

But without structural changes to the way science works, open science will not yet level the playing field, as access for African scholars is still difficult. Some structural changes in the way that psychological science currently operates will be necessary to support African researchers in becoming part of the process. We again provide a (non-exhaustive) list here:

  • Representation in formulating and implementing science policy. At present, there is only one person from the African continent in an initiative like the PSA. For global network initiatives such as the PSA, the representation of African researchers must be enshrined in their policies. African researchers should also become intimately involved in implementing these policies (and thus not only be involved in data collection for the PSA). This can start with African representation on the board and in various committees. 
  • The waiving of fees and communicating that in associations’ policies. Some organizations, such as the Psychological Science Accelerator, allow reductions or complete waivers of obligatory fees. Other organizations (like SIPS, SPSP, APS) should strongly consider offering free memberships to African researchers and make this explicit, as these fees may discourage African researchers who cannot afford them.  
  • Recognition of the realities of third-world countries. When doing crowdsourced research or writing research collaboration documents, one must also factor in the realities of third-world countries. African researchers are systematically disadvantaged in these endeavors due to their lack of access to the same level of infrastructure. For example, many African researchers do not have sufficient internet access to co-write a manuscript on a fast schedule. Collaborating on constructing materials can also be a challenge. Providing African researchers with a few hundred Euros per year to pay for things like reliable internet can considerably reduce this systematic disadvantage.
  • The training of research collaborators. Due to lower levels of science infrastructure, African researchers do not have the same training opportunities as researchers in other regions. This manifests itself in their lower levels of experience with initiatives such as open science. Providing access to training materials, preferably in indigenous African languages, can go a long way toward reducing or eliminating this training gap. 
  • Dissemination of research. AfricArXiv already exists. This is an excellent initiative, and continued support for this preprint server would mean a lot for African scientists. In the same vein, paid journals in psychology should allocate a number of free open-access articles for African researchers per year. 
  • Funding. IRBs in Africa are often not cheap: they require a fee, which is often prohibitive for conducting research. Providing research grants to cover Institutional Review Board (IRB) fees, data collection expenses, et cetera will go a long way in facilitating the research process and overall success. 
  • Facilitation of research visits to universities. By inviting African researchers to your institute, they can benefit from the facilities at your university and they can become an equal partner in your research process. 
  • Journal audits. Journals should examine how many submissions they receive from Africa and how many articles are accepted, and, if the numbers are low, implement policy to counter that.

African and Nigerian psychology should become a normal part of the research process, if we are to understand humans the world over. Researchers in psychological science have pointed to the need for generalizability and for that to happen, we need to be there. However, in order to get there, Nigerian and African psychologists need to raise their standards and North American and European researchers can support us in achieving our goals. Now is the time to become a vital part of psychological science, as open science presents us with unprecedented opportunities.  

There are likely additional open science resources that we have missed. Please add them in the comments or shoot me, Ade, an email. We will update this blog and credit you for your contributions. Additions will be especially helpful as we will translate this blog into various languages. 

This blog post was written by Adeyemi Adetula, Soufian Azouaghe, Dana Basnight-Brown, Patrick Forscher, and Hans IJzerman.

Dealing with the COVID-19 pandemic: can self-administered mindfulness help against the stress from lockdown?

The featured image is licensed under a CC BY-SA by Alessandro Sparacio

The coronavirus disease 2019 (COVID-19) outbreak has had a massive impact on our lives. The lockdown forced an abrupt change of habits by bringing severe limitations on personal freedoms. The measures taken against COVID-19, such as the lockdown, may well affect people’s mental health. A general population survey in the United Kingdom (with over a thousand people) revealed widespread concerns about the effects of the current situation on levels of anxiety, depression, and stress (Holmes et al., 2020; Ipsos MORI, 2020).

If the lockdown affects you in a way that puts you at risk of developing mental health issues such as anxiety, depression, or excessive stress, you would probably want to find a strategy to regulate those states. One way to do so is self-administered mindfulness, which seems especially suited to the current situation, as it does not require large spaces and can be practiced comfortably at home. Self-administered mindfulness is a type of meditation that consists of increasing attention to and awareness of the present moment, with a non-judgmental attitude (Brown & Ryan, 2003). Most awareness exercises like self-administered mindfulness are based on the same idea: each time the mind wanders, the attention is gently brought back to one’s breath or bodily sensations. 

Usually, mindfulness interventions are part of larger programs (which can last 8 weeks) requiring the presence of a qualified instructor. However, self-administered protocols can be engaged in via self-help books, smartphone apps, or computer programmes; you don’t always need to be with others to learn and practice mindfulness. These interventions share features with other mindfulness protocols, such as a non-judgmental attitude and an acceptance of inner experience. In contrast with other protocols, however, self-administered mindfulness does not require the presence of an instructor, is available 24/7, and is less costly (Spijkerman, Pots, & Bohlmeijer, 2016). Some studies suggest that self-administered mindfulness improved symptoms of perseverative thinking, stress, and depression for a group of students (compared to a passive control group; Cavanagh et al., 2018). 

Despite the study by Cavanagh et al. (2018) yielding a positive result, it is still uncertain whether there is actually evidence supporting self-administered mindfulness. It is no secret that many sciences, including psychology, are affected by “viruses” that infect the quality of our science in many ways: publication bias (the tendency for positive results to have a higher probability of getting published) and questionable research practices (a general term for various techniques to obtain significant results that may not actually represent valid evidence). And this can be consequential. Fanelli (2010), for example, estimated that psychology’s and psychiatry’s published findings contain over 90% positive results, a statistical impossibility, as the literature is not sufficiently powered to detect findings at that rate. This means that the psychological literature is very likely to contain unreliable findings and that the effects that are stable are likely overestimated in their “effect sizes” (a statistical concept that reflects the magnitude of the phenomenon of interest). 

The relative penetration of these viruses into the self-administered mindfulness domain is uncertain. We are far from saying that self-administered awareness interventions are useless, but some caution is needed before we can say that self-administered awareness has demonstrated efficacy in reducing people’s stress levels. A systematic review of the literature or a statistical summary of the findings (i.e., a meta-analysis) is necessary to assess the extent to which this strategy is affected by publication bias and to provide an estimate of the true effect behind this type of meditation. We have reasons to believe that existing meta-analyses (e.g., Khoury et al., 2013) did not do this correctly (see, e.g., this post by Uri Simonsohn if you are interested in why). Alternatively, a Registered Report (a type of research protocol in which the manuscript is pre-registered and receives peer review before data collection) could help identify the efficacy of self-administered mindfulness. A Registered Report is the best “vaccine” against the research problems we outlined before, for two reasons: 1) the editor commits to publishing even non-significant findings (discouraging the use of QRPs), and 2) if negative results are also published, it is possible to compute a reliable estimate of the effect size of interest. Even so, a registered report that investigates the efficacy of mindfulness in regulating stress levels is still missing.

That means that, at the moment, the answer to the question in the title is: we don’t know. As an intervention, self-administered mindfulness currently has a low “Evidence Readiness Level”: we have no way of knowing whether self-administered mindfulness is a reliable intervention against any stress, depression, or anxiety that people may be experiencing as a result of the current situation. And even if self-administered mindfulness worked in the populations that were tested, we still would not know how it works across the world: the intervention has not yet been tested in many different populations (and I know of no research that tests it during a global pandemic).

To investigate this conundrum, part of my PhD project aims to shed light on the potential use of self-administered mindfulness for stress regulation and for the affective consequences of stress. We will employ a stringent analysis workflow (including multilevel regression-based models and permutation-based selection models) to test for publication bias in various ways. In other words, I will be able to let you know shortly whether self-administered mindfulness has the benefits for emotion regulation that it is currently claimed to have. Although I would have loved to give you an answer with greater certainty, it is simply too early to tell. As a scientist, I would be remiss to say otherwise.

This post was written by Alessandro Sparacio & Hans IJzerman

Temperature responsiveness during Hold Me Tight weekends: A new chapter for EFT?

Starting in October 2019, I – Olivier – have gone to the Netherlands twice to record the peripheral temperature of partners in couple therapy. In a previous blog post, I explained the basic dynamics of romantic relationships and how couples can enhance their feelings of connection and safety through Emotionally Focused Therapy (EFT). In this blog post, I will discuss how and why we investigate temperature responsiveness during so-called Hold Me Tight weekends to further enhance connection and safety in relationships.

What is Hold Me Tight?

Let’s first explain the concept of Hold Me Tight (HMT). HMT programs are short versions, offered either online or in person, of the EFT protocol (which I described in the previous post). After HMT, couples typically proceed to a longer and somewhat more formal EFT protocol. In-person HMT is a program that lasts 3 days (over a weekend) and usually involves about 10 couples who are supported by therapists trained in EFT. The HMT weekend is standardized so that the program is similar across couples and across time. It starts with an introduction aimed at understanding love and attachment as they are understood in research. Then couples go through the following seven chapters:

  1. Recognizing Demon Dialogues: During this conversation, partners identify the negative cycle they enter when arguing (e.g., one is blaming and the other is emotionally closed). Identifying the root of the problem is the first step that will help them figure out what the other is really trying to say.
  2. Finding the Raw Spots: Next, partners learn to look beyond their immediate impulsive reactions. They discuss the negative thoughts (e.g., “When you say that, I think you are going to leave me”) that emerge during their negative cycle.
  3. Revisiting a Rocky Moment: This conversation is about defusing conflict and building emotional security. Partners analyze a specific conflict situation using what they learned during the previous two conversations.
  4. The Hold Me Tight Conversation: During this conversation partners practice how to be more responsive to each other. They learn how to be more emotionally accessible, more emotionally responsive, and more deeply engaged with each other. They talk about their attachment fears and try to name their needs, in a simple, concrete and brief manner. This is usually considered the main conversation of the weekend. 
  5. Forgiving Injury and Trusting Again: In order to offer forgiveness to each other, partners are then guided to integrate their injuries into the couple’s conversations. To do so, partners reminisce and discuss moments when they felt hurt.
  6. Bonding Through Sex and Touch: Partners discover that emotional connection creates great sex, and that, in turn, a more fulfilled sexual life increases their emotional connection to each other. They discuss what makes them want to have sex, or not, and if they feel secure having sex, or not.
  7. Keeping Your Love Alive: In this last discussion, partners look to the future. After understanding that love is an ongoing process of losing and finding emotional connection, couples are asked to plan everyday rituals to deal with their negative cycle. They summarize what they did during the weekend, talk about their feelings, and discuss how they will implement what they have learned in their daily lives.

These chapters focus on how couples can consciously recognize their attachment dynamics. But what if there is more to attachment than what we can consciously recognize?

Recognizing our inner penguin dialogues: why temperature may matter for partner responsiveness 

In the previous post, we talked a bit about the research John Gottman and his colleagues did on “coregulation”. We have taken a keen interest in this concept, but we depart from a radically different assumption: we try to understand if and how partners’ temperature regulation influences their feeling of safety. To investigate this, I regularly travel to the Netherlands to visit Berry Aarnoudse and Jos van der Loo who organize HMT weekends. During those weekends, I record both partners’ peripheral temperatures.

The immediate objective of this ongoing research is to link the peripheral temperature recordings to participants’ answers to psychological questions about their emotions, their feelings of safety, and their perception of the dynamics in their relationship. We suspect that partners’ individual responses to the questionnaire are related to “signature” variations in peripheral temperature at the couple level.
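To make the idea of couple-level “signatures” concrete, here is a minimal sketch of one very simple coupling index (our own illustrative code, not the lab’s actual analysis pipeline; the function name and the example readings are assumptions):

```python
import numpy as np

def coregulation_index(temp_a, temp_b):
    """Crude couple-level coupling index: the Pearson correlation between
    two partners' peripheral temperature series sampled at the same time
    points. A value near +1 means the partners' temperatures rise and
    fall together."""
    a = np.asarray(temp_a, dtype=float)
    b = np.asarray(temp_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Hypothetical finger-temperature readings (in °C) for one couple
partner_1 = [31.2, 31.8, 32.5, 33.1, 32.9]
partner_2 = [30.0, 30.5, 31.4, 31.9, 31.8]
index = coregulation_index(partner_1, partner_2)
```

In the real project, far richer time-series and machine-learning models are needed; this correlation merely illustrates the kind of quantity that can tie two partners’ recordings together.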

The procedure for this study is as follows. Before I go to the Netherlands, I typically ask couples registered for the HMT whether they are willing to participate in a study investigating how attachment dynamics in couples relate to temperature fluctuations. I ask interested couples to fill in (independently of each other) an online questionnaire that assesses relevant psychological and emotional variables, such as responsiveness to the partner, feelings of security in relationships (in relation to attachment theory), and willingness to be close to the partner. The answers are of course anonymous; part of this means emphasizing that neither partner will ever have access to the other’s responses (which can be a challenge in the era of open science!).

One of our lab members wearing the ISP131001 sensor with the app on the right.

To record peripheral temperature, we rely on a sensor (ISP131001) that former lab members validated in earlier work (Sarda et al., 2020). This small wireless sensor is placed on the tip of a person’s finger and linked via Bluetooth to a smartphone application, developed by our lab, that records and stores the data on our server.[1] The device is very light, which allows wearers to carry out their everyday activities almost as normal (see above).[2] At the start of the weekend, I give each partner a smartphone (to be kept in their pocket) and place one temperature sensor on each person’s finger. At the end of the day, I remove the sensors. So, throughout the entire program, we record people’s temperatures. The data of one couple looks like this:

[1] This mobile application is being developed by the CO-RE Lab and the code is open source. You can find it in our GitHub repository here. Feel free to use it for your own project; please keep us up to date if you develop a new version.

[2] Since people wear the device all day long, we provide disposable gloves so that people can go to the bathroom while wearing it. Also, while our device seems to work pretty well for temperature measurement, the design is not really user-friendly. So we incorporate feedback from users and from partners during the HMT weekends. For example, during previous HMT measurements, people’s feedback allowed us to improve comfort while avoiding, as much as possible, breakage of our (very fragile) material.

The aim of this project is to understand thermoregulation mechanisms in order to help therapists and couples improve receptivity between partners. But we cannot do that with “only” data from couples in therapy. This is why, simultaneously, we are recruiting couples from the general population of Grenoble in France. We know from previous studies conducted by our lab that these couples tend to declare being very satisfied with their relationship. We don’t really know why, but having data on couples from HMT and from the general population will help us identify cues to develop interventions for couples who report lower-than-average relationship quality. Having this variety will allow us to understand, via deep learning, how peripheral temperature variations between partners relate to partners’ scores on psychological variables (their answers to the anonymous questionnaire). Because we are focused on helping therapists and couples, we intend to develop an algorithm that manipulates peripheral temperature, which we hope will improve partners’ responsiveness. In the end, can we add another chapter to the Hold Me Tight weekend? Our data will tell.


Just before the beginning of the lockdown in France, we decided to stop collecting data in order to protect our participants’ health. But social isolation, or being confined together, makes the research even more relevant. The lockdown was recently lifted in France, but working from home remains the norm; couples thus spend more time together at home than ever. Because we believe that the results of this study could help us understand and improve intimate partner relationships, we are planning (adhering to the COVID-19 prevention measures in place) to resume data collection. For every 60 couples that participate, we raffle off some awards (e.g., an iPad). As we now send the sensors via postal service, people all across the European Union can participate. If you are interested in participating in the study, please contact us. If you are a therapist and want to help measure temperature during HMT weekends, please shoot us an email as well.

This blog post was written by Olivier Dujols and Hans IJzerman.

Engaging with EFT as a Social Psychologist

I – Olivier – am a PhD student. My research is in social psychology. However, the end goal of my thesis is to improve how responsive couples are towards each other after they go through relationship therapy. Diving into relationship therapy is a big step for a research-focused social psychologist. To try to improve partner responsiveness, I try to identify the psychological and physiological mechanisms that constitute partner responsiveness. This is part of a series of two blog posts in which I explain my research. In this first blog post, I explain the basic dynamics of romantic relationships from the perspective of EFT and how therapists currently help improve them. In the next blog post, I will discuss how and why we investigate temperature responsiveness to reach our goal.

Attachment dynamics in romantic relationships

People’s attachment orientations are important for how they engage in and maintain their relationships. In early life, people “regulate” their relationships by screaming, crying, and hugging (Bowlby, 1969/82). Such attachment behaviors are ways to increase closeness to the caregiver and can help the infant signal threats (such as cold, any type of risk, or starvation) from which it seeks protection. When the caregiver provides that protection, the caregiver serves as a secure base from which the infant can explore its environment. Mary Ainsworth built on Bowlby’s (1969) work by showing that infants’ attachment styles differ from each other, developing a method (the Strange Situation) that helps psychologists identify how an infant is attached. When they were first described, “attachment styles” were divided into three categories: A (avoidant), B (secure), and C (insecure/ambivalent). A fourth category, D (disorganized), was later added (Main & Solomon, 1986).

These attachment styles transfer, at least to some extent, from relationships with parents to relationships with romantic partners (Fraley, 2019). While for children, caregivers are the main source of security, this is often a romantic partner for adults. In the social psychological literature, we usually measure people’s attachments by asking them about their romantic relationships. The Experiences in Close Relationships (ECR) scale is currently the best validated measure of attachment in adulthood (Fraley, Heffernan, Vicary, & Brumbaugh, 2011) and relies on statements like “I prefer not to show my partner how I feel deep down”, and “I often worry that my partner doesn’t really care for me”. People indicate how well each statement applies to them on a scale ranging from 1 – strongly disagree to 7 – strongly agree.

People’s attachment in their romantic relationships is scored on two continua: from anxious to secure and from avoidant to secure. If you score low on both, you are pretty secure in your romantic relationships (if you want to test how secure you are, you can do so by going here). A person with a high score on anxiety will more frequently try to seek closeness with their partner, but will also often feel like they could lose them at any time. In contrast, a person with a high score on avoidance will less frequently try to seek closeness with their partner, and will prefer not to rely on their partner in stressful or threatening situations. People who are more avoidant are more likely to distance themselves from potential threats and disengage from their emotional reactions. In contrast, people who are more anxious tend to focus on stressful situations, which exacerbates their stress and increases their negative moods and anxious thoughts.

How attachment theory is connected to therapy: Emotionally Focused Therapy.

Such attachment dynamics can certainly play a role in adult romantic relationships. Humans are, after all, social animals; we need connection and safety throughout our entire lives. But despite this necessity, we don’t always know how to connect in our relationships. This is why some couples seek therapy. Emotionally Focused Therapy (EFT) for couples relies on a brief therapy protocol developed by Sue Johnson (2004) that is based on principles from attachment theory combined with a humanistic and systemic approach. EFT focuses on how people experience their love relationships and on repairing adult attachment bonds (Johnson, 2004, 2013). Specifically, the goal of the therapy is to create positive cycles of interaction between partners, so that individuals are able to safely ask for and offer support to their partner. Knowing how to be responsive to one another in turn also facilitates the regulation of interpersonal emotions.

EFT is an empirically supported therapy of 8 to 20 sessions (Wiebe & Johnson, 2016). A meta-analysis shows some empirical support for EFT couples therapy (Johnson et al., 2006). Studies have shown, for example, that the EFT protocol can be effective for stress management in couples (Lebow et al., 2012) and for increasing couple satisfaction (Denton et al., 2000).[1]

[1] EFT is evidence-based, and we like the idea of EFT: we believe in the principles of the theory and the therapy. But the fact that the therapy is evidence-based does not mean we are not critical. The replication crisis in psychology has taught us that the results of empirical studies are not always replicable (that is, found again when the conditions under which the first evidence was found are reproduced identically; see, e.g., Klein et al., 2018). This is partly due to small sample sizes in our studies, publication bias (an overrepresentation of positive, significant results), and a lack of pre-registration. It would surprise us if EFT had escaped the crisis, because doing high-quality research is incredibly hard! In our lab, we consider only research that has been done via “Registered Reports” to provide stronger evidence, and we are not yet aware of any EFT research that has been conducted via the Registered Report route. We therefore think that studies showing positive effects of EFT should be replicated before being convincing, at least to us. Effectiveness findings should be replicated so as to provide stronger evidence (and, ideally, they should follow these Evidence Readiness Levels; we are not sure yet whether the EFT work is at ERL3, as we have not done a systematic review of EFT).

EFT follows three basic stages, in which the couple engages in conversations. During Stage 1, “De-escalation”, both partners mindfully observe their pattern of interaction during conversations. Sue Johnson calls the negative emotional cycle partners are caught in “the dance”. From an attachment point of view, they may discover that the negative cycle creates feelings of abandonment and rejection. During that phase, the purpose is to discover that these feelings are a common enemy and that they can help each other step out of the cycle. During Stage 2 (“Restructuring the bond”), arguably the most powerful conversations take place. These conversations are also called the “Hold Me Tight conversations”. Partners discover and share their attachment fears in ways that allow the other to offer reassurance and safety. Then partners express their need to create deeper emotional responsiveness. When they express that need, they are ready to move to Stage 3, the “Consolidation of treatment gains”. There, the couple examines the changes they have made and how they have fixed the negative cycle. The therapist supports them in looking to the future and helps them reflect on how they achieved greater responsiveness.

During the therapy, and during each session, the first step for an EFT therapist is to help the couple focus on the present process by asking “What is happening right now?”. By letting the partners focus on the present, the therapist brings the emotions of both partners together so that they focus on their interaction. The second step for the therapist is to help deepen the emotions, for example by asking what happens when they see tears on their partner’s face. The focus on connection and on the partner’s needs can help create a new interaction that is really based on the attachment dynamic. The next step is to have one partner express what happens (e.g., “I don’t trust him/her”) when faced with the other. This helps the couple process the new step in the interaction. The therapist can then help identify what the expression of this sentiment does to the other (e.g., by asking, “What did it feel like to tell him/her that?”). The inability to express one’s primary emotions often underlies demand-withdraw dynamics in relationships. Acknowledging the fear of abandonment helps to break the negative cycle. At the end of each session, the therapist points out how well the couple went through this process. The therapist tries to repeat and recreate these dynamics during the session at various levels of intensity.

If you are interested in EFT, you can find training tapes, research on EFT, and where to do the basic training on their website.

Conclusion: patterns of responsiveness in mind, but also in body.

The EFT protocol helps us understand how important conversations in relationships are for creating deeper emotional interactions. But telling your partner that you fear abandonment is not the only part of partner responsiveness. As it happens, John Gottman and his colleagues have spent several years understanding how people are also physiologically tied to each other. He found that couples who regulate each other physiologically are more likely to stay together.[2]

[2] Note, again, that this is from the Era Pre-Replication Crisis (19 years EPRC), so we are unsure about the strength of the evidence.

Where Gottman often focused on heartbeat, we are focusing on temperature. This may not be so intuitive for humans. A pretty easy way to understand it is to think of penguins. Penguins, when they get cold, huddle with each other (see a timelapse video here) to drastically increase the temperature inside the circle. Even if it is -20°C or below in the environment, inside the huddling circle the temperature can rise up to 37.5°C (Ancel et al., 2015).

We (the Co-Re Lab) think humans do the same thing, although it is true that in modern times we often regulate temperature without each other. And yet, questions that concern people’s desire to warm up with one another correlate reliably with whether people want to share their emotions with their partner (or not; Vergara et al., 2019). And we suspect such regulation of basic needs is at the core of partner responsiveness. In the next blog post, we will tell you how we investigate this during relationship therapy.

This blog post was written by Olivier Dujols and Hans IJzerman

Posting a course via GitHub to your own blog

With the COVID-19 crisis, our lives and habits have completely changed. At least for the foreseeable future, it will not be feasible to attend courses in person. As a result, many people are starting to move their courses online. If you want to publish your courses online, there are several ways to do it. In this post, we will show you a pretty simple solution via GitHub.

Step 1: Create a GitHub repository

In order to have your manual hosted by GitHub, you need to do two things: 1) create a GitHub account and 2) create a repository (a space of “memory” where your files will be stored). If you are new to GitHub, you can find more detailed instructions on how to work with it here. As an example from a course I (Alessandro) posted (a translation of an R manual), you can view this repository and this post (warning: it is in French). Some more detailed instructions on how to work with GitHub can be found in a presentation I gave in our social cognition group (“the axe”); you can download it here.

Step 2: Convert your Google Doc to a Markdown file

In order to be displayed by GitHub, your files should be in Markdown format (if you use any other format, the preview of your files will not be available). Let’s assume that you are working with a Google Doc that you want to convert to Markdown format (I can recommend this, as Google Docs has some built-in scripting solutions).

You will first do the conversion by following these instructions:

  • Open a Google Doc from your Google drive that you want to convert into a Markdown file.
  • Click on Tools → Script Editor (you can only do this if you have the rights to modify the document).

In the window you will find this line of code:
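For reference, a freshly created Apps Script project typically contains just an empty placeholder function along these lines (approximate; the exact template may vary):

```javascript
// Default placeholder in a new Google Apps Script editor
function myFunction() {
}
```

This placeholder is what you delete and replace with the conversion script in the next step.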

Delete it, and copy and paste this code. Credit for this script goes to Mangini.

  • Save the new script.
  • Once you have saved the script, there will be a dropdown menu with the title “myFunction” (as you can see in the image below).
  • From that dropdown, select the function “ConvertToMarkDown”.
  • Click the “Run” button (the first time you do this, you will need to grant authorization).
  • The Google Doc has now been converted to “.md” (Markdown) format. It will be sent automatically to the email address associated with your Google account (with all the images from your Google Doc attached).

Step 3: Fixing the converted file

Conversion works well in some cases and less well in others. For example, some tables are converted correctly, others less so. So, check through your new file before you post it. In any case, if you have images, you will unfortunately have to add them by hand. There are some valuable tools for adding images and fixing the code. I used two: Atom, which is a text editor, and Dillinger, which allows you to preview files in .md format. To add images, use the following line of code:
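The basic Markdown image syntax looks like this (a sketch; the alt text is up to you, and image_0.png stands for whatever file name the conversion produced):

```markdown
![image alt text](image_0.png)
```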

This way, you add the image called image_0.png to the Markdown file. It is good practice to organize images in a specific folder, so as not to leave them spread out in your GitHub repository. To refer to images in a specific folder, you can use the following line of code, which must be added in the Markdown file:
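With such a folder layout, the line becomes the following (illustrative; the bullet points below unpack each part of the path):

```markdown
![image alt text](images/10/image_0.png)
```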

To unpack this: 

  • image_0.png refers to the name of the picture
  • “10” is the subfolder of the folder named “images”.

It’s good practice to name the subfolder with a numeric value corresponding to the number of the chapter (i.e., 1 for chapter one, 2 for chapter two, et cetera). Thus, with this line of code, you display in the Markdown file the image image_0.png, located in the folder “10”, which is located in the folder named “images”. The example outlined above will look like this in the GitHub repository:

This is the line of code for the course I posted:

And this is what was displayed for that part of the course:

Sometimes during the conversion, some images are lost; that means you will not receive the images from your Google Doc via email. In that case, you have to save the images yourself from the original document and put them in the folder you created. I suggest this documentation to learn how to better work with files that are in Markdown format.

Step 4: Push the converted file to GitHub

  • At this point, you have the files converted into Markdown and a folder with the images that will be displayed in each Markdown file. What you have to do now is go to the GitHub repository and push the Markdown files, together with the folder containing all your images.
  • You can upload the files directly from the GitHub website or work in your local repository. When that’s done, push the changes to the online repository.
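If you work in a local repository, the stage-commit-push cycle can be sketched with standard Git commands. The snippet below simulates the round trip against a throwaway local “origin” (a stand-in for your GitHub repository), so every name and path in it is illustrative:

```shell
set -e
tmp=$(mktemp -d)

# Create a bare repository as a stand-in for the repository on GitHub
git init --bare "$tmp/origin.git"

# Clone it and add a converted chapter plus its images folder
git clone "$tmp/origin.git" "$tmp/course" 2>/dev/null
cd "$tmp/course"
git config user.name "Course Author"             # illustrative identity
git config user.email "author@example.com"
mkdir -p images/10
echo '![image alt text](images/10/image_0.png)' > chapter10.md
: > images/10/image_0.png                        # placeholder image file

# Stage, commit, and push the Markdown file together with its images
git add chapter10.md images/
git commit -q -m "Add chapter 10 with images"
git push -q origin HEAD                          # with GitHub as origin, this publishes the files
```

With a real GitHub remote, you would skip the throwaway origin and clone the HTTPS or SSH URL of your repository instead; the add, commit, and push steps are identical.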

Step 5: Embed the GitHub page into your blog 

  • Now that you have updated your GitHub page with the files in Markdown, they will be rendered like this. What you may want to do next is push them directly to WordPress (see, e.g., this post). It can work, but while the sentences and links are copied from GitHub, images are not. I have made several attempts, but I have not been able to understand why the images are not copied.
  • The best solution for now is to embed the chapters into a sort of wiki on our blog, with links to the various chapters that are on GitHub. That’s what I did here.
  • There may be a better, more elegant solution, so that the course itself is pushed from GitHub to WordPress, but I have not been able to discover it yet. If you know of an elegant solution, please let us know via the comments on this blog. 

If you follow these steps, you will be able to turn your own course into an online one with just Google Docs and a relatively simple workflow. You can embed your GitHub page into your blog, which makes your course available on your personal website. This can help you make your courses more easily available during these times and, in the long run, provide greater visibility for your work.

This post was written by Alessandro Sparacio & Hans IJzerman

RStudio Manual

Here you will find the chapters of the manual for learning to use R and RStudio. The manual was written by Lisa DeBruine and translated into French by Fabrice Gabarrot, Brice Beffara-Bret, Mae Braud, Marie Delacre, Zoé Lackner, Ladislas Nalborczyk, and Cédric Batailler.

Lisa DeBruine

Lisa is a professor at the Institute of Neuroscience and Psychology of the University of Glasgow. Her empirical research focuses on kinship and on how the social perception of morphology affects social behavior. More specifically, she is interested in how humans use facial resemblance to tell who their kin are, and in how they respond to cues of kinship under different circumstances.

Why any researcher should start their career with a meta-analysis

If your ambition is to become a scientist and an expert in a specific research area, one path is more efficient than many others. The path that we think will make you an expert quickest is writing a meta-analysis. This path is very different from one involving primary research, but it will allow you to answer many more questions than you could conceivably answer with a single experiment. We will provide three reasons why you should take the meta-analysis path. Yet even if we do not convince you, we hope the lessons one of us (Alessandro) has learned from his explorations in meta-analysis thus far will still be of use to you.

1) Meta-analyses allow you to have a broad view of a phenomenon of interest

Have you ever climbed to the top of a tower and looked down? The view is much more complete from there; it gives you an overview that you wouldn’t have had from the ground. Doing an experimental study is kind of like looking from the ground: only the result of your own experiment will be visible to your eyes. Conducting a meta-analysis instead allows you to see other people’s experiments and approaches at once. Say, for example, that you are interested in studying how meditation can help reduce stress levels. If you conduct a randomized controlled trial, you will only know about that specific treatment and only in one particular population of participants. By conducting a meta-analysis, instead, you hopefully get insight into whether 1) meditation is more effective in individuals with certain personality traits and 2) the effects of meditation extend to different populations, while you may also observe 3) when meditation is effective in reducing stress levels and when effects are null or small.

Image credit: Timothy Chan

There is one observation from our own path that we can already share with you. As psychologists, we should be interested in how different people respond to different manipulations. Does people’s anxiety in their attachments, for example, matter for whether or not they benefit from meditation? Or is biofeedback more effective for younger or for older people? The fact of the matter is that psychologists often neglect to report detailed information about the populations they study. One of the recommendations that will surely make it into the meta-analysis that Alessandro is leading is that researchers need to keep detailed protocols (like we have recommended here). That way, meta-analysts can start using this information across many studies.

2) Meta-analyses allow you to have information about the health of the literature of interest

A meta-analysis can teach important lessons even to those who have no intention of taking this path. It is no secret that many sciences have been hit by a replication crisis, as many replication studies have failed to obtain the same results as the original studies they sought to replicate (see, for psychology, Klein et al., 2018; Maxwell, Lau, & Howard, 2015; Open Science Collaboration, 2015). One likely reason for the replication crisis is publication bias (see, e.g., Sutton, Duval, Tweedie, Abrams, & Jones, 2000). A primary goal of a meta-analysis is thus to find out how bad the problem actually is: how severe publication bias in that literature is.
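To give a flavor of how such a bias check works, here is a minimal sketch of an Egger-style regression test written with plain numpy (our own illustrative implementation; real meta-analyses use dedicated, better-validated tooling):

```python
import numpy as np

def egger_regression(effects, std_errors):
    """Egger-style test for funnel-plot asymmetry: regress the
    standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero suggests small-study effects,
    one common symptom of publication bias."""
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    z = effects / std_errors
    precision = 1.0 / std_errors
    # Ordinary least squares fit: z = intercept + slope * precision
    design = np.column_stack([np.ones_like(precision), precision])
    (intercept, slope), *_ = np.linalg.lstsq(design, z, rcond=None)
    return intercept, slope
```

The intuition: in an unbiased literature, small studies (large standard errors) scatter symmetrically around the true effect, so the intercept stays near zero; selective publication of significant small studies pushes it away from zero.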

In some fields, such as medicine, knowing the real effectiveness of a drug directly impacts people’s lives. However, because of publication bias, the risk-benefit ratio of particular types of drugs is not easy to estimate. As but one example, Turner et al. (2008) analyzed the effects of 12 antidepressant agents on over 12 thousand people, both in terms of the proportion of positive studies and the effect sizes associated with the use of these drugs. According to the published literature, 94% of the trials were positive. Yet after using techniques to account for publication bias, Turner et al. (2008) found that the percentage dropped to 51% and that the effect size decreased to 32% of its original value.

Overestimating the effect of a drug has direct consequences on the choice of certain therapies, which in turn impacts the health of a population (and we feel those consequences even more so now, in the midst of a health crisis). A meta-analytic approach can help signal that a literature suffers from publication bias. Some think that meta-analysis can also correct effect size estimates for publication bias. The jury may still be out on this, as others say that even “meta-analysis is fucked”. Even if meta-analyses cannot provide accurate effect sizes, they can provide a snapshot of the health of a particular research field (e.g., by pointing to how many results are positive and what researchers record). Based on this health report, solutions (like Registered Reports) can be recommended to researchers in that field. It may well be that if meta-analysts do not do their work and provide recommendations, meta-analyses remain fucked for a long time to come.
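To make the publication-bias diagnostics we mention above a little more concrete: one widely used check is Egger’s regression test for funnel-plot asymmetry, which flags literatures in which small, noisy studies systematically report larger effects (as happens when null results go unpublished). Below is a minimal Python sketch of the idea, written by us purely for illustration; the study numbers are made up, and real meta-analysts would use dedicated software (e.g., the metafor package in R) rather than this toy code.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled estimate and its SE."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

def egger_intercept(effects, variances):
    """Egger-style regression: standardized effect (e / se) on precision (1 / se).
    An intercept far from zero flags funnel-plot asymmetry, one symptom
    (though not proof) of publication bias."""
    ses = [math.sqrt(v) for v in variances]
    ys = [e / s for e, s in zip(effects, ses)]
    xs = [1.0 / s for s in ses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - slope * mx

# Symmetric toy literature: every study finds the same effect.
print(pooled_effect([0.3, 0.3, 0.3], [0.04, 0.09, 0.25]))    # estimate = 0.3
print(egger_intercept([0.3, 0.3, 0.3], [0.04, 0.09, 0.25]))  # ~0: no asymmetry

# Biased toy literature: the noisiest (smallest) studies report the
# largest effects, as when their null results go unpublished.
print(egger_intercept([0.2, 0.4, 0.6], [0.04, 0.09, 0.25]))  # > 0: asymmetry
```

The intercept near zero in the first case and clearly positive in the second is exactly the pattern a funnel plot would show visually: a symmetric versus a lopsided cloud of studies around the pooled estimate.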

3) Meta-analysis allows you to acquire skills important for your future career as a scientist 

This recommendation is primarily for the starting PhD student. Stephen King famously said: “If you don’t have time to read, you don’t have the time (or the tools) to write. Simple as that”, and nothing could be truer. Reading numerous articles is key to quickly becoming a more efficient and faster writer. When I (Alessandro) started my meta-analysis, I may have been shell-shocked by the sheer quantity of what I had to read. But not only did my vocabulary improve quickly; I also encountered many different writing styles, which allowed me to integrate expert writers’ styles into my own. What also helps a beginning PhD student is that conducting a meta-analysis teaches the importance of good reporting practices and the limitations of a single study. We think, for example, that the psychological literature vastly underreports important information. We will try to contribute to change by making protocols available for the researchers in our own literature, and I will try my best not to repeat the same errors.

Final considerations

We can certainly recommend walking the meta-analysis path. What we have learned so far is that scientists underreport and need to keep more detailed protocols to preserve good records of their work. In addition – and we are stating the obvious here – meta-analyses confirm that publication bias is a considerable problem. Finally, the exercise of doing a meta-analysis is vital for any researcher: it improves one’s writing and builds the body of knowledge required for running solid experimental studies.

The path to becoming a better scientist is arduous. Conducting a careful meta-analysis is definitely one of the stages that can lead you to the top. We hope to have convinced you that, when you start your research, a meta-analysis is a good path to walk to ensure that you become a careful observer of the phenomena you study.

This blog post was written by Alessandro Sparacio and Hans IJzerman.