C Research and Education Infrastructure
C-2 What about new ways to release research findings and "open science"?

Setting Research(ers) Free: Breaking the “Publish or Perish” Spell

At one time, the existence of journals helped advance research. Now, with famous journals and impact factors being over-emphasized, their presence has led to distortions. How can research be swiftly published while ensuring quality? We asked Rebecca Lawrence this and other questions. By separating research from journals, she has built a new way to publish and evaluate research.

* This text has not yet been confirmed by the speaker.

Rebecca Lawrence, Managing Director, F1000

Rebecca Lawrence is the managing director of F1000, a new open science publication platform. She has also helped launch other platforms, such as Wellcome Open Research and Gates Open Research. Lawrence is a member of the European Open Science Policy Platform (OSPP), a group of experts that presents recommendations regarding the development and implementation of open science policy in Europe; she leads its work on next-generation indicators and its integrated advice. She is also a member of the US National Academies (NASEM) Committee on Advanced and Automated Workflows and has served as co-chair of data and peer review working groups at the Research Data Alliance (RDA) and ORCID. In addition, she serves on the advisory boards of FAIRsharing (a data policy and standards initiative) and DORA (the San Francisco Declaration on Research Assessment).

At the publishing platform F1000,(1) you and your colleagues focus on freeing research and researchers from the “publish or perish” spell. You also aim to separate the publishing of research from its assessment. I wonder, though, how possible this really is. What is the current situation in the UK and Europe?

Lawrence: I think there’s a growing recognition that “publish or perish” is a really significant problem and that it causes so many issues for researchers and research. It still skews how researchers behave; they know they need to get a specific type of publication in a certain kind of venue for their evaluations and funding. People in the UK are also aware of the importance of this problem. In Europe overall, I think an increasing number of both institutions and countries are starting to experiment with alternative ways of evaluating research. For example, in Finland, people are very keen to assess research and researchers differently.(2)

I felt that Finland’s recommendations were rational and quite interesting. How were people able to come up with them?

Lawrence: It’s probably that they had the right people in the right positions. Many of the people I know working in policy, libraries, and other key institutions are very much for open science and open research. They’re changing the system. You need all stakeholders to move together—this is a collective action problem. So, I think having the right people in the right places is essential.

This also seems to be the case in the Netherlands. You have Karel Luyben, who heads up the European Open Science Cloud and is the first Open Science coordinator at a national level in Europe. Now Finland is going to copy that and have their own Open Science coordinator.

There’s a meeting in September 2019 that is bringing the EOSC member states together to encourage national-level shifts to open science and reconcile interests between different stakeholders.

F1000 Research: Swift Publishing While Maintaining Quality

Plan S(3) requires research to be open access. Am I correct in thinking that the movements toward reconsidering the evaluation of research in the Netherlands, Finland, and Europe as a whole are also a driving force behind it?

Lawrence: I think there’s some overlap, but they’re separate. Part of Plan S is changing the journal publishing system to open access, but we argue we need to move away from journals. This is because readers don’t need them anymore in the digital age—you can easily search for and find articles online without relying on associated evaluation metrics like journal brand or impact factor.

As an author, since evaluations are based on citations and the like, you currently need journals for a high score. However, what you really need is a way to communicate your findings straight away, without somebody stopping you and saying that they’re not interesting. And then, after it’s published, you separately do the evaluation and the curation. With this method, evaluation doesn’t influence or hold up the communication of the research.

F1000 Research combines the benefits of preprints—that the researcher can communicate what they want and when they want—and the benefits of a journal, namely, verification through peer review. It also offers other important journal services, like XML-based bibliographic indexing and archiving. These don’t need to be done within the container of a journal, so these tasks are separated out.

That’s quite interesting. What are your thoughts regarding things besides journals, such as books?

Lawrence: F1000 isn’t only taking aim at journals that publish articles. We’ve also been looking at other publications that we call “documents.” They are often from the social sciences and humanities and don’t warrant peer review. Examples include consensus documents, white papers, technical reports—those kinds of things. Many are hidden on websites and not citable. Many of our partners, such as the Gates Foundation and WHO, are actually very keen for those outputs to be citable and trackable via DOIs and the like.

We’re exploring books and monographs. For example, if you have a system that allows versioning, you could put a book in a space on the platform and update each chapter when it makes sense for that chapter — essentially releasing “living books,” as it were, online.

Is it possible for people outside academia to join F1000 Research?

Lawrence: Yes. Because of our model, we very deliberately don’t judge the scientific quality before we publish. You can imagine that people might end up publishing all sorts of things, though. To make sure we don’t end up with anybody saying anything, F1000 Research’s standard criterion is that at least one author is a researcher at a recognized institution. Alternatively, somebody in such a position has to publicly declare, “Yes, this is a scientific article. It is an article that has some research.” In the social sciences and humanities, many people aren’t affiliated with an institution. This criterion allows them to publish if they can get someone who is affiliated to help out.

That’s great that people can publish without affiliation getting in the way.

Emphasizing the Content, Not the Medium

Recent developments surrounding open science are also affecting REF.(4)

Lawrence: There are signs of change. Even the last REF was actually very clear that they weren’t using journal names as indicators. But, in the end, people were too nervous to risk not submitting their Nature papers and the like for assessment, so everybody did. Perhaps all that can be done to change this is to keep repeating the message. There are examples of researchers being successful without fretting over submitting papers to famous journals. I think it’d be excellent to create exemplars from those cases and highlight them to others. I’m hoping that people will gain more confidence and realize that the current approach is unproductive.

For a long time, the Wellcome Trust, a research funding foundation, had a policy saying that they’re not interested in the venue of publication but only in the output itself. We found this out when working with them. Wellcome actually sends staff to the review panels that decide on the grants to make this policy very clear, but even so, panel members don’t change their approach.

Notably, those doing the reviews are also researchers. Publication venues have been used to judge them in the past, and they tend to judge others the same way. Earlier-career researchers are often very keen to change this. The more senior researchers? Not so much. I think we also need to work on educating them.

REF is supposed to be peer review-based and allow for different forms of submission, not only articles. But what you’re saying is that researchers still try to publish articles in famous journals in the end.

Lawrence: Yes. I’m hoping that at the next REF people will feel more confident in the shift, but we will see. Obviously, it’s a slow process because there are a lot of years in between each REF.

Open Research Central’s Undertakings and the Future of Research Evaluations

I heard that F1000 first worked on article recommendations. Could you talk about this?

Lawrence: When we first launched in 2000, the idea was to create a virtual faculty of members who, while reading the literature, identify and recommend articles that they think are really interesting and important. It was always designed to provide a sort of “qualitative peer review”-type metric that uses experts but widely covers various fields’ literature. The idea was that it would replace existing research evaluation methods. It then evolved into an open peer review platform used by many institutions and scholars.

As you know, with Plan S requiring open access, scholarly societies are getting very worried about where their income will come from. Societies are the experts in their particular fields, so they are in a perfect position to do a lot of this post-publication curation. This would provide them with an alternative revenue source and also be an incredibly valuable service.

If F1000 becomes a serious force in the market, what will be the response of publishing giants like Elsevier and all those controlling or benefiting from journal rankings?

Lawrence: What we’re trying to do is shift away from journals, but not necessarily away from publishers. All we’re saying is that we think publishers would be better as service providers rather than gatekeepers. This is what’s behind Open Research Central. This open research publishing system’s core principles are open access, FAIR data, the ability to publish immediately, and transparent peer review. Open Research Central is not for profit, and it doesn’t belong to F1000. We just launched it—somebody had to get it started. We are, though, setting up an independent board, which will contain representatives from across the scholarly system.

When you access Open Research Central as an author, the idea is that it would ask, “Who are you funded by? Where are you based? What’s your subject?” Based on your answers, it would then say, “Okay, these are the different providers (many of which will be publishers) that can provide a service to enable you to publish on Open Research Central.” You would then choose one. Kind of like an app store, publishers would apply to be included, which ensures that they’re Open Research Central compliant. Then, they would compete with each other for authors and articles based on their service, their price, and everything else. Maybe someday, some groups will just run the peer review, and other groups will support researchers with the bibliographic data. So it would be a competitive market. You could imagine splitting it up and having different groups specialize in different areas. But there’s no reason why Elsevier, PLOS, and other major publishers can’t do exactly the same thing.

So all services currently linked with journal publishing would be decoupled from it.

Lawrence: Yes. I think, you know, the key to this is that all the primary stakeholders—representatives of major funders, major institutions, major publishers, and so on—come together and agree that these core principles of open research publishing are ones researchers should abide by. I think that’s probably the easier way to shift the research evaluation and publication system. Publishers ultimately will follow the demands of the scholarly community. The number of citations won’t matter because everything will be published on the same platform. So I think people will naturally come in with fresh new ideas and indicators.

Changes in this direction are already happening in Europe.

Lawrence: That’s right. We’re already seeing some funders starting to change. Take the example of public health emergencies. It’s quite easy to argue that related findings should be put out immediately. Even negative data must be quickly accessible and in a specific place. The peer review also needs to be transparent, because speed is of the essence. So, I think that way of thinking will also start to push change in the system.

This is easier to understand if you think about what happened with open access. It started slowly, and then some of the funders, like the NIH and Wellcome, actually said, “If you want our money, you have to publish open access.” That really pushed things on. You could imagine something similar happening here, which would really flip the research evaluation system.

I can imagine. I hope it happens.

The Understanding of Researchers: Indispensable for Open Science

Could you talk about the Open Science Policy Platform?

Lawrence: The Open Science Policy Platform, which I’m part of, is a group of representatives from across Europe. We offer recommendations to the European Commission about adjusting policy to promote open science in Europe. I think it is quite unique in that it brings together the different stakeholders in one place. You have the universities, funders, publishers, societies—you have everybody together. It’s been challenging at times because, of course, everybody has their own very different perspectives and very different issues. But we try to arrive at a consensus.

Like Europe, Asian countries are also diverse. Could you share your opinion about open science in Asia?

Lawrence: China obviously will have a huge influence. If they shift, I think it will change a lot very quickly. And again, you see India starting to shift in this kind of direction. So I really hope this will continue.

Actually, there’s a Beijing meeting of the International Science Council’s Committee on Data (CODATA) coming up in September 2019. I’ve assembled a panel from some of the main conferences: from Europe, the US, China, Asia, Africa. We’ll have a panel discussion about how to bring out more global synergies between open science policies.

The challenge when coordinating policy for open science synergy is that you need somebody neutral. Without one, it feels like one party is leading the others, which will never be comfortable for the other parties. So I think you need someone neutral who makes everybody feel like they’re on a level playing field. I also understand that the UN is getting quite interested in this shift to open science, so they might be able to help out with this.

I think that when promoting open science, it’s crucial that people feel comfortable doing research. What are your thoughts?

Lawrence: Yes, of course. Also, I’d like researchers to think for themselves somewhat about how open science influences their own research. They’re often—understandably—very focused on their research. If you talk to them about open science, people say, “Just let me do my research.” However, it’ll be hard to do this without the understanding of researchers. So I think we all have an important role: explaining and proving that this is beneficial. Right now, we actually don’t have a lot of studies to show this. If we keep explaining and demonstrating the benefits, researchers will see that this is better for them, and then they’ll naturally do it.

When promoting open science, we actually encourage a bottom-up approach. Plan S was all top-down, and because of that, researchers never really engaged in the process.

Kyoto University researchers don’t like a top-down style.

Lawrence: Most researchers don’t. (Laughs.) It’s necessary to first explain the effectiveness and then create a mechanism that involves them in the process of shifting or change.

(1) For details regarding F1000, see F1000’s website and the report on the KURA event held at the time of this interview.

(2) For details regarding the content of Lawrence’s talk at Kyoto University, in which she covers new efforts in Finland, click here.

(3) Plan S is an endeavor promoting a shift toward open access publishing. It was proposed by Science Europe, the European Commission, the European Research Council, and other European national research funding organizations. For details, see its website.

(4) REF (the Research Excellence Framework) is the UK system that assesses the quality of universities’ research and uses the results to weight the allocation of their research operations grants.