Journalology #8: Gatekeeping


Hello fellow journalologists,

You’re receiving this email because you signed up to ‘Journalology’, a newsletter about the art and craft of editing scholarly journals.

Every writer wants a wide readership, so please forward this email if you think others may benefit from it. I'm especially interested in trying to reach academics who serve on editorial boards. They can sign up here:

https://journalology.ck.page

Gatekeeping

At the end of last week eLife announced that it's changing its publishing model. My colleague, Michael Clarke, has drafted an excellent analysis of the announcement, which will be sent to subscribers of The Brief next week (so sign up to The Brief now to avoid disappointment).

There’s a lot to say about the announcement; in this newsletter I want to focus on the editorial aspects, as the model fundamentally changes the role of an editor from a gatekeeper to a grader.

For those of you who are not familiar with the eLife announcement, the new process is relatively simple. The editors of eLife will continue to send around 30% of the papers that they receive (as preprints) out for review, but won’t make an accept or reject decision. After receiving the peer review comments, it will be up to the authors to decide whether to incorporate any changes and whether to make a 'version of record' of the paper. All authors whose papers are selected for review will pay $2000 as a peer review charge (rather than the $3000 APC that they pay now). The full workflow is summarised in an infographic here.

Mike Eisen, the Editor-in-Chief of eLife, claims that there are no gatekeepers in this new system, which isn’t entirely true since the editors won’t have the capacity to peer review every paper and so will desk reject 70% of the preprints that are “submitted” to them. It’s worth noting that the editors won't select the ‘best’ 30% to send out for review.

The eLife editors will write an eLife Assessment on each peer-reviewed paper that will use a controlled vocabulary to assess two things:

The significance of the findings:

  • Landmark: findings with profound implications that are expected to have widespread influence
  • Fundamental: findings that substantially advance our understanding of major research questions
  • Important: findings that have theoretical or practical implications beyond a single subfield
  • Valuable: findings that have theoretical or practical implications for a subfield
  • Useful: findings that have focused importance and scope

The strength of support:

  • Exceptional: exemplary use of existing approaches that establish new standards for a field
  • Compelling: evidence that features methods, data and analyses more rigorous than the current state-of-the-art
  • Convincing: appropriate and validated methodology in line with current state-of-the-art
  • Solid: methods, data and analyses broadly support the claims with only minor weaknesses
  • Incomplete: main claims are only partially supported
  • Inadequate: methods, data and analyses do not support the primary claims

Here’s an example of an eLife Assessment:

This landmark study provides a comprehensive morphological and molecular description of the majority of documented neuronal cell types in the mouse cortex. This provides an extraordinary resource that will be invaluable to the whole neuroscience community. The methodology for combining expansion microscopy with spatially resolved transcriptomics across tissues is exceptional and establishes a new standard in the field.

The most unusual aspect of the eLife process is that authors will be able to “publish” (i.e. make a version of record of) a paper even if the referees say that it is fundamentally flawed or has weak methodology. Or put another way, eLife will provide a home for every paper that's peer reviewed, with the editors assigning a grade to each paper that’s published. I can’t help but wonder whether the eLife website will allow readers to filter the papers so that they can see those that are graded “landmark AND exceptional”, for example.
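If eLife did offer such filtering, it would amount to a simple query over the two controlled vocabularies. Here's a minimal sketch of the idea; the data structure, function name, and example records are my assumptions for illustration, not anything eLife has published:

```python
# The two controlled vocabularies from the eLife announcement.
SIGNIFICANCE = ["landmark", "fundamental", "important", "valuable", "useful"]
STRENGTH = ["exceptional", "compelling", "convincing",
            "solid", "incomplete", "inadequate"]

def filter_papers(papers, significance=None, strength=None):
    """Return papers whose assessment matches BOTH given terms (AND logic).

    A term of None means 'any value' on that axis.
    """
    return [
        p for p in papers
        if (significance is None or p["significance"] == significance)
        and (strength is None or p["strength"] == strength)
    ]

# Hypothetical assessed papers (titles invented for illustration).
papers = [
    {"title": "Cortical cell-type atlas",
     "significance": "landmark", "strength": "exceptional"},
    {"title": "Subfield replication study",
     "significance": "valuable", "strength": "solid"},
]

# The "landmark AND exceptional" query mentioned above.
top_papers = filter_papers(papers,
                           significance="landmark",
                           strength="exceptional")
```

The point of the sketch is that because both axes use a small fixed vocabulary, a reader-facing filter is trivial to build; the hard part is the editorial judgement that assigns the grades in the first place.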

Broadly speaking, I can see merit in eLife's approach and I'm glad to see the experiment being done. After all, the current journal ecosystem is essentially a grading process that is incredibly inefficient. Anyone who tries to improve the system deserves credit, especially since it is not without risk to the journal’s brand and the livelihoods of the people working for the publisher.

I'd argue that the eLife model isn’t that different from the Guided Open Access experiment that some of the Nature journals did recently (see this Nature Physics editorial for a summary). The Guided OA Editorial Assessment Report is a thing of beauty (yes, I’m biased; here’s a good example of an EAR) and also provides editorial feedback on the reliability and impact of the work.

eLife’s new system is indicative of a trend in scholarly publishing, which centres around authors’ and funders’ desire to be in control of the act of publication. For example, the Wellcome Open Research (WOR) website claims that a key benefit for researchers is that WOR “enables authors, not editors, to decide what they wish to publish”.

Back in 2019 Bodo Stern and Erin O’Shea from Howard Hughes Medical Institute (HHMI; a funder of eLife) wrote a thought-provoking paper in PLOS Biology entitled A proposal for the future of scientific publishing in the life sciences.

The independence of scientists is at the heart of the research enterprise. Indeed, academic scientists lead the design and the execution of their own research plans after obtaining a principal investigator position and funding. This concept that scientists are in charge of the research process should be extended to the final step of the research workflow—the dissemination of the primary research results. Today, journal editors decide when primary research is published. Shifting the publishing decision from editors to authors would fundamentally change the roles and motivations of authors, peer reviewers, and editors and open the door to publishing and evaluation practices that, we believe, are right for the digital age.

The move from subscription to open access has undoubtedly shifted the focus from readers to authors. That has some advantages, but there are also significant drawbacks especially if it means that researchers (with their ‘reader’ hat on) struggle to identify reliable information from the fire hose that’s directed at them. After all, editors have a vital role as sewage workers.

Briefly quoted

CSE Recommendations for Standards for Critiques/Responses to Published Articles

In the last 20 years, journals have moved toward an online version of correspondence that can be submitted more rapidly yet still screened before being posted. Some journals will consider those online comments for publication, whereas other journals may consider the comments to be a forum for engaging in public discussion. More recently, comments about journal articles have moved to social media platforms, such as Twitter, allowing for unscreened comments to be posted to a public audience.

Source: Science Editor


When editors confuse direct criticism with being impolite, science loses

However, there is nothing impolite or unfair about saying a particular analysis was incorrect, wrong, or invalid and therefore that the conclusions stemming from it are either invalid or unsubstantiated. Editors’ struggle to differentiate impoliteness from directness could partly be related to a notion we call “the second demarcation problem”: Some editors have a difficult time (or are unwilling) to distinguish unequivocal errors from matters of subjective scientific opinion. The former must be corrected, whereas the latter merit scientific debate.

Source: Retraction Watch


Elucidating the effects of peer review: a living synthesis of studies on manuscript changes

Current results indicate that submitted or pre-printed manuscript versions and their peer-reviewed journal version are very similar, with main (analysis) methods and main findings rarely changing. Quantification of these results is pending. Large differences between studies, type of changes, and methods with which they were measured indicate greater need for collaboration in the peer-review field and for core outcome measures for manuscript version changes.

Source: PUBMET


COVID research is free to access — but for how long?

Publishers have varying views about their longer-term strategy, however. Some contacted by Nature, including SAGE Publishing in Thousand Oaks, California, and the NEJM Group in Waltham, Massachusetts, indicated that they had no plans to put COVID-19 content behind a paywall, but would keep it free-to-view permanently. Others were more circumspect. “We plan to carry this on as long as the public-health emergency is ongoing,” Elsevier said. A spokesperson for Springer Nature gave a similar statement. Wiley, meanwhile, wrote that “our COVID collection remains available through the end of 2023”.

Source: Nature


Nationwide research integrity survey launched in the UK

A survey of researchers throughout the UK to learn more about their attitudes to, and understanding of, issues around research integrity and the degree of institutional training provided commences today. The survey is being undertaken by Springer Nature with the aim of providing insight to research institutions and funders. It follows the inaugural survey that was carried out by Springer Nature in partnership with the Australian Academy of Sciences in Australia in 2021, with contributions from researchers at all levels of seniority, at a total of 34 universities and other research institutions.

Source: Research Information


Catch and kill: What it’s like to try to get a NEJM paper corrected

At the end of a year-long saga, we are left wondering if there would have been a better way to provide a faster, more enduring response to the flawed manuscript. We wonder what policies journals can agree on to move with more speed to take appropriate corrective action and alert the scientific community to legitimate concerns raised on their publications.

Source: Retraction Watch


Guidelines on Inclusive Language and Images in Scholarly Communication

The Guidelines on Inclusive Language and Images in Scholarly Communication are an expansion of C4DISC’s Toolkits for Equity project. They came together due to the growing need for more comprehensive and global guidelines to help authors, editors, and reviewers recognize the use of language and images that are inclusive and culturally sensitive. As the Guidelines will be updated annually, we call on the entire scholarly publishing community to help grow and improve them over time by suggesting new references, recommendations, and resources.
This new toolkit is meant to be a global tool, educational resource, and living archive to help all authors, editors, and reviewers recognize the use of language and images that are inclusive and culturally sensitive. The Guidelines can be used at various steps of the scholarly publishing process, such as manuscript writing, peer review, and presentation of published output.

Full guidelines are here: Guidelines on Inclusive Language and Images in Scholarly Communication

Source: C4DISC


It Isn’t Fake Science, Because It Isn’t Science at All. It’s Dupery

Is fraud a subcategory of poor quality? Or, is it something different? The group came to relative consensus that there is a significant and important difference between scientific work that is poor quality, but still scientific, and work that has been made to appear as if it is scientific for deceitful and deceptive purposes. If we are to deny this latter a place within the category of science, we need a term that does not present it as a kind of science, which “fake science” inadvertently does. We challenged the group for alternative terms and then shared our proposal – that we call this category of fictitious, ill-intentioned works, “dupery.”

Source: The Scholarly Kitchen


Preprints and open preprint review: a workshop on innovations in scholarly publishing

Rather than taking on additional functions, journals choose to unbundle their services. The dissemination function of journals is going to be performed by preprint servers (although the term ‘preprint’ may no longer be appropriate), while the evaluation function is going to be performed by peer review platforms. In the most extreme variant of this scenario, journals completely cease to exist and scholarly publishing takes place entirely on preprint servers and peer review platforms.

Source: Upstream


ALPSP Mentorship scheme

Our mentoring scheme connects and supports publishing industry colleagues with a wider network of expertise and skills to help foster development and progression. By facilitating these conversations, we’re taking an active role to help individuals learn and share experiences with each other. The scheme encourages collaborative learning between the mentor and mentee, whereby both will benefit from the dialogue, and share knowledge, insights and experiences.

Source: ALPSP


Editorial wisdom

Given the topic of today's newsletter it would be churlish not to give Mike Eisen some time in the “editorial wisdom” spotlight. I am sure this will be the highlight of his career.

Until next week,

James

Journalology

The Journalology newsletter helps editors and publishing professionals keep up to date with scholarly publishing, and guides them on how to build influential scholarly journals.
