Hello fellow journalologists,
This is Thanksgiving weekend across much of North America and so I hope the open rates for this newsletter will be much lower than usual. Family should always come first.
I'm grateful to those of you who read this email every week. Please encourage your colleagues to sign up if you think they would enjoy receiving my weekly musings on the scholarly publishing industry. They can sign up here:
https://journalology.ck.page
Brands and the Matthew effect
Earlier this month the Journal of Informetrics published a paper titled “Accidentality in journal citation patterns”. I can’t claim to have read it in detail (the mathematical equations are beyond my level of expertise), but the core finding is important for all editors and publishers to be aware of. In the authors’ words:
In particular, we are concerned with the question whether the most highly ranked journals receive citations based on the said rich get richer principle, which is known to lead to high inequalities in the citation distributions.
They conclude:
Our results show that citations to more “impactful” journals (i.e., journals that attract more citations per article) are less accidental than citations to less cited journals.
There might be an intuitive explanation for this phenomenon: when authors prepare lists of bibliographic references, they are of course more likely to include, in the first place, the most highly cited papers (which of course does not say anything about their true quality) in the leading journals. These references are chosen carefully and with intent, which is exactly what we call the preferential attachment. Still, the authors can also include some less cited but still relevant or even not relevant at all references. This in turn may be thought of as an accidental distribution of citations.
There’s another example of this preferential citation effect, one I’ve referred to regularly over the years, but it’s worth repeating here because the anecdote shows: (1) how imperfect impact factors are and (2) the importance of brand for journal editors and publishers.
The International Committee of Medical Journal Editors (ICMJE) — an invite-only organisation consisting of the editors of the leading general medical journals — occasionally publishes editorials on topics that concern all of the members. Those editorials are often published simultaneously in multiple journals. The text is the same. The authors are the same. The publication date is the same. The only difference between the articles is the journal they are published in. In short, this is a perfect example of a controlled experiment.
Many years ago, when I was an editor at The Lancet, I looked at the citations that a 2004 editorial on clinical trial registration (which was simultaneously published in 11 journals) received to see whether there was any correlation between citations and the journal's impact factor. Perhaps unsurprisingly, the correlation was strong.
Stuart Cantrill, one of the Editorial Directors of the Nature journals, did an excellent job of depicting the correlation graphically. Here’s what he found when he plotted the impact factor of the journal that published a 2007 ICMJE editorial on clinical trial registration against the number of citations that the article received.
The blog post that Stuart wrote back in 2016 can be found here. In Stuart’s words:
There’s a pretty good correlation between the number of citations that this identical paper received in each journal with the IFs of those journals. Of course, perhaps more people read the New England Journal of Medicine than the Medical Journal of Australia and so a wider audience will likely mean a wider potential-citation pool. Whatever the reasons (and it’s not all that difficult to come up with others), the slide shows how silly it is to assume that the IF of a journal has any bearing on how good any particular paper in that journal is. As I have said before, the only way to figure out if a paper is any good is to actually read the damn thing – the name (or IF) of the journal in which a paper is published should never act as a proxy for how awesome (or not) a paper is.
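If you want to run the same sanity check on your own portfolio, the analysis is easy to reproduce: pair each journal’s impact factor with the number of citations that the identical co-published editorial received, then compute a correlation. Here’s a minimal Python sketch; the numbers are illustrative placeholders, not the actual figures from the 2004 or 2007 editorials.

# Minimal sketch (illustrative placeholder data, not the real ICMJE figures):
# pair each journal's impact factor with the citations the identical
# co-published editorial received, then compute the correlation.
from scipy.stats import pearsonr, spearmanr

impact_factors = [50.0, 30.0, 10.0, 5.0, 2.0]  # placeholder journal impact factors
citations = [120, 75, 30, 12, 4]               # placeholder citation counts for the same editorial

r, p = pearsonr(impact_factors, citations)
rho, p_rank = spearmanr(impact_factors, citations)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rank:.3f})")

With only a handful of co-published journals the sample is tiny, which is one reason this remains an anecdote rather than a formal study, but it’s a quick way to see the brand effect in your own data.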
An important conclusion from this anecdotal observation is that impact factors are deeply flawed. But we all knew that anyway. For me, the more interesting message relates to the importance of brand for journal publishers.
Human beings constantly signal to each other based on the cars that we drive, the phones that we carry, and the journals that we cite. Brand identity is an intrinsic part of how we operate as social beings. Science is a human construct; how we communicate science follows the same social rules.
Good brands generate trust. Trusted journals receive submissions, are read (providing value to authors and subscribers), and are cited. One possible long-term future is that every research paper is published in a common central database, which would remove branding bias from the system. But for now, the Matthew effect is with us, and less well-known brands need to do whatever they can to increase trust and improve awareness of their journals and publishing programmes.
A few months ago I wrote about Brand Extension in The Brief, the monthly newsletter from Clarke & Esposito. We're finalising the November issue now and it's rather good, even if I do say so myself. Please subscribe now to avoid disappointment.
Briefly quoted
First, addressing the problem of speed. The demand for instant publication of new research was intense during the COVID-19 pandemic. So intense that peer review was routinely bypassed with an explosion of preprints. While understandable and probably necessary, I have misgivings. At The Lancet we see on a daily basis the value peer review brings to improving the science we publish. Omitting peer review has a cost. Second, solving the problem of volume. The tidal wave of research papers that COVID-19 triggered may have reflected the remarkable agility of science to pivot during a crisis, but the pandemic also revealed that science today has a curation challenge. We have not developed effective means to select, organise, and present new research in a way that optimises understanding and application.
The Lancet (Richard Horton)
Using paper-level data from 80 medical journals, we show that researchers at highly ranked institutions have increased their publication speed slightly more in 2020 than have scientists at less reputed institutions. However, the average difference is small, and should be interpreted with caution (more on this below). We observe a more substantial status-related difference for COVID-19 research (compared to research on other topics), with papers from top-ranked institutions seeing faster review times than papers from lower-ranked institutions. Moreover, survival plots indicate that scientists with prestigious affiliations have benefitted the most from fast-track peer reviewing and especially so in journals with single-blind review procedures. Finally, our analysis of gender-related changes in publishing speed indicates small and inconsistent effects although we observe a slight difference in the average review time of COVID-19 papers first authored by women and men.
PLOS ONE (Claudia Acciai et al)
Following a successful trial, Springer Nature is extending its partnership with Code Ocean to better integrate code deposition and peer review with the manuscript submission process. Authors from select Nature portfolio titles will now have the option to share their code and data using the Code Ocean platform when they submit to one of the participating journals, and receive expert support to do so.
Springer Nature press release
There are long-standing concerns that scientific institutions, which often rely on peer review to select the best projects, tend to select conservative ones and thereby discourage novel research. Using peer review data from 49 journals in the life and physical sciences, we examined whether less novel manuscripts were likelier to be accepted for publication. Measuring the novelty of manuscripts as atypical combinations of journals in their reference lists, we found no evidence of conservatism. Across journals, more novel manuscripts were more likely to be accepted, even when peer reviewers were similarly enthusiastic. The findings suggest that peer review is not inherently conservative, and help explain why researchers continue to do novel work.
PNAS (Misha Teplitskiy et al)
Crucially, by getting rid of that accept/reject decision, we are decoupling the scrutiny of the review process from the ability to get published. In doing so, we can stop turning recommendations from peer reviewers into requirements that authors must comply with in order to get accepted – a process that can also remove ideas and insights from scientific discourse. Our aim is to expose the nuanced and multidimensional nature of peer review, rather than reduce its function and output to a binary accept/reject decision.
Upstream (Martin Fenner interviews Fiona Hutton)
The point is we are trying to move away from a world where “getting into” the journal has any value whatsoever. The only value is in the review we do. Now, of course some papers are more likely to get attention than others, and therefore have more of a need to be reviewed than others. So, we are using our limited capacity to focus on those papers for now. In the future we would like to go beyond our existing offering and review more papers, but we can’t be expected to do that from the outset. The important thing is that there is no link between the venue and the assessment, and therefore the idea of “getting into” eLife needs to be seen as far less significant than what we actually say about the paper.
The Scholarly Kitchen (Alison Mudditt interviews Damian Pattinson)
PRC [Publish, Review, Curate] is effectively a hybrid approach combining community-evolved preprint review processes and traditional journal processes. The preprint and community review processes assist with the early discovery and interpretation of research, while journal titles are maintained in this ecosystem as they are important “…for many researchers as they pursue their careers.” Journals also act as a secondary mechanism for improving the communication of the results and building trust via careful curation and recommendations.
Commonplace (Adam Hyde, Damian Pattinson, and Paul Shannon)
We surveyed 997 active researchers about their attitudes and behaviors with respect to methods sharing. The most common approach reported by respondents was private sharing upon request, but a substantial minority (33%) had publicly shared detailed methods information independently of their research findings. The most widely used channels for public sharing were connected to peer-reviewed publications, while the most significant barriers to public sharing were found to be lack of time and lack of awareness about how or where to share. Insofar as respondents were moderately satisfied with their ability to accomplish various goals associated with methods sharing, we conclude that efforts to promote public sharing may wish to focus on enhancing and building awareness of existing solutions—even as future research should seek to understand the needs of methods users and the extent to which they align with prevailing practices of sharing.
MetaArXiv Preprints (Marcel LaFlamme, James Harney & Iain Hrynaszkiewicz)
Once they have deposited their preprint in a preprint server or in an open repository, authors can submit it to one of these thematic PCIs [Peer Community In]. One of the many recommenders (the equivalent of associate editors in traditional journals) can decide to organize its peer-reviewed evaluation. If the preprint is accepted, the corresponding PCI publishes a recommendation text and the peer-review history (reviews, decisions, and author responses). The PCI-recommended preprint has the same value as any journal article, but it can still be further published in a journal including Peer Community Journal if the authors wish to do so.
Commonplace (Marjolaine Hamelin, Denis Bourguet, and Thomas Guillemaud)
[Charlesworth] Gateway is a plug-in tool that allows publishers to integrate their submission systems with their WeChat channel, either via a mini-program or direct within their WeChat channel. Gateway has been developed to improve the Chinese author communication and publishing experience by sending automated unique short notifications in Chinese to the author throughout the submission process via this dominant social media app.
Commonplace (Jean Dawson and Andrew Smith)
Double-anonymised peer review has far-reaching implications for publishing models. It hinders the adoption of important open science practices, including fast dissemination through preprint servers, early sharing of protocols and data sets, and transparency about competing interests. As such, double-anonymised peer review impedes innovation in scholarly publishing. Crucial innovative developments such as preprinting, preprint peer review, the publish-review-curate model, and micro-publications are all incompatible with double-anonymised peer review.
Impact of Social Sciences (Serge Horbach, Tony Ross-Hellauer & Ludo Waltman)
In this blog, I explain my role in the formation of COPE in 1997 and how I was informed that I was no longer a member of COPE two days after I complained to COPE that the Lancet was behaving improperly by refusing to retract Macchiarini’s fraudulent publications. I also describe how, in January 2019, COPE agreed to investigate my complaint that the Lancet had behaved improperly by failing to retract the falsified publication. Almost four years later, COPE has failed to tell me the outcome of that investigation.
Dr Peter Wilmshurst's blog
Breaking down peer review into modular steps of quality control could improve published science while making review less burdensome. Every article could receive basic checks — for example, of whether all data are available, calculations hold up and analyses are reproducible. But peer review by domain specialists would be reserved for articles that raise interest in the community or are selected by journals. Experts might be the best people to assess a paper’s conclusions, but it is unrealistic for every article to get their attention. More efficient, widely applicable solutions for quality control would allow reviewers to use their time more effectively, on papers whose data is sound.
Nature (Olavo B. Amaral)
Thank you for reading until the end. Hopefully that means you found this newsletter helpful. Please do share this email with your colleagues if you think they would benefit from reading it too. The sign-up page is here:
https://journalology.ck.page
Until next week,
James