The peer-review process under review with SciRev

I am probably not going to surprise you much by saying that the peer-review process of research articles could be improved. As I see it, peer review should be an opportunity for researchers to improve their work. It should help detect flaws in the methodology used and advise on how to improve the manuscript itself. But sadly, for many, peer review feels like an unproductive waste of time, with sometimes endless waiting periods while the manuscript is in the hands of editors or slow reviewers. Even more frustrating is when the feedback consists of minimalistic reviews that add little to the work. But things are slowly changing. Many journals now advertise their quick turnarounds and their networks of competent and responsive reviewers. But with thousands of research journals in operation, it can be hard for researchers to know which journals are doing a good job of it and which are not.

SciRev is a platform that allows researchers to publish reviews of their experience with the peer-review process of specific journals. It displays facts such as the duration of the first review round, total handling time, number of reviewers, quality of the reviews, and an overall rating. A very useful “motivation” section also allows users to explain a bit about their experience with the journal, which usually gives interesting insights into the peer-review process. SciRev also allows you to compare several journals side by side.

Review of journals by users.

The reviews seemed well balanced between the positive and the more negative. SciRev also gathers very valuable data that could help grasp the problems in the publishing sector. Based on the user data, SciRev provides interesting statistics about the review process within various scientific fields (see below).

Rejection time and duration of the first round of review averaged for all scientific fields

This initiative, along with others that help researchers choose the right journals to publish in, will likely play a role in improving the publishing system. Indeed, the costs borne by universities for publication and journal subscriptions should come with a matching service. For the price, we should expect services such as open access, innovative ways of communicating research, and an efficient peer-review process. Such journal-rating platforms could become key to identifying the journals that are ahead of the game.

Journal Guide helps you find, compare and rate journals

So, after years at the bench and months fighting with your co-authors about the wording of the second phrase of the 5th paragraph, you are ready to publish! The question is, where should you publish the paper? With over 25,000 journals to choose from, the possibilities are plentiful and can be overwhelming. And for your paper to have impact, you must find your audience, and thus find the journal that your audience reads…

Journal Guide is a platform that helps authors navigate this profusion of scientific journals. It asks for your paper’s title and abstract, then extracts the important keywords and identifies a series of journals that seem to be a good fit.

Example of search result in Journal Guide

The results are displayed in a table (see image above) with the search score, journal name, publisher and impact factor. For each journal, Journal Guide also identifies published articles that are related to your title and abstract. If others in your field have chosen a particular journal, you might want to consider it as well. Once you have chosen a couple of journals that seem appropriate, Journal Guide offers a tool to compare their characteristics side by side (see screen capture below).

Comparison of three different journals

At any time during your search, clicking on the name of a journal will display another layer of information. There, one can learn about the journal’s aims and scope, costs, and open access policies. But even more interesting is the ability of users to provide anonymous feedback about their personal experience with the journal. Information such as the speed of publication can be particularly useful.

Another online service, provided by Edanz, also helps authors decide on a journal by analyzing the title and abstract. Journal Guide pushes the concept further by offering user accounts, side-by-side journal comparisons, and journal ratings. Although still new and in beta, Journal Guide has the potential to foster healthier competition between journals by making them easier to compare, and it can already help young researchers better promote their research by choosing the right journal.

Out of curiosity, I tested a few of my publications, entering the title and abstract and looking down the list to see how the journals I had selected were ranked by Journal Guide. Some of my articles came up with the journal I published in as the first choice. Others did not even show the journal they were published in. Perhaps this is a sign that the choice of journal can sometimes be quite irrational. Journal Guide could help us make more objective decisions.

Journal Guide is a division of Research Square, a for-profit organization that also created AJE and Rubriq.

Find, organize and discuss papers with Journal Lab

Journal Lab popped up on my radar this week. In a way, Journal Lab is similar to PubPeer since it allows users to post comments on research papers and start a discussion.

But Journal Lab adds a little twist by also enabling users to comment on specific figures. In the case of open-access journals such as PLoS, the figures are displayed along with the “reactions” and comments from readers. This might help start more targeted and clearer discussions.

Comment on articles and article figures.

Journal Lab goes beyond post-publication reviewing and discussion by offering a “paper collections” feature along with a convenient alert service. Journal Lab also introduces the concept of the virtual journal club. In virtual journal clubs such as “RNA and Epigenetics” or “Active on PLoS”, an exciting new paper is selected every week and opened for discussion.

Build an article collection and set up alerts.

Journal Lab was co-founded by UCSF graduate Robert Judson and social media entrepreneur David Jay in 2011. Find a recent interview in the UCSF student newspaper here.

Comment on published manuscripts with PubPeer

The other day, I ran into PubPeer, which allows readers to comment on publications. Here’s a description taken directly from the “about” section:

PubPeer seeks to create an online community that uses the publication of scientific results as an opening for fruitful discussion. 

  • All comments are consolidated into a centralized and searchable online database.
  • Authors, as well as a small group of peers working on similar topics, are automatically notified when their article is commented on.
  • PubPeer strives to maintain a high standard of commentary by inviting first and last authors of published articles to post comments.
  • The chief goal of this project is to provide the means for scientists to work together to improve research quality, as well as to create improved transparency that will enable the community to identify and bring attention to important scientific advancements.

PubPeer is democratizing the peer-review process. This is driven by the idea that publishing research results should be open to all, since publishing costs are driven down by massive digitization. However, open discussion and review should be retained to ensure good science and generate new ideas.

Shifting the peer-review process from before to after publication is an ongoing effort shared by others. The idea is usually to first build a community around a collection of papers, then get discussions started. I love the concept, but feel like the system is taking its time to be adopted by the masses. Why is that? Could it be because the communities are too small? Because they are too diverse, maybe? Or perhaps because such comments are not taken into account when measuring research impact?