New Impactstory: fresh and free!

Impactstory tracks the online impact of your research. It looks through news outlets, social media mentions, and more to quantify the reach of your research output. Impactstory is one of the first startups founded around the idea that a new set of metrics is needed to properly evaluate scientific research and researchers. The digitization of research and scholarly communication is an amazing opportunity to harness very large quantities of quantifiable data, which can give completely new insights into the impact of research. Many now talk about altmetrics, a term originally coined on Twitter by Jason Priem, co-founder of Impactstory. These new metrics are still young and will need a few rounds of trial and error to find out which information, and which presentation of that information, is the most meaningful. But regardless, altmetrics are bound to become essential for the future of research evaluation.

The new profile page has a very fresh and clear look. Login is now only through ORCID, the unique identifier system for researchers. Within seconds, Impactstory recovers your published articles and generates an overview of your mentions, which gives you numbers on your online reach (a sketch of how such an ORCID lookup might work follows the list below). But Impactstory tries to put these numbers in perspective through what it calls achievements. These are badges focused on:

  • the buzz your research is creating (the volume of online discussion),
  • the engagement your research is getting, which looks at who is mentioning you and on what platform,
  • and your research’s openness, which looks at how easy it is for readers to access your work.
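The one-step import works because ORCID exposes a public API that lists the works on a researcher’s record. As a rough illustration of the kind of lookup a service like Impactstory presumably performs behind the scenes, here is a minimal sketch in Python, assuming the `requests` library is available; this is not Impactstory’s actual code, and the ORCID iD used is the example identifier from ORCID’s own documentation.

```python
# Minimal sketch: list the works on a public ORCID record (public API v3.0).
# Illustrative only; not Impactstory's actual implementation.
import requests

def fetch_work_titles(orcid_id: str) -> list[str]:
    """Return the titles of all works on a public ORCID record."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    resp = requests.get(url, headers={"Accept": "application/json"})
    resp.raise_for_status()
    titles = []
    for group in resp.json().get("group", []):
        # Each group bundles summaries of the same work from different
        # sources; the first summary is enough to recover the title.
        summary = group["work-summary"][0]
        titles.append(summary["title"]["title"]["value"])
    return titles

# Example ORCID iD taken from ORCID's documentation:
print(fetch_work_titles("0000-0002-1825-0097"))
```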

For many of these badges, Impactstory also tells you how well you are doing compared to other researchers. One particularly interesting badge is about software reuse. There, Impactstory has integrated a tool that it recently released called Depsy. Depsy specializes in evaluating the impact of research software, going beyond formal citations to understand how research software is being reused and to give proper credit to its contributors. This will deserve a post of its own in the future.

Hopefully, these metrics and others like them will become a standard part of your performance reviews, grant applications, and tenure packages in the very near future. You can already share your profile by pointing directly to your public Impactstory URL. But new features will come shortly to make it easier to share and showcase the story of your online impact.

Going beyond impact factor to evaluate researchers with Profeza

For many reasons, journal impact factor and number of publications are not good metrics to assess the quality of a researcher’s work. But despite their increasingly bad reputation, these metrics are nearly invariably used to make decisions about recruiting researchers, promoting them, and funding their projects. The obvious reason why nothing has changed over the years is that there is no other easy way to judge the quality of a researcher and his or her work.

We ask a lot of researchers. They must be great at scientific reasoning and have bright insights but also be able to properly communicate with their teams, with the scientific community, with the general public, and with industrial partners. They also need to be able to network and work within teams, to manage projects and people, to teach, and to write skillfully in a language that is often not their own. It is easy to see that we would need a multitude of alternative metrics to properly evaluate the various aspects of the day-to-day work of researchers. 

Profeza is a young startup that would like to provide decision makers with a better overview of the work of researchers. It has launched a social journal that allows researchers to showcase the diverse aspects of their work by sharing the rationale behind experimental design, the failed hypotheses, as well as raw data, repeat data, and supporting data that would otherwise often go unpublished. For Profeza, each scientific article is only the tip of the iceberg, standing on an immense amount of work.

Profeza’s interface is simple and clear. First, find the publications you authored through Profeza’s search engine. Profeza currently uses the PubMed database and is thus best suited for researchers in the biomedical fields. Then, in three steps, you are prompted to add information to the publication:

1. Select the publication you wish to add information to.

2. Describe your contribution to the paper and invite other authors who may not be on the author list but should get recognition for their involvement in the work.


3. Add information. You can add text and files containing the details about the rationale of the design, failed hypotheses, raw data, and repeat and supporting data. This is a great way to help others in your field by telling them about your failures or negative results.


The end result is a personalized page for each article containing the additional data and information. The page gives a better picture of the work that went into the publication and provides insight into the short-term impact of the article by displaying altmetric data.

I think Profeza is addressing a real problem head-on. Its success will of course depend on the willingness of researchers to spend time formatting and entering the information and datasets. But if institutions are willing to play along, then the incentives would be in place and a better-adapted evaluation system could emerge. These are still early days: Profeza was founded in 2014 and expects to roll out new functionalities in the near future.

Also check out this well-crafted video from Profeza which gives a nice background on journal impact factors and the problems associated with them.

Ending authorship wars with a standard

In research, one question most of the time does not come up until it is time to publish: who did what? This question is essential: it determines how authorship is distributed and ultimately how credit for the work is attributed. But very often, this information is not communicated, and although first authors are generally the doers and last authors the managers, there is a sea of unknowns between the two. This makes judging achievements based on authorship incredibly unreliable. PLOS journals and others already require precise descriptions of how authors contributed to the work. However, terminology can vary across journals, which prevents any real use of the information to assign credit.

In an attempt to solve the issue, the Wellcome Trust (Liz Allen) and Digital Science (Amy Brand) launched a new project called CRediT (Contributor Roles Taxonomy) last June. CRediT is now proposing a standard taxonomy composed of 14 defined roles such as “conceptualization”, “resources”, “supervision”, “writing – review & editing”… You can view them all here.
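To see why a shared taxonomy matters, consider how contributions could be recorded in a machine-readable way and checked against the standard role list. The sketch below uses the 14 CRediT roles; the data structure and author names are purely hypothetical and not any journal’s actual submission format.

```python
# The 14 roles of the CRediT taxonomy (plain hyphens stand in for en dashes).
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

def validate_contributions(contributions: dict[str, list[str]]) -> dict[str, list[str]]:
    """Reject any declared role that is not part of the standard taxonomy."""
    for author, roles in contributions.items():
        unknown = set(roles) - CREDIT_ROLES
        if unknown:
            raise ValueError(f"{author}: non-standard role(s) {unknown}")
    return contributions

# Hypothetical author list, for illustration only:
paper_credit = validate_contributions({
    "A. Researcher": ["Conceptualization", "Investigation", "Writing - original draft"],
    "B. Supervisor": ["Supervision", "Funding acquisition"],
})
```

With every journal drawing on the same controlled vocabulary, contribution statements become comparable across publications, which is exactly what any credit-assignment system would need.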

The CRediT project is now asking everyone to provide feedback on the taxonomy. If researchers show their interest in such a standard by helping to define it, there is a better chance that journals will pick it up, and eventually that actual credit and career advancement will be based on this system. So don’t hesitate to speak your mind and spread the word.

A quick update on independent peer-reviewing platforms

A growing number of independent peer-review platforms have emerged over the past few years, all based on similar principles. Typically, authors submit their manuscript before publication. Users can then comment on the work or write a more thorough review, giving suggestions for improvement. Many of these platforms also allow users to rate the reviews, which drives up the quality of both publications and reviews. These initiatives bypass the authority of publishers in determining what is or is not publishable. The hope is that a more open and independent reviewing system will lead to better science. Here are four of these platforms that I have added to the list of Online Tools for Researchers:

  • Peer Evaluation – Allows authors to upload data, articles and media and have them openly accessible and available for review and discussion by peers.
  • Peerage of Science – Has the particularity of bridging the independent peer-review process with direct access to publishing in partner journals if successful with review process.
  • Paper Critics – Connects to Mendeley accounts and allows everyone to review the work of others.
  • Libre – Another participative reviewing platform (see video below). This tool is not launched just yet, so be on the lookout for updates.

Publons set to revolutionize peer review in physics

Publons is another great alternative or complement to the traditional peer-review process. Like others, this service is an answer to the slow and rather opaque peer-review process, in which the fate of a manuscript is at the mercy of an anonymous pair of experts. The idea is that publishing research results should not be the limiting step. Papers should be published first, then reviewed and commented on by readers. This sort of system would give researchers direct, rapid, and interactive feedback on their work.

Andrew Preston and Daniel Johnston explain in their founding article that the publon is a facetious particle that is to academic research what the electron is to charge. Peter Koveski first described it as “[…] the elementary particle of scientific publication. It has long been known that publons are mutually repulsive. The chances of finding more than one publon in a paper are negligible. Even more intriguing is the apparent ability of the same publon to manifest itself at widely separated instants in time. One reason why this has not emerged until now seems to be that a publon can manifest itself with different words and terminology … defeating observations with even the most powerful database scanners.”

As you might have guessed, Publons is focused on physics manuscripts. It allows researchers to comment on and review papers published on the pre-print repository arXiv and in a list of top physics journals (Applied Physics Letters, Nature, PRL…).

Users can review, discuss, and rate papers, and can also create a profile page gathering their contributions as well as their own publications. Once more, Publons’ success will largely depend on the size of the community it can attract. So have a look and spread the word!

The tool was added to the list of Online Tools for Researchers.


Rubriq: pre-publishing peer review service now in phase 2 of beta testing

Following the success of its initial beta testing, Rubriq, the pre-publishing peer-review service (see blog post), is launching phase 2 of its beta testing.

This means opening up to over 200 biological and medical fields. This beta testing phase also comes with two new services: a journal recommendation report and a plagiarism check. Rubriq is now welcoming manuscript submissions as well as applications to become (paid) reviewers.

See original press release.


Comment on published manuscripts with PubPeer

The other day, I ran into PubPeer, which allows readers to comment on publications. Here’s a description taken directly from the “about” section:

PubPeer seeks to create an online community that uses the publication of scientific results as an opening for fruitful discussion. 

  • All comments are consolidated into a centralized and searchable online database. 
  •  Authors, as well as a small group of peers working on similar topics, are automatically notified when their article is commented on.
  • Pubpeer strives to maintain a high standard of commentary by inviting first and last authors of published articles to post comments.
  • The chief goal of this project is to provide the means for scientists to work together to improve research quality, as well as to create improved transparency that will enable the community to identify and bring attention to important scientific advancements. 

PubPeer is democratizing the peer-review process. This is driven by the idea that publishing research results should be open to all, since publishing costs are driven down by massive digitization, while open discussion and review should be retained to ensure good science and generate new ideas.

Shifting the peer-review process from before to after publication is an ongoing effort shared by others. The idea is usually to first build a community around a collection of papers, then get the discussion started. I love the concept, but feel like the system is taking its time to get adopted by the masses. Why is that? Could it be because the communities are too small? Because they are too diverse, maybe? Or perhaps because such comments are not taken into account when measuring research impact?

Peer reviews of your life science research products with BenchWise

Fellow postdocs and graduate students at Stanford have put together an interesting online tool. It started from the observation that many scientists are tired of knowing nothing about the quality of products before buying and testing them. This applies in particular to antibodies, used mainly in bio-related fields to detect specific parts of proteins, sugars, or lipids. Antibodies are known to not always “work” for certain applications, often for unknown reasons. Testing by trial and error, purchasing similar antibodies from various suppliers, is the only way to go. That’s where BenchWise kicks in.

BenchWise is a platform that helps you find peer reviews of life science research products, mainly antibodies for now. The goal is to help scientists find the right tools fast. For now, the site lists hundreds of antibodies, targeting different antigens and produced by various manufacturers. Users can give feedback on specific antibodies, for example explaining whether one works properly for a given application (western blots vs. immunofluorescence). Each item can also be discussed.

This addresses a clear gap in the kind of information available to scientists. It is extremely easy to get reviews of the latest iPhone, but so much more difficult to get a real idea of the quality and properties of “for science only” reagents. I hope this catches on and helps save many taxpayer dollars.

Rubriq tells you what your manuscript is worth.

Publishing a paper can be a long, tedious, and ultimately very frustrating process. Publishers can take weeks if not months to get back to you, only for you to eventually find out that your work does not fit the journal’s scope or is not quite as polished as it should be to be accepted. The work done before submission, such as optimizing the manuscript’s quality and selecting the right journal, is key to speeding up publication. There is clearly a need for tools and services like Edanz’s journal advisor to help in the process.

Rubriq, launched in beta last week, goes a step further. Rubriq offers a rigorous peer-review service for biological and medical sciences manuscripts before their submission to publishers. With the help of peer scientists, it judges a manuscript’s quality and checks for issues such as plagiarism, conflicts of interest, and ethical problems. The paper is then given a scorecard that can be used as a pre-publishing metric of quality.

The service is still in beta, with a progressive release of the different services over the course of 2013. As of today, Rubriq accepts manuscripts in the fields of immunology, cancer biology, and microbiology, and offers a scorecard completed by three reviewers within two weeks for $500. The full set of services is expected to go live in March 2013.

I like this initiative. It helps streamline the pre-submission review of manuscripts, a process that often already exists but is slow and not always very honest. It helps authors get published and should help science be communicated better and more efficiently. One aspect I appreciate in particular is that the scientists reviewing manuscripts through Rubriq are compensated for their time and effort. This is in contrast to the volunteer reviewing currently done by researchers, which directly benefits for-profit publishers. Rubriq will also be an energetic partner for the open access community as it thinks about how to improve the peer-review process.

Rubriq was founded by entrepreneurs Shashi Mudunuri and Keith Collier. Rubriq is a sister company of American Journal Experts, which offers related services such as manuscript editing and preparation.

Communicating your research online? ImpactStory tells you how well you’re doing.

So you’re now a confirmed researcher 2.0. When you’re not entering your latest thoughts in your research blog, you’re tweeting them. You take part in wikis, your ResearchGate profile is up to date, your papers are accessible through a self-archiving repository, you use Mendeley or CiteULike, and you have started publishing in open access journals.

Congratulations, you’re redefining the way research is communicated! Surely with so much effort to communicate your research to a wide audience, your work will have a higher impact. Right? But with only the good old citation count as the standard metric for impact, how do you know how impactful your research is in the research 2.0 era?

[Figure: Measuring the impact of research is not an easy task; several parameters should be taken into consideration, and alternative metrics would complement already existing tools. Image taken from http://altmetrics.org/manifesto/]

This question has been asked and discussed quite a bit over the last few years in articles, conferences, and workshops. It became obvious that a new way of measuring research impact should be developed, one that takes into account the new online ecosystem for researchers. These alternative metrics have been named “article-level metrics” or “altmetrics” (alternative metrics).

ImpactStory is a perfect illustration of the current effort to develop such new methods of evaluating research impact. The service provides a global view of your research impact, combining both traditional and non-traditional metrics. In addition to standard citation counts, ImpactStory evaluates how many users have bookmarked your articles in online reference managers, how many times your research has been tweeted about, and how often it was mentioned in posts on blogs or social networks.

This collection of metrics, put together in an intelligent fashion, has the potential to emerge as a real alternative or complement to journal citation counts and impact factors. This type of metric is also more responsive than traditional citation counts and could be a good early predictor of the citations an article will collect months or years later. It is also a necessary effort: since researchers are asked to share more and to be more open and pedagogical towards the general public, the incentives and rewards must follow. A true metric of broader impact must thus be established.
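The aggregation itself is simple once the per-source counts exist; the hard part is collecting them reliably. As a thought experiment, the sketch below merges counts from several sources into one overview. All source names and numbers are hypothetical, and a real service like ImpactStory pulls such data from providers’ APIs rather than hand-written dictionaries.

```python
# Hypothetical sketch: merge per-source metrics for one article into one overview.
from collections import defaultdict

def aggregate_metrics(per_source: dict[str, dict[str, int]]) -> dict[str, int]:
    """Sum each metric across every source that reports it."""
    totals: dict[str, int] = defaultdict(int)
    for source, metrics in per_source.items():
        for metric, count in metrics.items():
            totals[metric] += count
    return dict(totals)

# Invented numbers, for illustration only:
print(aggregate_metrics({
    "crossref": {"citations": 12},
    "mendeley": {"bookmarks": 87},
    "twitter":  {"tweets": 45},
    "blogs":    {"mentions": 3},
}))
# -> {'citations': 12, 'bookmarks': 87, 'tweets': 45, 'mentions': 3}
```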

Of course, we are not there yet. ImpactStory is an experiment that needs your feedback. Like any developing method, along with the technologies that come with it, it has attracted criticism. And indeed, altmetrics in their current form are somewhat flawed. For example, the current methods are far from absolute and quantitative, so any comparison between articles or researchers is premature.

ImpactStory was developed by two academics studying and promoting alternative metrics for academic research impact: Heather Piwowar, a postdoctoral fellow at Duke University and the University of British Columbia studying “research data availability and data reuse”, and Jason Priem, a PhD student in information science at the University of North Carolina at Chapel Hill. Jason is credited with putting the term altmetrics out there and is an author of the altmetrics manifesto.

Other initiatives out there share a similar mission with ImpactStory. You can check these out:

  • http://article-level-metrics.plos.org/
  • http://altmetric.com/help.php
  • http://sciencecard.org/