Web of Science author impact beamplots

Clarivate has been rolling out enhancements to the Web of Science (WoS) platform over the last year (the new interface is available now!). Among them is Author Impact Beamplots (AIBs), a new tool for visualizing an author’s overall publication and citation impact. You can view a beamplot from an author’s record in WoS. It presents an author’s entire publication set (articles and reviews only), showing each paper’s citation percentile, the author’s average citation percentile per year, and the author’s career median citation percentile. The citation percentile is a normalized measure of a paper’s citation performance relative to other papers of the same subject category, document type, and year of publication. AIBs provide an alternative to single-point performance measures, such as the h-index, by contextualizing the volume and impact of an author’s work over time. Unlike the h-index, the citation percentile is normalized, so beamplots can be used to compare authors from different disciplines, and they do not disadvantage researchers who are early career or who take a break from research (although the most recent two years are suppressed to allow publications to accrue meaningful impact). Although AIBs are intended to encourage more nuanced interpretation of performance, they should be interpreted responsibly and in conjunction with other evaluative metrics.
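
To make the underlying arithmetic concrete, the Python sketch below computes the three quantities a beamplot displays: a per-paper citation percentile, a yearly average, and a career median. The cohort grouping and the percentile formula here are illustrative assumptions, not Clarivate’s exact method.

```python
from statistics import mean, median
from collections import defaultdict

def citation_percentile(paper, cohort_citations):
    """Rough percentile (0-100): share of papers in the same subject
    category / document type / publication year cohort that are cited
    no more than this paper. Illustrative only; the exact WoS
    percentile calculation may differ."""
    below_or_equal = sum(1 for c in cohort_citations if c <= paper["citations"])
    return 100 * below_or_equal / len(cohort_citations)

def beamplot_summary(papers, cohorts):
    """papers:  [{'year': int, 'citations': int, 'cohort': key}, ...]
    cohorts: {key: [citation counts of every paper in that cohort]}
    Returns per-paper percentiles, yearly averages, and the career median."""
    percentiles = [citation_percentile(p, cohorts[p["cohort"]]) for p in papers]
    by_year = defaultdict(list)
    for p, pct in zip(papers, percentiles):
        by_year[p["year"]].append(pct)
    yearly_average = {year: mean(vals) for year, vals in sorted(by_year.items())}
    career_median = median(percentiles)  # the single benchmark line on the beamplot
    return percentiles, yearly_average, career_median
```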

Tracking open access ebook use

The 2019 white paper “Exploring Open Access Ebook Usage,” produced by the Book Industry Study Group, described how difficult it is for individuals and institutions to access complete usage data – views and downloads – for open access ebooks. The authors recommended the creation of a data trust, a community-developed shared resource that would feed all sources of usage data into a single platform. The trust would maintain the platform, ensuring that industry stakeholders who collect usage data keep their data up to date and that individual and institutional consumers of the data adhere to ethical norms for the use of metrics and to the terms of any data licensing. Educopia responded to this recommendation with a project described in Developing a Pilot Data Trust for Open Access eBook Usage. The pilot data trust is now in its first stages of formation: “Through December 2021, this pilot project will develop and test infrastructure, policy and governance models to support a diverse, global data trust for usage data on open access (OA) monographs.” You are invited to contribute comments to the discussion forums and working groups on a variety of aspects of the project. More information about the project is available in the grant narrative and Data Management Plan.

Better research evaluation using SCOPE

Statements of principles and best practices for responsible metrics abound, and they are useful benchmarks for dos and don’ts when creating an impact profile. Better decision-making through responsible research evaluation, from the INORMS Research Evaluation Working Group, goes further, providing a practical guide to help researchers work through the evaluation process:

  • (S)tart with what you value
  • (C)ontext
  • (O)ptions
  • (P)robe
  • (E)valuate

In Introducing SCOPE – a process for evaluating responsibly, two members of the working group describe considerations within each of these steps, helping the researcher present a nuanced understanding of their impact by moving through these questions rather than working backwards from whatever metrics happen to be available. They remind us that “most of the stuff we want to measure, we can’t actually measure… It’s an art, not a science: a matter of judgement.” By addressing the points in the SCOPE process, the researcher provides their audience with the rationale for their evaluation. This SCOPE one-pager gives a concise summary of considerations.

Calculating price, cost & impact of research articles

Requiem for impact factors and high publication charges, written by a group of biomedical scientists, examines the use of various metrics for publication assessment. It is important for librarians to keep in mind that social scientists usually do not approach research assessment in the same way that biomedical science departments often do. The article focuses on quantitative rather than qualitative measures of quality. The authors explore whether the increasing use of preprints and post-publication review dramatically changes the quality of articles, and how one might assess preprints that have not been peer reviewed before being posted online. The authors point out that higher article processing charges (APCs) are correlated with higher Journal Impact Factors (JIFs), which has apparently led scholars to view a higher cost of publishing as a proxy for journal quality. Since neither the JIF nor the cost of APCs is an accurate gauge of quality, this is misguided. Higher APCs are also a barrier to authors working in less affluent countries, institutions, and disciplines. The authors examine the idea of downloads as a measure of article importance, but dismiss it because download counts cannot be compared fairly between OA and toll-access articles. They conclude that although there are many ways to calculate article or author impact, there is no simply calculated metric to substitute for JIFs and h-indices, which is why those measures persist.

Standardizing open access ebook use?

OA Book usage data – How close is COUNTER to the other kind? compares COUNTER Code of Practice Release 5 with Google Analytics (GA) as sources of usage metrics. In this study, the number of COUNTER 5 item requests counted was only 58% of the GA downloads recorded for the same body of ebooks. GA data shows usage of the books in the sample being dominated by U.S. users; the COUNTER 5 data, however, does not show the same pattern. The COUNTER 5 algorithm excludes downloads by users who download more than 40 items per day, which filters out activity that may reflect data mining of ebooks rather than traditional use. COUNTER 5 also filters out users who download the same publication more than 10 times in a day; this behavior might be related to educational activities, such as a presentation directing users to these online resources on a given day. Given the differences in the algorithms, the discrepancies between GA counts and COUNTER 5 counts are not consistent across titles: for some titles GA stats are higher, and for others COUNTER 5 stats are higher. Which one is more appropriate depends on what you want to count as usage.
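
As a rough illustration of how such threshold filters work, here is a minimal Python sketch that applies the two per-day limits described above to a list of download events. It is a simplified assumption-laden sketch, not the actual COUNTER Release 5 processing, which also handles double-click filtering, known robots, and other rules.

```python
from collections import Counter

# Per-day thresholds taken from the two filters described above (illustrative).
MAX_ITEMS_PER_USER_PER_DAY = 40
MAX_REPEATS_PER_TITLE_PER_DAY = 10

def filter_downloads(events):
    """events: [{'user': str, 'title': str, 'date': str}, ...]
    Returns the events that survive the two threshold filters."""
    per_user_day = Counter((e["user"], e["date"]) for e in events)
    per_user_title_day = Counter((e["user"], e["title"], e["date"]) for e in events)
    kept = []
    for e in events:
        if per_user_day[(e["user"], e["date"])] > MAX_ITEMS_PER_USER_PER_DAY:
            continue  # heavy bulk downloading, possibly data mining
        if per_user_title_day[(e["user"], e["title"], e["date"])] > MAX_REPEATS_PER_TITLE_PER_DAY:
            continue  # the same title requested repeatedly in one day
        kept.append(e)
    return kept
```

A filter like this illustrates why the two counts diverge unevenly: titles attracting bulk or repeated downloading lose more of their GA-visible activity once thresholds are applied, while lightly used titles are barely affected.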

Research impact indicators & metrics news and research

One good tool for tracking RIIM-related news and research in open scholarship is the Open Access Tracking Project’s OA.bibliometrics feed. Another, more general source is The Bibliomagician, curated by an international committee of library and information scientists. News is conveyed through the blog, with recent pieces on Clarivate’s position on research evaluation, the challenges of measuring open research data, and speculation about why Elsevier chose to endorse the Leiden Manifesto over DORA, among others. The resources section provides statements on responsible metrics, responsible-use guides, discussion lists, conferences and events, readings, and more. The group conducts an annual survey about responsible metrics and publishes a set of competencies for bibliometric work.

Introducing a new journal metric

Clarivate Analytics will introduce a new journal metric, the Journal Citation Indicator (JCI), in the 2021 Journal Citation Reports (available through the Libraries). The Journal Citation Indicator is a measure of a journal’s relative citation impact, normalized for disciplinary differences in publication and citation practices, with a score of 1.0 representing the global average. A JCI above 1.0 represents higher-than-average citation impact; a JCI below 1.0 represents lower-than-average citation impact. The JCI is intended to complement the Journal Impact Factor (JIF). It is based on a three-year citation window and will be applied to all journals in the Web of Science Core Collection, including those that do not qualify for a JIF (such as journals in the Arts and Humanities Citation Index). Although the JCI aims to make cross-disciplinary comparisons easier and fairer, care should still be taken when interpreting this metric because of the complexities of field normalization: comparisons between related disciplines will be more reliable than those between unrelated ones.
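
To illustrate what a field-normalized journal indicator looks like in principle, here is a minimal Python sketch in which each paper’s citations are divided by the average citations of its subject category / document type / publication year cohort, and a journal’s score is the mean of those ratios. The field names and grouping are illustrative assumptions, not Clarivate’s published JCI methodology.

```python
from statistics import mean
from collections import defaultdict

def normalized_journal_impact(papers):
    """papers: [{'journal': str, 'category': str, 'year': int,
                 'doc_type': str, 'citations': int}, ...]
    Returns a JCI-like score per journal: the average of each paper's
    citations divided by the mean citations of its cohort, so a score
    of 1.0 means the journal's papers perform at the cohort average.
    Illustrative sketch only."""
    cohorts = defaultdict(list)
    for p in papers:
        cohorts[(p["category"], p["year"], p["doc_type"])].append(p["citations"])
    cohort_mean = {key: mean(vals) for key, vals in cohorts.items()}

    per_journal = defaultdict(list)
    for p in papers:
        expected = cohort_mean[(p["category"], p["year"], p["doc_type"])]
        if expected > 0:  # skip cohorts with no citations to avoid dividing by zero
            per_journal[p["journal"]].append(p["citations"] / expected)
    return {journal: mean(scores) for journal, scores in per_journal.items()}
```

In a scheme like this, a journal whose papers are cited exactly at their cohort averages scores 1.0, which matches the interpretation of the JCI benchmark described above.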