New toolkit for existing and start-up open access journals

The Open Access Journals Toolkit is a new resource developed collaboratively by the Open Access Scholarly Publishing Association (OASPA) and the Directory of Open Access Journals (DOAJ) to guide existing and new open access journal publishers in a volatile global scholarly communication landscape. Articles (with references) cover issues of funding, setup, peer review and quality assurance, running a journal in a local or regional language, software and technical infrastructure, persistent identifiers, licensing, recruiting staff and building an editorial board, and much more. The Toolkit is organized into 6 searchable sections:

  • Getting started
  • Running a journal
  • Indexing
  • Staffing
  • Policies
  • Infrastructure

Checklists, a definition of open access with a grid of different funding models, a glossary, an FAQ and an “About” section round out the Toolkit. It is openly licensed under Creative Commons CC BY 4.0 and can be downloaded and printed as a whole or as individual articles. Versions in English and French are currently available, with other languages forthcoming. The Toolkit is intentionally designed for accessibility and for use by scholarly publishers working in different contexts and regions across the world. Because of its comprehensiveness, it serves those thinking about starting an OA journal, those working to establish their journals more firmly, those concerned with their journal’s financial sustainability and those facing challenges in specific areas. The Toolkit fills a real gap in practical, nuts-and-bolts guidance on open access journal publishing.

UNESCO launches Toolkit to support Recommendations on Open Science

UNESCO recently released an Open Science Toolkit to aid implementation of the UNESCO Recommendation on Open Science. The Toolkit is a set of guides, policy briefs, factsheets and indexes which will be updated to reflect new developments. They include, for example:

  • Building capacity for open science;
  • Funding open science;
  • Identifying predatory academic journals and conferences; and
  • Checklist for universities on implementing the Recommendation on Open Science.

Documents are published open access in English and French.

Citations: what can we make of them?

In academia, we are taught to cite our sources, and this practice extends to building knowledge through research and development. Source citation is intended to legitimize our methods, analysis and conclusions, as well as to recognize the works of others. But what are we citing, and why?

Nancy K. Herther writes about “The Increasingly Complex World of Citations: Changing Methods and Applications” in two parts. In Part I, she reviews the history and purposes of citations – from reward to valuation and evaluation – in funding requests, publication and the application of results. Citation analysis for these purposes began with articles and expanded to patents in the 1980s. Other research outputs, such as data and software code, have become cited sources as information on the Internet has exploded. Citations and citation analysis have also become subjects of research in their own right, and questions remain about how they function.

Three major bibliographic databases provide the indexing and discovery from which citation data are drawn, though each differs in coverage and content:

  • Dimensions – covers citations to publications, grants, patents, datasets and policy documents, creating a broader context of networked research information;
  • Scopus – mostly peer-reviewed journals from 11,678 publishers in life sciences, social sciences, physical sciences and health sciences;
  • Web of Science – basis for the original citation index, includes regional, disciplinary, data and patent citations.

Herther outlines some of the major, current issues with citations and areas for further research:

  • Citation & retraction – a mixed history of noting citations of works that have been retracted;
  • Citation mapping & future research agendas – mapping and visualization are promising new fields;
  • Big data research – citation & co-citation analysis to examine the relatedness of core papers;
  • Citation bias – questionable research? scientific misconduct? exclusion based on gender, geography, affiliation, etc.?;
  • Linking productivity to creation of the “citation elite” – a small group of highly productive contributors can skew times-cited counts, especially in periods of massive research output such as during Covid;
  • Examining the value of the h-index – changing authorship patterns point to the value of fractional allocation measures (a worked sketch follows this list).
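
For readers who want the arithmetic behind that last point, here is a minimal Python sketch (toy numbers, not drawn from any study Herther cites) comparing a conventional h-index with a simple fractional-allocation variant that divides each paper’s citations by its number of authors:

    # Toy comparison of the h-index with a fractional-allocation variant.
    # Each tuple is (citations, number_of_authors) for one paper.
    papers = [(50, 2), (40, 20), (12, 3), (9, 1), (3, 5)]

    def h_index(citation_counts):
        """Largest h such that h papers each have at least h citations."""
        ranked = sorted(citation_counts, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    conventional = h_index([c for c, _ in papers])    # 4
    # Fractional allocation credits each author 1/n of a paper's citations,
    # one proposed adjustment for changing authorship patterns.
    fractional = h_index([c / n for c, n in papers])  # 3

    print(conventional, fractional)

The 20-author paper counts fully toward the conventional h-index but contributes little under fractional allocation, which is exactly the kind of difference such measures are meant to surface.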

Part I concludes with an interview with philosopher Hannah Rubin of the University of Notre Dame about structural sources of citation gaps, feedback loops and persistent inequities.

In Part II, Herther interviews Henry Small – physicist, science historian and a founder of scientometrics – about changing trends in scientific citation. Small applauds the movement away from using citation and usage numbers as proxies for quality and hails the development of tools that build “citation context” to tell us why citations are used in particular circumstances.

One of these tools is scite, and Herther also interviews its founder, Josh Nicholson. Scite partners with 24 publishers to index more than 3 million articles and 1.5 billion citations. They discuss the role of burgeoning preprints and full-text analysis to determine the purpose of a citation – from “We could not reproduce this work” to “We find a smaller effect.” These interviews give us a sense of where citations have come from and how better analysis and research can serve more equitable and informed research communities going forward.
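
scite’s actual classifiers are trained models over full-text citation snippets, but a toy Python sketch can illustrate the categories it reports (supporting, contrasting, or merely mentioning). The cue phrases below are invented for illustration:

    # Toy keyword-based classifier for citation statements, illustrating
    # the "citation context" idea discussed above; scite itself uses
    # trained models over full-text snippets, not keyword rules.
    CONTRASTING = ("could not reproduce", "failed to replicate",
                   "smaller effect", "in contrast", "contradicts")
    SUPPORTING = ("consistent with", "confirms", "replicates",
                  "in agreement with")

    def classify(statement: str) -> str:
        s = statement.lower()
        if any(cue in s for cue in CONTRASTING):
            return "contrasting"
        if any(cue in s for cue in SUPPORTING):
            return "supporting"
        return "mentioning"

    for s in ("We could not reproduce this work.",
              "We find a smaller effect.",
              "Our results are consistent with prior reports.",
              "Related systems have been studied before."):
        print(f"{classify(s):11} <- {s}")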

Creating open scholarship by teaching and learning with open scholarship, in community

The authors of “Toward a culture of open scholarship: the role of pedagogical communities” note that as the open scholarship movement gains momentum, its goals of social justice, research quality and inclusive research culture are further advanced by training scholars in the practices of study preregistration, data sharing, replication studies and open access publishing. They argue that “open scholarship is incomplete without open educational practices.”

Integrating these practices throughout higher education curricula is better achieved with pedagogical communities. They name several – Open Scholarship Knowledge Base (OSKB), Principles and Practices of Open Research (PaPOR TraIL), Reproducibility for Everyone (R4E) – among others, but they elaborate on the Framework for Open and Reproducible Research Training (FORRT). FORRT includes 12 initiatives to date, including a glossary of open scholarship terms, summaries of open and reproducible science literature, and lesson plans. These pedagogical communities foster participation and collaboration, driving a grassroots movement for open scholarship to generate knowledge as a public good for all of humanity.

Claim and share your research contributor roles with Rescognito

Rescognito is a new, free and open system with a mission to expand researcher recognition. An individual researcher’s identity is verified by an ORCID iD, and a scholarly work (article/preprint, dataset, software, protocol, etc.) must have a digital object identifier (DOI). Based on verification of these two persistent identifiers, a researcher/scholar/contributor may claim up to 5 different roles they fulfilled in the scholarship, drawn from the CRediT Taxonomy of 14 potential types (conceptualization, data curation, formal analysis, funding acquisition, software, validation, etc.). A researcher may also recognize work performed by others. With the researcher’s permission, these contributor roles can be transparently shared to their ORCID profile. Publishers may collaborate with Rescognito to implement the CRediT Taxonomy and data checklists within their publication workflows. As a standards-based (CRediT, DOI, ORCID, ROR), transparent, researcher-driven tool that plugs into the open scholarship ecosystem, Rescognito makes visible the many facets of research work and who performs them.
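
To make the constraints concrete, here is a small Python sketch of what such a role claim might look like as a data structure. The field names and validation are my own illustration, not Rescognito’s actual schema; the 14 role names are the standard CRediT taxonomy:

    # Hypothetical shape of a contributor-role claim; field names are
    # illustrative only, not Rescognito's real schema.
    from dataclasses import dataclass, field

    CREDIT_ROLES = frozenset({  # the 14 roles of the CRediT taxonomy
        "Conceptualization", "Data curation", "Formal analysis",
        "Funding acquisition", "Investigation", "Methodology",
        "Project administration", "Resources", "Software", "Supervision",
        "Validation", "Visualization", "Writing - original draft",
        "Writing - review & editing",
    })

    @dataclass
    class RoleClaim:
        orcid: str                    # verified researcher identity
        doi: str                      # persistent identifier of the work
        roles: list = field(default_factory=list)

        def __post_init__(self):
            unknown = set(self.roles) - CREDIT_ROLES
            if unknown:
                raise ValueError(f"not in the CRediT taxonomy: {unknown}")
            if len(self.roles) > 5:   # the per-claim cap described above
                raise ValueError("a claim may name at most 5 roles")

    claim = RoleClaim(orcid="0000-0002-1825-0097",  # ORCID's example iD
                      doi="10.5555/12345678",       # a placeholder DOI
                      roles=["Conceptualization", "Software", "Validation"])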

Keeping the researcher at the center of data control and quality: a review of the ORCID Trust Program

ORCID established its Trust Program in 2016, and this blog post celebrates its fifth anniversary. The ORCID organization, of which UMass Amherst is a member, has a mission “of enabling transparent and trustworthy connections between researchers, their contributions, and their affiliations” and “a vision of a world where all who participate in research, scholarship, and innovation are uniquely identified and connected to their contributions across disciplines, borders, and time.” These aspirations are made real on a basis of trust built on individual researcher control, accountability and strict tracking of organizational provenance. Researchers/scholars/contributors control who can write to, read from and edit the data associated with their ORCID profile, and for how long, with verification of the source organizations.

With ORCID’s growth have come attempts to misuse the connections and tools it provides, including automated search engine optimization and spam generators that could potentially undermine trust. ORCID has put brakes in place that halt these schemes. Another, less common problem is academic fraud, in which people misrepresent their works and affiliations. This violates the terms of use, and such records can be challenged through ORCID’s dispute procedures. ORCID is not an arbiter of what data is associated with a contributor profile, but it does provide authenticated workflows with registered data providers. A researcher can determine for themselves the authenticity of the data and the provenance of the data provider before deciding whether or not to grant permission for data exchange.
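
As a small illustration of this consent-based model, the Python sketch below reads only the public portion of a record through ORCID’s public API; reading limited-visibility data or writing to a record requires an access token the researcher has explicitly granted. It uses the example iD from ORCID’s own documentation:

    # Read the public portion of an ORCID record (public API, no token).
    # Anything the researcher has not made public is simply not returned.
    import requests

    orcid_id = "0000-0002-1825-0097"  # example iD from ORCID's documentation
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/record",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    name = resp.json()["person"]["name"]
    print(name["given-names"]["value"], name["family-name"]["value"])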

ORCID is a non-profit, member-governed organization that provides an open platform for disambiguated, unique and persistent name identifiers and profile information for researchers, scholars and contributors. An ORCID iD is a free service for individuals. More information about ORCID is available from this guide.

Policy Commons: bringing the works of research centers and think tanks to light

At last week’s NISO Open Research virtual conference, Toby Green of CoherentDigital.net described how the research of thousands of governmental agencies, non-governmental organizations, research centers and think tanks is published in works without metadata, unique identifiers or other standards that would enable it to be found in library catalogs or the major search tools that many researchers use. Consequently, a vast body of research produced by experts, supported by data and reviewed, is lost to communities that could benefit from it. It is “on the dark side of the moon.”

Policy Commons is an initiative to work with these organizations to ingest, describe and index their research so that the public can find and use it. The database includes entries with links to the text of over 2.5 million papers from thousands of organizations. Individuals can register to search for and access content for free, up to 25 searches per month; membership gives unlimited searching. Fees are collected from research groups and institutions for higher-capacity harvesting, uploading and organizing of their works. Any registered user can upload their own content.

The vision and goals of Policy Commons are worthy, and its coverage is broad: it includes 317 works published in Mali and 1,801 from the Seychelles, for example. A user has multiple options for finding content – browsing by topic, identifying organizations, viewing publications or tables – and then applying filters for language, publication type, publisher type, year published, publisher country, and more. You can also conduct a simple or advanced search. The advanced search starts with a title, summary or full-text search, after which you can limit by any facet or combination of facets. I found 6 reports with “voting” in the title published in Dutch between 2010 and 2021. To fully integrate with other catalogs and search tools, and to make a broader contribution to knowledge management, Policy Commons could apply standard identifiers, such as those for digital objects, organizations and authors. With plenty of room for further development, it already serves a need for high-quality information retrieval.

Use Google Scholar citations to update your ORCID profile

There are many and ever-growing reasons to create, populate and maintain an ORCID unique researcher and contributor identifier and profile, but a barrier can be the time it takes to populate your profile with your complete works, especially those created or published before the publisher was integrated with ORCID. ORCID is integrated with a number of databases, such as the MLA International Bibliography, Scopus and The Lens, from which you can import citations or patents. Still, you may not find as many of your works in these databases as are cited by Google Scholar. Though Google Scholar is not yet integrated with ORCID, you can export citations from your Google Scholar profile to BibTeX and then import that file to your ORCID profile. This blog post provides step-by-step instructions with screenshots for using this method to build out your ORCID profile.
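
If you want to sanity-check the exported file before importing it, a few lines of Python will do. This rough sketch (assuming the export is saved as citations.bib; exporters vary in how they format fields) counts the entries and lists their titles:

    # Rough sanity check of a BibTeX export (e.g. from Google Scholar)
    # before importing it into ORCID.
    import re

    with open("citations.bib", encoding="utf-8") as f:
        bib = f.read()

    entry_types = re.findall(r"@(\w+)\s*\{", bib)      # e.g. @article{...
    titles = re.findall(r"title\s*=\s*\{(.+)\}", bib)  # title={...} lines

    print(f"{len(entry_types)} entries: {sorted(set(entry_types))}")
    for title in titles:
        print(" -", title.strip("{}"))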

The Lens: a new public good platform covering global patents and scholarly knowledge

With the dangers of proprietary systems and surveillance technologies as a backdrop, the SPARC Impact Story on The Lens caught my attention. The Lens is an online, open, public good platform developed over 20 years by Cambia, a non-profit Australian organization “committed to its mission of making knowledge open, meaningful, useful and accessible” to individuals and institutions alike. “The Lens serves global patent and scholarly knowledge as a public resource to make science- and technology-enabled problem solving more effective, efficient and inclusive.” It ingests and normalizes metadata from 10 partner organizations, including Crossref, ORCID, PubMed, USPTO and WIPO, to form a database of over 225 million scholarly works, 127 million global patent records and 370 million patent sequences. The platform has a strong privacy policy summed up by “Your use of the Lens is your business, not ours.” In addition to its global coverage, it sports a nifty search and filter interface. It is designed to spur collaboration among research institutions, policy makers, researchers, funders, prospective students, patent offices and publishers, and you can examine use cases for all of them.
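
The Lens also exposes its data programmatically. Here is a minimal sketch of a scholarly search, assuming you have requested an API access token from Lens (the Elasticsearch-style request shape follows the documentation at docs.api.lens.org):

    # Sketch of a query against The Lens scholarly search API.
    # Assumes an access token requested from Lens; request shape follows
    # the Elasticsearch-style API documented at docs.api.lens.org.
    import requests

    TOKEN = "YOUR-LENS-API-TOKEN"  # placeholder
    resp = requests.post(
        "https://api.lens.org/scholarly/search",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"query": {"match": {"title": "open access"}}, "size": 5},
        timeout=60,
    )
    resp.raise_for_status()
    for work in resp.json().get("data", []):
        print(work.get("title"))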

Web of Science author impact beamplots

Clarivate has been rolling out enhancements to the Web of Science (WoS) platform over the last year (a new interface is available now!). Among them is a new tool for visualizing the overall publication and citation impact of an author: Author Impact Beamplots (AIBs). You can view a Beamplot from an author’s record in WoS. It provides a view of an author’s entire publication set (articles and reviews only) that shows each paper’s citation percentile, the author’s average citation percentile per year, and the author’s career citation percentile median. The citation percentile ranks a paper’s citations against papers of the same subject category, document type and year of publication. AIBs provide an alternative to single-point performance measures, such as the h-index, by contextualizing the volume and impact of an author’s work over time. Unlike the h-index, the citation percentile metric is normalized, allowing Beamplots to be used for comparisons between authors in different disciplines, and Beamplots do not disadvantage researchers who are early career or who take a break from research (although the most recent 2 years are suppressed to allow publications to accrue meaningful impact). Although AIBs are intended to encourage more nuanced interpretation of performance, they should be interpreted responsibly and in conjunction with other evaluative metrics.
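
To make “citation percentile” concrete, here is a toy Python sketch of the underlying idea: a paper’s citation count is ranked against a baseline of papers sharing its subject category, document type and publication year (the baseline numbers below are invented, and Clarivate’s exact normalization may differ in detail):

    # Toy illustration of a normalized citation percentile: the same raw
    # citation count lands at very different percentiles in fields with
    # different citation baselines. Baseline numbers here are invented.
    def citation_percentile(citations: int, reference_set: list[int]) -> float:
        """Share of baseline papers cited no more than this paper."""
        at_or_below = sum(1 for c in reference_set if c <= citations)
        return 100.0 * at_or_below / len(reference_set)

    math_2020 = [0, 1, 1, 2, 3, 5, 8, 12]           # low-citation field
    oncology_2020 = [4, 9, 15, 22, 30, 41, 55, 80]  # high-citation field

    print(citation_percentile(10, math_2020))       # 87.5
    print(citation_percentile(10, oncology_2020))   # 25.0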