#HEFCEMetrics: More on Metrics for the Arts and Humanities

Today I’ll participate via Skype in the HEFCE Metrics and the assessment of research quality and impact in the Arts and Humanities workshop, commissioned by the independent review panel. I share some notes below. For previous thoughts on metrics for research assessment, see my 23 June 2014 post.

What metrics?

Traditionally, two main forms of metrics have been used to measure the “impact” of academic outputs: usage statistics and citations.

“Usage statistics” usually refers to two main things: downloads and page views (though they often encompass much more than that). These statistics are typically sourced from individual platforms through their web logs and Google Analytics. Beyond downloads and page views, platform administrators have also collected indicators such as the operating systems and devices used to access content, and the landing pages for the most popular content. This data is often presented in custom-made reports that collate the different figures, and the methods of collection and collation vary from platform to platform and user to user. These methods are rarely transparent and often not reproducible.
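As a rough illustration of how such figures are typically derived, the sketch below counts page views and downloads from a web server access log. This is a minimal sketch under stated assumptions: the log file name, the log format and the idea that PDF requests stand in for downloads are all hypothetical, and real platforms combine several sources and apply their own filtering.

```python
# Minimal sketch: counting page views and downloads from a web server
# access log. File name, log format and the ".pdf = download" rule are
# hypothetical assumptions for illustration only.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "GET (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})'
)

views = Counter()
downloads = Counter()

with open("access.log") as log:          # hypothetical log file
    for line in log:
        match = LOG_LINE.match(line)
        if not match or match.group("status") != "200":
            continue                     # skip malformed lines and errors
        path = match.group("path")
        if path.endswith(".pdf"):        # crude proxy for a download
            downloads[path] += 1
        else:                            # everything else counted as a page view
            views[path] += 1

print("Most viewed pages:", views.most_common(5))
print("Most downloaded files:", downloads.most_common(5))
```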

Citations, on the other hand, can be obtained from proprietary databases like Scopus and Web of Knowledge, or from platforms like PubMed (in the sciences), Google Scholar, and CrossRef. These platforms have traditionally favoured content from the sciences rather than the arts and humanities. Part of the reason is that citations are more easily tracked when the content is published with a Digital Object Identifier (DOI), a term that remains largely obscure and esoteric to many in the arts and humanities. Citations also take longer to accrue, and therefore take longer to collect. Again, the methods for their collection are not always transparent, and the source data is more often than not closed rather than open. Citations privilege more ‘authoritative’ content from publishers that have the necessary infrastructure, and content that has been available for a longer time.
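To give a sense of what an open citation lookup can involve, here is a minimal sketch that asks the public CrossRef REST API how many citations it has recorded for a DOI. The DOI is a placeholder, and the count reflects only citations known to CrossRef, so it will usually differ from Scopus or Web of Knowledge figures.

```python
# Minimal sketch: looking up a DOI's citation count via the public
# CrossRef REST API. The DOI below is a placeholder; the count only
# covers citations indexed by CrossRef.
import json
import urllib.request

doi = "10.1000/example.doi"  # placeholder DOI
url = f"https://api.crossref.org/works/{doi}"

with urllib.request.urlopen(url) as response:
    record = json.load(response)

citations = record["message"].get("is-referenced-by-count", 0)
print(f"{doi}: {citations} citations recorded by CrossRef")
```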

“Altmetrics”?

Altmetrics is “the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship” (Priem et al 2010). Altmetrics services normally employ APIs and algorithms to track and create metrics from activity on the web (mostly social media platforms such as Twitter and Facebook, but also online reference managers like Mendeley and tracked news sources) around the ‘mentioning’ (i.e. linking) of scholarly content. Scholarly content is recognised by the presence of an identifier such as a DOI, PubMed ID, arXiv ID, or Handle. This means that outputs without these identifiers cannot be tracked or measured. Altmetrics are so far obtained through third-party commercial services such as Altmetric.com, PlumX and ImpactStory.
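As an example of how such a service is queried, the sketch below asks Altmetric.com’s free DOI lookup whether an output has any tracked mentions. The endpoint and field names are indicative of the public v1 API and may change, so treat them as assumptions to verify against the provider’s documentation; the DOI is again a placeholder.

```python
# Minimal sketch: querying Altmetric.com's public DOI lookup. Endpoint and
# field names are assumptions to check against current documentation.
import json
import urllib.request
from urllib.error import HTTPError

doi = "10.1000/example.doi"  # placeholder DOI
url = f"https://api.altmetric.com/v1/doi/{doi}"

try:
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    print("Altmetric score:", data.get("score"))
    print("Tweets:", data.get("cited_by_tweeters_count", 0))
except HTTPError as err:
    if err.code == 404:
        # The service returns 404 when it has no mentions for (or does not
        # index) the identifier.
        print("No mentions tracked for this DOI.")
    else:
        raise
```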

Unlike citations, altmetrics (also known as “alternative metrics”, or “article-level metrics” when usage statistics are included too) can be obtained almost immediately, and since in some cases online activity can be hectic the numbers can grow quite quickly. Altmetrics providers do not claim to measure “research quality” but “attention”; they agree that the metrics alone are not sufficient indicators and that context is therefore always required. Services like Altmetric, ImpactStory and PlumX have interfaces that collect the tracked activity in one single platform (which can also be linked to with widgets embeddable on other web pages). This means that these platforms also function as search and discovery tools where users can explore the “conversations” happening around an output online.

The rise of altmetrics and a focus on their role as a form or even branch of bibliometrics, informetrics, webometrics or scientometrics (Cronin, 2014) has taken place in the historical and techno-sociocultural context of larger transformations in scholarly communications. The San Francisco Declaration on Research Assessment (DORA, 2012) [PDF], for example, was drafted with the participation of altmetrics tool developers, researchers and open access publishers, and makes the general recommendation not to use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual researcher’s contributions, or in hiring, promotion or funding decisions.

The technical and cultural premise of altmetrics services is that if academics are using social media (web services such as Twitter and Facebook, made possible by APIs) to link to, or “mention”, online academic outputs, then a service “tapping” into those APIs would allow users such as authors, publishers, libraries, researchers and the general public to conduct searches across information sources from a single platform (in the form of a graphical user interface) and obtain results from all of them. Through an algorithm, it is possible to quantify, summarise and visualise the results of those searches.
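The sketch below illustrates that aggregation premise in miniature: per-source mention counts for a single output (hard-coded here, where a real service would call each platform’s API) are merged into one summary that can be quantified and crudely visualised. All source names and numbers are illustrative.

```python
# Minimal sketch of the aggregation premise: merge per-source mention
# counts for one output into a single summary. Sources and numbers are
# illustrative; a real service would populate them from platform APIs.
mentions = {
    "twitter": 14,
    "facebook": 3,
    "mendeley_readers": 27,
    "news": 1,
}

total = sum(mentions.values())
print(f"Total tracked activity: {total}")
for source, count in sorted(mentions.items(), key=lambda kv: kv[1], reverse=True):
    bar = "#" * count                     # crude text "visualisation"
    print(f"{source:>18}: {count:3d} {bar}")
```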

The prerequisites for altmetrics compose a complex set of cultural and technological factors. Three infrastructural factors are essential:

  1. Unlike traditional usage statistics, altmetrics can only be obtained if the scholarly outputs have been published online with Digital Object Identifiers or other permanent identifiers (a minimal check for this is sketched after the list).
  2. The online platforms that might link to these outputs need to be known, predicted and located by the service providing the metrics.
  3. Communities of users must exist using the social media platforms tracked by altmetrics services linking to these outputs.
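Here is that minimal check, assuming a simple record structure: before an output can be tracked at all, it needs at least one recognised permanent identifier. The record fields and accepted identifier names are illustrative assumptions, not any provider’s actual schema.

```python
# Minimal sketch of prerequisite 1: an output without a recognised
# permanent identifier cannot be tracked. Field names are illustrative.
ACCEPTED_IDS = ("doi", "pmid", "arxiv_id", "handle")

def trackable(record: dict) -> bool:
    """Return True if the output carries at least one identifier an
    altmetrics service could use to recognise mentions of it."""
    return any(record.get(field) for field in ACCEPTED_IDS)

output = {"title": "An article without a DOI", "handle": None}
print(trackable(output))  # False: this output cannot be tracked or measured
```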

The scholarly, institutional, technological, economic and social variables are multiple and both platform- and culture-dependent, and they will vary from discipline to discipline and country to country.

Standards and Best Practices

Michelle Dalmau, Dave Scherer, and Stacy Konkiel led the Digital Library Federation Forum 2013 working session titled Determining Assessment Strategies for Digital Libraries and Institutional Repositories Using Usage Statistics and Altmetrics, and produced a series of recommendations for “developing best practices for assessment of digital content published by libraries”. Dalmau et al. emphasised the importance of making data and methods of collection transparent, as well as of including essential context with the metrics.

As open access mandates and the REF make “impact” case studies more of a priority for researchers, publishers and institutions, it is important to insist that any metrics and their analysis, provided by either authors, publishers, libraries or funding bodies, should be openly available “for reuse under as permissive a license as possible” (Dalmau, Scherer and Konkiel).

Arts and Humanities

If altmetrics are to be used in some way for research assessment, the stakeholders involved in arts and humanities scholarly publishing need to understand the technical and cultural prerequisites for altmetrics to work. There are a series of important limitations that justify scepticism towards altmetrics as an objective “impact” assessment method. A bias towards Anglo-American and European sources, as well as towards STEM disciplines, casts a shadow on the growth of altmetrics for non-STEM disciplines (Chimes, 2014). Many academic journals, particularly in the arts and humanities, have yet to establish a significant, sustainable online presence, and many still lack the DOIs that would enable their automated and transparent tracking.

At their best, altmetrics tools are meant to encourage scholarly activity around published papers online. It can seem, indeed, like a chicken-and-egg situation: without healthy, collegial, reciprocal cultures of scholarly interaction on the web, mentions of scholarly content will not be significant. Simultaneously, if publications do not provide identifiers like DOIs, and if authors, publishers and/or institutions do not perceive any value in sharing their content, altmetrics will again be less significant. Altmetrics can work as search and discovery tools for the scholarly communities that form around academic outputs on the web, but they cannot and should not be thought of as unquestionable proxies of either “impact” or “quality”. The value of these metrics lies in providing us with indicators of activity; any value obtained from them can only be the result of asking the right questions, providing context and doing the legwork of assessing outputs in their own right and in their own context.

Libraries could do more to create awareness of the potential of altmetrics within the arts and humanities. The role of the library, through its Institutional Repository (IR), in encouraging online mentioning and the development of impact case studies should be readdressed, particularly if ‘Green’ open access is going to be the mandated form of access. Some open access repositories are already using altmetrics (City University London’s open access repository has had Altmetric widgets for its items since January 2013), but the institution-wide capabilities of some of the altmetrics services are fairly recent (Altmetric for Institutions was officially launched in June 2014). There is much work to be done, but the opportunity for cultural change that altmetrics can contribute to seems too good to waste.

 
