The traditional measure of scholarly impact, the journal “impact factor,” is shifting to individual articles, evaluated apart from the journals in which they are published. This has big implications for how we think about the impact of academic research, both within the academy and beyond it.
Prestigious R1 institutions often evaluate faculty for tenure and promotion based on how often they publish in “high impact” journals, as measured by something known as Impact Factor (IF). The IF was developed by Eugene Garfield at the Institute for Scientific Information and has been published annually since 1975 in the Journal Citation Reports. To find out what a particular journal’s IF is, you can consult this guide. The IF is currently administered by Thomson Reuters, and journals often tout their impact factor (citing Thomson Reuters) to attract submissions from academics eager to share in that putative prestige.
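For readers unfamiliar with the mechanics, the standard two-year IF is a simple ratio: citations received in a given year to a journal’s articles from the previous two years, divided by the number of citable items the journal published in those two years. Here is a minimal sketch of that calculation, with invented numbers for a hypothetical journal:

```python
def two_year_impact_factor(citations_this_year: int,
                           citable_items_prev_two_years: int) -> float:
    """Standard two-year Impact Factor.

    citations_this_year: citations received this year to articles the
        journal published in the previous two years.
    citable_items_prev_two_years: number of citable items (articles and
        reviews) the journal published in those same two years.
    """
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 420 citations in 2013 to its 2011-2012 articles,
# of which there were 150 citable items, gives an IF of 2.8.
print(two_year_impact_factor(420, 150))  # 2.8
```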
The Impact Factor has come under scrutiny for a number of reasons, including that the IF rankings of journals have a remarkably high correlation with departments’ rankings, suggesting that it’s not the journals that are prestigious but the academic departments that house them.
Journals can also boost their IF through various easy-to-manipulate means, and dozens of journals have come under attack for such practices. A number of academics have launched a critique of impact factors (pdf), making a persuasive case about their lack of validity.
A recent analysis by George Lozano, Vincent Larivière, and Yves Gingras identifies another, perhaps larger, problem with impact factors: since about 1990, the IF has been losing its very meaning.
Lozano points out that journal-level citation measures date back to the early 20th century, when they were developed to help American university libraries with their journal purchasing decisions. Of course, throughout the last century, printed, bound journals were the main way in which scholarly research was distributed. All that’s changing.
With digital means of publication and dissemination, academic research is released from those bound volumes into a many-to-many distribution system. Here is what Lozano and colleagues found in their research on the impact factor in this new environment:
Using a huge dataset of over 29 million papers and 800 million citations, we showed that from 1902 to 1990 the relationship between IF and paper citations had been getting stronger, but as predicted, since 1991 the opposite is true: the variance of papers’ citation rates around their respective journals’ IF has been steadily increasing. Currently, the strength of the relationship between IF and paper citation rate is down to the levels last seen around 1970.
Furthermore, we found that until 1990, of all papers, the proportion of top (i.e., most cited) papers published in the top (i.e., highest IF) journals had been increasing. So, the top journals were becoming the exclusive depositories of the most cited research. However, since 1991 the pattern has been the exact opposite. Among top papers, the proportion NOT published in top journals was decreasing, but now it is increasing. Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ IFs.
If the pattern continues, the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.
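To make the measurement behind this finding concrete, here is a minimal sketch (Python 3.10+) of the two quantities Lozano and colleagues track, computed over invented toy data rather than their 29-million-paper dataset: the variance of per-paper citation counts around each journal’s IF, and the journal-level correlation between IF and citation rates.

```python
import statistics

# Toy data, invented for illustration only: each journal's IF alongside
# the citation counts of its individual papers.
journals = {
    # journal: (IF, per-paper citation counts)
    "Journal A": (8.0, [2, 3, 5, 40, 1]),
    "Journal B": (2.5, [0, 4, 2, 3, 6]),
    "Journal C": (1.2, [0, 1, 35, 2, 0]),
}

# Proxy for the variance of papers' citation rates around their
# respective journals' IF: mean squared deviation of each paper's
# citation count from its journal's IF.
squared_devs = [
    (cites - impact_factor) ** 2
    for impact_factor, citations in journals.values()
    for cites in citations
]
print("variance around journal IF:", statistics.mean(squared_devs))

# Journal-level relationship between IF and mean paper citation rate
# (Pearson's r via statistics.correlation, available in Python 3.10+).
ifs = [impact_factor for impact_factor, _ in journals.values()]
means = [statistics.mean(citations) for _, citations in journals.values()]
print("IF vs. mean citations:", statistics.correlation(ifs, means))
```

In the pattern Lozano and colleagues describe, the first number has been rising steadily since 1991, while the strength of the second relationship has fallen back to levels last seen around 1970.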
Lozano and colleagues raise interesting issues for us to consider in the new landscape of scholarly communication. If the impact of our research is no longer tied to particular journals, often with insular, discipline-specific concerns and geared to a narrow audience of specialists, then a number of possibilities open up. As Lozano suggests, we may begin to see journals increase online availability, lower publication costs, and improve production and peer-review processes.
Whatever happens, the migration of “impact” from a small set of journals to individual articles is an epic shift in scholarly communication.