Daniel E. Acuna is a research associate at the Rehabilitation Institute of Chicago, Illinois 60611, USA, and a research affiliate in biomedical engineering at Northwestern University, Evanston, Illinois 60208, USA.
Stefano Allesina is assistant professor in ecology and evolution and at the Computation Institute at the University of Chicago, Illinois 60637, USA.
Konrad P. Kording is associate professor of physical medicine and rehabilitation, physiology and applied mathematics at Northwestern University, and at the Rehabilitation Institute of Chicago.
Comments
Commenting on this article is now closed.
Mark Field
Essentially, this concept relies on the opinions of others, as evidenced by citations, as a stand-in for quality, a rather fungible and difficult-to-define concept. Beyond the somewhat obvious point that people with good publications will tend to publish good work in the future, and that those who publish widely do better still, were not similar derivatives-like approaches used to make quite an impact on the global economy?
Ralf Seppelt
Predicting our achievements?
"To blame for all the nonsense happening are not just those who started it, but also those who didn't prevent it" is one of the key statements of the head teacher Dr. Johann "Justus" Bökh at the boarding school in the novel "The Flying Classroom" by Erich Kästner (Puffin Books, 160 pp., ISBN 0140303111) - a quote that came to my mind when reading about the model of Acuna et al. It complements well our insights into scientists' performance based on bibliographic data. Although possibly intended to shed a critical light on the issue, papers like this will increasingly foster the focus on the h-index and on "academia's obsession with quantity" (Fischer et al., TREE 27, 473-477; 2012). It distracts from developing a common sense on criteria, which might be even more important for future research.
For example, in my field of research, environmental sciences, we need criteria to assess research initiatives that address global environmental change at different scales, sustainable development and the implementation of measures. This can only be achieved by promoting inter- and transdisciplinary science, integrative work, co-design and co-development. We need to seek an appropriate new model for this, and the first version won't be a regression equation.
Endel Poder
The idea of predicting scientific performance is a good one, but using the h-index as the measure of performance is not. This indicator is based on an arbitrary and unjustified combination of publication and citation counts (e.g. Lehmann et al., Nature 444, 1003-1004; 2006), is heavily biased by multiple authorship (e.g. Schreiber, New Journal of Physics 10, 040201; 2008), and this particular cumulative index is not a proper measure of future performance (e.g. Hirsch, PNAS 104, 19193-19198; 2007).
Murry Garry
The H-index: a small number with a big impact. First introduced by Jorge E. Hirsch in 2005, it is a relatively simple way to calculate and measure the impact of a scientist (Hirsch, 2005). It divides opinion: you either love it or hate it. I happen to think the H-index is a superb tool to help assess scientific impact. Of course, people are always favourable towards metrics that make them look good. So let's get this out into the open now: my H-index is 44 (I have 44 papers with at least 44 citations) and, yes, I'm proud of it! But my love of the H-index stems from a much deeper obsession with citations.
As an impressionable young graduate student, I saw my PhD supervisor regularly check his citations. A citation to a paper means that someone used your work or thought it relevant to mention in the context of their own. If a paper was never cited, and perhaps therefore also little read, was it worth doing the research in the first place? I still remember the excitement of the first citation I ever received, and I still enjoy seeing new citations roll in.
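For readers unfamiliar with the metric debated in these comments, the parenthetical definition above ("44 papers with at least 44 citations") generalizes to: the h-index is the largest h such that h of an author's papers each have at least h citations. A minimal sketch of that calculation, written here purely as an illustration and not taken from any commenter's work:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    The h-index is the largest h such that h papers each have
    at least h citations (Hirsch, 2005).
    """
    h = 0
    # Rank papers from most to least cited; h grows while the
    # paper at rank r still has at least r citations.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have
# at least four citations, but not five with at least five.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The same function confirms the example in the comment above: an author with 44 papers of 44 or more citations each has an h-index of at least 44, regardless of how many less-cited papers they also have.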