
  • Comment

Future impact

Predicting scientific success

Daniel E. Acuna, Stefano Allesina and Konrad P. Kording present a formula to estimate the future h-index of life scientists.


References

  1. Hirsch, J. E. Proc. Natl Acad. Sci. USA 102, 16569–16572 (2005).

  2. Redner, S. J. Stat. Mech. Theory Exp. 3, L03005 (2010).

  3. Peterson, I. ScienceNews 2 December 2005; available at http://go.nature.com/iawd5o.

  4. Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E. & Herrera, F. J. Informetr. 3, 273–289 (2009).

  5. Hirsch, J. E. Proc. Natl Acad. Sci. USA 104, 19193–19198 (2007).

  6. Zou, H. & Hastie, T. J. Roy. Stat. Soc. B 67, 301–320 (2005).

  7. Dwan, K. et al. PLoS ONE 3, e3081 (2008).

  8. Ginther, D. K. et al. Science 333, 1015–1019 (2011).

  9. Allesina, S. PLoS ONE 6, e21160 (2011).


Author information

Corresponding author

Correspondence to Daniel E. Acuna.

Supplementary information


About this article

Cite this article

Acuna, D., Allesina, S. & Kording, K. Predicting scientific success. Nature 489, 201–202 (2012). https://doi.org/10.1038/489201a


Comments

Commenting on this article is now closed.

  1. Essentially, this concept operates on the opinions of others, as evidenced by citations, to gauge quality — a fungible and difficult-to-define concept. Beyond the somewhat obvious point that people with good publications will tend to publish good work in the future, and that those who publish widely do better still, were not similar derivatives-like approaches used to make quite an impact on the global economy?

  2. Predicting our achievements?

    "To all the nonsense happening, not only those who started it are to blame, but also those who did not prevent it" is one of the key statements of the head teacher Dr. Johann "Justus" Bökh at the boarding school in the novel "The Flying Classroom" by Erich Kästner (Puffin Books, 160 pp., ISBN 0140303111) — a quote that came to my mind when reading about the model of Acuna et al. It complements our insights into scientists' performance based on bibliographic data very well. Although possibly intended to shed a critical light on the issue, papers like this will increasingly foster the focus on the h-index and on "academia's obsession with quantity" (Fischer et al., TREE 27, 473–477; 2012). It distracts from developing a consensus on criteria, which might be even more important for future research.

    For example, in my field of research (environmental sciences) we need criteria to assess research initiatives that address global environmental change on different scales, sustainable development, and the implementation of measures. This can only be achieved by promoting inter- and transdisciplinary science, integrative work, co-design and co-development. We need to seek an appropriate new model for this, and the first version won't be a regression equation.

  3. The idea of predicting scientific performance is a good one, but using the h-index as the measure of performance is not. This indicator is based on an arbitrary and unjustified combination of publication and citation counts (e.g. Lehmann et al., Nature 444, 1003–1004; 2006), is heavily biased by multiple authorship (e.g. Schreiber, New Journal of Physics 10, 040201; 2008), and this particular cumulative index is not a proper measure of future performance (e.g. Hirsch, PNAS 104, 19193–19198; 2007).

  4. The H-index: a small number with a big impact. First introduced by Jorge E. Hirsch in 2005, it is a relatively simple way to calculate and measure the impact of a scientist (Hirsch, 2005). It divides opinion: you either love it or hate it. I happen to think the H-index is a superb tool to help assess scientific impact. Of course, people are always favorable towards metrics that make them look good. So let's get this out into the open now: my H-index is 44 (I have 44 papers with at least 44 citations) and, yes, I'm proud of it! But my love of the H-index stems from a much deeper obsession with citations.

    As an impressionable young graduate student, I saw my PhD supervisor regularly check his citations. A citation means that someone used your work or thought it relevant to mention in the context of their own. If a paper was never cited, and perhaps therefore also little read, was it worth doing the research in the first place? I still remember the excitement of the first citation I ever received, and I still enjoy seeing new citations roll in.
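    The H-index discussed throughout these comments has a simple operational definition (Hirsch, 2005): a scientist has index h if h of their papers have each been cited at least h times. A minimal sketch in Python — the function name and the sample citation counts are illustrative, not taken from the article:

    ```python
    def h_index(citations):
        """Return the h-index: the largest h such that h papers
        have at least h citations each (Hirsch, 2005)."""
        h = 0
        # Rank papers from most to least cited; at rank r, a paper
        # contributes to the index only if it has >= r citations.
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers with these citation counts.
    # The top 4 papers each have >= 4 citations, the 5th has only 3.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4
    ```

    Note that the index is insensitive to how heavily the top papers are cited: `[100, 100, 4, 3]` and `[5, 5, 4, 3]` both yield h = 4, which is one root of the criticisms raised in comment 3.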
