It is a curious human activity that at various, but usually specific, times of the year, there is heightened anticipation among certain groups of individuals as they await the announcement of awards, prizes, elections, and other coveted forms of recognition. Inevitably there is joy and disappointment associated with these events. It seems that editors of science journals are not immune to this phenomenon, at least as it affects their publishing duties, thanks to the now well-entrenched (and often dreaded) impact factor. Whether they want it or not, this index will be calculated for their journal, generally in the late spring (northern hemisphere), and it will be scrutinized, usually quite widely, by the global scientific community. This quasi-quantitative measure of putative excellence will then dictate, to a very considerable degree, the behavior of prospective authors for the ensuing year. A low value will tend to divert strong papers to competitors, or so it is generally felt, and this in turn tends to lower future impact factors. The result is a self-accelerating, "rich get richer, poor get poorer" process, because the calculation is fundamentally a citations-per-paper ratio and weaker papers are cited less (or so goes the conventional wisdom).
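For context, the index in question is essentially a two-year citations-per-paper ratio; the sketch below assumes the standard Journal Citation Reports definition, with the index provider's own counting rules deciding what qualifies as a "citable item":

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]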
The underlying problems with citation-based evaluations have been discussed and debated at length, and the pros and cons of these assessments have been well and clearly stated. These range from the idiosyncrasies of scientists and how they assemble reference lists to the inherent difficulty of comparing journals that publish reviews with those that do not (it being widely, and probably correctly, thought that reviews generally garner more citations than research papers). Even more controversial are the uses and abuses to which these numbers are put, which often, shamefully, include career-altering decisions; this too has been thoroughly aired. Despite these very real concerns, however, citation analysis is clearly here to stay. Why, one might ask? Probably because there is an innate human desire, likely enriched in scientists, to quantify that which is inherently unquantifiable, and even to assess that which is inherently unassessable. As a species, we seem obsessed with rating things, human performance in particular, and science and scientific knowledge are no exceptions. At least in research, it is hard to see the real worth of these exercises or what added value these ratings confer on the contributions (papers) themselves, other than perhaps personal aggrandizement, although there are undoubtedly individuals who would debate this.
These remarks, as you might guess, have been prompted by the newest release of impact factors for science journals, including, of course, for the first time, Molecular & Cellular Proteomics (MCP). And indeed, had we scored poorly, these remarks could readily have been attributed to sour grapes. However, we achieved a quite creditable score (the MCP impact factor may be obtained by contacting mcp@asbmb.org), as did the other two journals primarily focused on proteomics that received ratings. We view this very pleasing result as an indication that there is strong and growing interest in this exciting field and that it is being well served by three solid journals. Given the flaws in impact factors, this may be considered a self-serving interpretation, but it can also be viewed as a challenge to editors and authors alike to focus on solving the real problems that confront the field rather than dwelling on artificial assessments. In the end, it is the scientific record and its contents that will remain, long after any associated impact factors have been forgotten.