Monday, December 14, 2009

On the Absurdity of the Use of Journal Impact Factors as a Measure of Individual Academic Excellence

Recently the University Grants Commission of India, as well as our University, has begun the process of using the impact factor of the journals in which faculty publish as a basis for rewarding academic performance. The truth, however, is that the impact factor of a journal, while perfectly reasonable as a way of evaluating journal performance or rank, is almost completely discredited as a means of evaluating individual performance. To quote Wikipedia (Impact Factor):

"The impact factor is often misused to evaluate the importance of an individual publication or evaluate an individual researcher. This does not work well since a small number of publications are cited much more than the majority - for example, about 90% of Nature's 2004 impact factor was based on only a quarter of its publications, and thus the importance of any one publication will be different from, and in most cases less than, the overall number. The impact factor, however, averages over all articles and thus underestimates the citations of the most cited articles while exaggerating the number of citations of the majority of articles. Consequently, the Higher Education Funding Council for England was urged by the House of Commons
Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published."

Yet the UGC, and following it Indian universities, including ours, have blithely set in motion an elaborate exercise precisely to use journal impact factors as a metric for academic performance! Such belated imitation of long-passé international fashions is not unknown in our ex-colonial country. Many issues are involved in the absurdity of implementing this "reform" long after the Journal Impact Factor (JIF) has been thoroughly discredited as a measure of individual scientific performance, not the least of which is an implicit acceptance that Indian academics can never hope to achieve globally competitive academic status. It is telling that individual citation measures are simply not considered, either by the UGC or by this University. For example, one could count the citations received by a particular paper in each of the preceding three years, which applies the same idea as the JIF to individuals; or (just to cook up a new one I have not seen used before) one could track the change in the square of the h-index over the previous year. Such measures, though also imperfect, are still much more reliable for gauging individual performance than journal impact factors, which only compare journals on the average and have nothing to say about individual merit. Could this be because they do not lend themselves to the misinterpretations pointed out above and below?
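For concreteness, here is a minimal sketch in Python of how such individual measures could be computed, assuming per-paper citation counts are available from some citation index; delta_h_squared is the made-up measure mentioned above, not an established metric, and all numbers are invented for illustration.

# Sketch of individual citation metrics (a hypothetical illustration,
# assuming per-paper citation counts are available from an index).

def h_index(citations):
    # h = largest h such that at least h papers have >= h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def delta_h_squared(citations_last_year, citations_this_year):
    # The made-up measure mentioned above: the change in the square
    # of the h-index over the previous year.
    return h_index(citations_this_year) ** 2 - h_index(citations_last_year) ** 2

# A researcher whose papers have these cumulative citation counts has h = 4:
print(h_index([30, 25, 20, 4, 1]))                              # -> 4
print(delta_h_squared([28, 20, 15, 2, 1], [30, 25, 20, 4, 1]))  # -> 16 - 9 = 7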


To appreciate the cogency of these remarks one must first recall the definition of the impact factor. Wikipedia defines it thus:

"In a given year, the impact factor of a journal is the average number of citations to those papers that were published during the two preceding year. For example, the 2003 impact factor of a journal would be calculated as follows:

A = the number of times articles published in 2001 and 2002 were cited by indexed journals during 2003

B = the total number of "citable items" published in 2001 and 2002. ("Citable items" are usually articles, reviews, proceedings, or notes; not editorials or Letters-to-the-Editor.)

2003 impact factor = A/B

(Note that 2003 impact factors are actually published in 2004; they cannot be calculated until all of the 2003 publications have been received by the indexing agency.)"
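To make the arithmetic of this definition concrete, here is a minimal sketch in Python; the function name and the numbers are invented purely for illustration.

def impact_factor(a_citations, b_citable_items):
    # A = citations received during the year by articles the journal
    #     published in the two preceding years
    # B = "citable items" the journal published in those two years
    return a_citations / b_citable_items

# Hypothetical journal: 450 citations in 2003 to the 150 citable items
# it published in 2001 and 2002.
print(impact_factor(450, 150))  # -> 3.0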

Now, with such a definition, the absurdity of using the impact factor as a measure of individual performance becomes easy to demonstrate with quite realistic-looking examples. Imagine an author (say a physicist, for concreteness) who publishes a paper in 2005 in a "low" impact factor journal (say JIF = 2.0), and whose paper receives 20 citations in 2005, 25 citations in 2006 and 30 citations in 2007 (clearly he will have set his subfield agog!). At the same time, another author publishes an article in 2008 in a "high" impact factor journal (say JIF = 3, the actual cutoff suggested for Physics faculty); but that JIF is based on the journal's performance in 2005 and 2006, as revealed in citations during 2007 (see the definition above). According to evaluation criteria based on the journal impact factor, the academic performance of the faculty member whose paper is cited 20 times or more each year, and in fact 30 times in 2007, is judged less worthy of recognition and encouragement than the mere fact of publication in 2008 in a JIF = 3 journal, even though that article, like most publications in so-called high impact factor journals, is statistically near certain never to be cited even 3 times a year (see the first quote from Wikipedia above). This makes it evident that the proposal to implement JIF-based assessment of academic performance is not based on the identification of excellence but on the dressing up of mediocrity with a pseudo-objective measure actually applicable to the rating of completely different entities, namely journals, not individuals at all. The sketch below puts the arithmetic of this comparison in miniature.
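(A hypothetical sketch using the numbers from the example above.)

# Author A: one paper in a JIF-2.0 journal, cited heavily year after year.
author_a_citations = {2005: 20, 2006: 25, 2007: 30}
print(sum(author_a_citations.values()))  # -> 75 citations in three years

journal_b_jif = 3.0
# The JIF criterion scores author B's 2008 article as 3.0 and author A's
# paper as 2.0, even though the typical article in journal B (given the
# skewed citation distribution) is unlikely to reach even 3 citations a
# year, while author A's paper demonstrably earns 20-30 per year.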
This "category mistake" of rating individuals by a journal-level average is aided by an implicitly subservient attitude that treats merely being accepted by the established scientific powers, by being allowed to publish in their journals, as an acceptable ("realistic", "objective") consolation substitute for a genuine measure of scientific excellence as revealed by consistent citation by one's peers. The question of using actual citation measures adapted to, and designed for, the evaluation of individual scientific performance is simply brushed under the carpet, because it is implicitly assumed to be too hard to achieve a high score on them. Such half measures can never lead to scientific excellence, since the first precondition of a dynamic and self-confident science is truthfulness.


The UGC and university authorities should wake up to this developing absurdity, which will soon entrench itself as established wisdom and entail another 50 years of academic mediocrity. If they have the ability and self-confidence to use numerical measures objectively, and not à la Disraeli, i.e. as the evil third in "lies, damned lies and statistics", they should adopt measures in line with the best metrics available globally for gauging individual performance, and not further nourish the absurd laurels for mediocrity that are the bane of the quest for excellence in the Indian academic system.