The science of biomedical publication has changed rapidly with the advent of information technology. The Journal Impact Factor (JIF) assesses the performance of a biomedical journal and is monitored periodically by Thomson Scientific, USA, an organization founded by Eugene Garfield (Garfield, 1955, 2006). Garfield introduced the term JIF in 1963 (Garfield, 2006).
A citation indicates that a study has influenced the scientific community: the more frequently a study is cited, the greater its influence on the scientific community (Dong et al., 2005). The citation count used here is the total number of times a study is used as a reference, counted over the 2 years preceding the given year for which the JIF is calculated (Garfield, 2006).
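On this definition, the 2 year JIF for a year y can be written as follows (a standard formulation of the measure, not an equation reproduced from the sources cited here):

```latex
\mathrm{JIF}_{y} \;=\; \frac{C_{y}}{N_{y-1} + N_{y-2}}
```

where C_y is the number of citations received in year y by items the journal published in years y-1 and y-2, and N_{y-1} and N_{y-2} are the numbers of citable items published in those two years.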
The Hirsch index (h-index) was introduced by Jorge Hirsch as an indicator of lifetime achievement, as measured by the number of citations received (Hirsch, 2005). The h-index is an original and simple measure incorporating both the quantity and the visibility of publications (Egghe and Rousseau, 2006). A further advantage of the h-index is that the data needed for its calculation are easy to access in a database, without any off-line data processing (Batista et al., 2006).
The proposed new measure of research performance immediately provoked reactions in the research community (Ball, 2005; Holden, 2005), and it is a valuable supplement to the number of citations and the JIF for assessing the output of scientists. Shortly afterwards, researchers adapted the h-index to journals (Braun et al., 2005), institutions (Bar-Ilan, 2007; Smith, 2008) and topics, providing an effective supplement to using the JIF alone to assess journals. This study presents an analysis of the JIF and the h-index as measures of journal utility, with a view to ranking the reproduction biology literature.
MATERIALS AND METHODS
This study drew the JIF from the 2008 and 2004-2008 Journal Citation Reports (JCR) and h-indices computed automatically from the ISI Web of Science (Thomson Scientific). The Reproduction Biology item was retrieved from the JCR Subject Category Selection and sorted by JIF, and the 2 year and 5 year JIF of all 25 journals were recorded.
Each of the 25 Reproduction Biology journals was retrieved from the Web of Science database with its publications sorted by the number of Times Cited; the highest rank number that does not exceed the corresponding Times Cited value is exactly the h-index of the journal for the given year (Braun et al., 2005). The h-index of the reproduction biology journals was determined for eight years (2001-2008) on August 1, 2009 and reviewed independently by two investigators.
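The rank-based procedure above can be sketched as follows (a minimal illustration, not the actual retrieval scripts used in this study; the citation counts in the example are made up):

```python
def h_index(times_cited):
    """Return the h-index: the largest rank h such that the h-th
    most-cited item has at least h citations (Hirsch, 2005)."""
    ranked = sorted(times_cited, reverse=True)  # sort by Times Cited, descending
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this rank still has at least as many citations as its rank
        else:
            break      # lower ranks cannot qualify once the condition fails
    return h

# Hypothetical Times Cited values for one journal's items in a given year
print(h_index([25, 8, 5, 4, 3, 2, 1, 0]))  # -> 4
```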
The relative h-index of the reproduction biology journals was adapted from Rousseau (2006): the h-index divided by the number of publications and multiplied by 100 for ease of analysis. SPSS 13.0 was used to analyze the linear relationship between the JIF and the h-index.
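The study performed these computations in SPSS; an equivalent sketch of the relative h-index and the Pearson correlation, with made-up numbers rather than the study's data, might look like this:

```python
from math import sqrt

def relative_h_index(h, n_publications):
    """Relative h-index as adapted from Rousseau (2006):
    h-index divided by the number of publications, times 100."""
    return 100.0 * h / n_publications

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical h-indices and publication counts for three journals
hs = [40, 25, 10]
pubs = [800, 500, 400]
rel = [relative_h_index(h, n) for h, n in zip(hs, pubs)]
print(rel)  # -> [5.0, 5.0, 2.5]
```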
RESULTS AND DISCUSSION
The correspondence between the JIF and the h-index of the 25 reproduction biology journals recognized by the ISI Web of Science was assessed. As shown in Table 1, journal quality was ranked using the JIF (2008), the 5 year JIF, the h-index and the relative h-index. There was a very high correlation between the JIF (2008) and the 5 year JIF (2004-2008) (r = 0.98, p<0.001), suggesting that either the 2 year JIF (2008) or the 5 year JIF (2004-2008) can reflect the quality of the journals.
However, there were also a few discrepancies between the 2 year JIF (2008) and the 5 year JIF (2004-2008). For example, the top three journals by JIF (2008) are Human Reproduction Update, Fertility and Sterility and Human Reproduction. The top three by the 5 year JIF are the same journals, but ranks 2 and 3 are exchanged: Human Reproduction and Fertility and Sterility (Table 1). When journal quality was ranked by the h-index (2001-2008), the top three reproduction journals are Biology of Reproduction, Human Reproduction and Human Reproduction Update (Table 1). Using the relative h-index, the ranking differs from the 2 year JIF (2008), the 5 year JIF (2004-2008) and the h-index (2001-2008); for example, the top three journals are Human Reproduction Update, Invertebrate Reproduction and Development and Animal Reproduction Science.
It was surprising to find Invertebrate Reproduction and Development ranked so high, since this journal is last or third from last when ranked by the JIF (2008) or the 5 year JIF (2004-2008). The data showed no correlation (r = 0.19, p = 0.35) between the h-index (2001-2008) and the relative h-index (2001-2008).
Figure 1 shows the correlation between the 2 year or 5 year JIF and the h-index (2001-2008) or relative h-index (2001-2008) of the 25 reproduction biology journals. There is a high correlation between the h-index (2001-2008) and the 5 year JIF (2004-2008) or 2 year JIF (2008) (r = 0.64, 0.60; p = 0.001, 0.002) (Fig. 1a, c), whereas there is only a weak correlation between the relative h-index (2001-2008) and the 5 year JIF (2004-2008) or 2 year JIF (2008) (r = 0.42, 0.38; p = 0.048, 0.057) (Fig. 1).
The JIF is primarily a measure of scientific utility rather than scientific quality. Originality and merit are the fundamental principles that determine the quality of a study, and biomedical journals that give priority to them are highly cited and highly ranked. The reliability of the JIF as an indicator of journal quality has recognized limitations (Dellavalle et al., 2007; Dong et al., 2005; Kurmis, 2003; Moed, 2005). For example, the JIF has defects such as excessively high values for review journals, unfair assessment of some journals in slowly developing fields and excessive sensitivity, in a few journals, to an accidental excess of uncited publications (Kurmis, 2003; Garfield, 2006).
Table 1: Journal impact factors and h-index of 25 reproduction biology journals
Fig. 1: The relationship between the journal impact factors and the h-index
Other providers offer alternative journal rankings, but most deal with a small subset of the literature in any discipline. The h-index has been suggested as an alternative that is reliable, robust and easily computed (Hirsch, 2005; Van Raan, 2006; Braun et al., 2005; Rousseau, 2007; Wei et al., 2009).
Although the use of different performance measures (JIF vs. h-index) yields somewhat different rank orders of the journals, the correlations between the measures found in this and other studies indicate that the development of the journal h-index and its variants has contributed hardly any empirical increment over one another or over the JIF. These findings from the current study therefore indicate a redundancy among the various indicators of scientific performance in empirical application: the various indicators seem to measure similar aspects of scientific performance.
The advantage of the h-index over the JIF (and over most of the h-index variants) is its manageability: it serves as an evaluative measure for assessing the research output of scientists (Hirsch, 2005), research groups (Van Raan, 2006) and journals (Braun et al., 2005), owing to its simple calculation from the Web of Science or other databases.
Any bibliometric indicator used to measure scientific performance should be checked carefully for its validity and its ability to represent scientific quality correctly. There is no doubt that the relative h-index in the current study is imperfect. The relative h-index is the h-index divided by the number of publications. The study showed that the relative h-index has only a weak correlation with the JIF (p≥0.05) (Fig. 1) and no correlation with the h-index (p>0.05). One reason may be that some reproduction journals include a large number of meeting abstracts, which are rarely cited by researchers. Moreover, because the number of publications of a journal is variable, the relative h-index is also variable. The relative h-index may therefore fail to reflect real publication citation.
As a basic principle, it is always prudent to use several indicators, such as the JIF and the h-index, to measure research performance. This study shows that the JIF on the one hand and the h-index on the other measure different things. The h-index is a useful supplementary indicator and an enrichment of the bibliometric toolset, but it is certainly not suited to substitute for established indicators such as the JIF, which long ago became standard in bibliometric research.