Ethical Credit Allocation in Science: The Case for the s-Index and Research Leadership Share

Abstract

The h-index has become a widely used metric in academic evaluation, influencing hiring, promotion, and funding decisions across disciplines. Its appeal lies in its simplicity, combining productivity and citation impact into a single number. However, the h-index does not account for differences in author contribution, granting full citation credit to all coauthors regardless of their role. This practice inflates credits and distorts—sometimes dramatically—researchers’ rankings, particularly in fields with large collaborative teams. While many variants of the h-index have been proposed to address this issue, none have achieved widespread adoption or integration into platforms like Google Scholar. In this study, we examine h-index data from 20 researchers across two contrasting fields and evaluate four modified indices that adjust for coauthorship. Among these, the s-index—a modified harmonic credit-based model—emerges as a promising companion to the h-index. The s-index is conceptually straightforward, computationally simple, and adaptable to both contribution-based and alphabetical author listings. It acknowledges the added value of each coauthor while distributing credit proportionally to their contribution. Our findings show that the s-index functions as a field-neutral credit equalizer, preserving simplicity while improving fairness. We propose that Google Scholar display the s-index alongside the h-index, enabling the calculation of a Research Leadership Share (RLS), defined as the ratio of s-index to h-index. A lower RLS would reflect greater coauthorship dilution, helping to discourage token authorship and encourage equitable credit practices. By aligning credit allocation with ethical authorship norms, the s-index supports a more just and transparent system for evaluating scholarly contributions.

Share and Cite:

Supernak, J. (2025) Ethical Credit Allocation in Science: The Case for the s-Index and Research Leadership Share. Advances in Applied Sociology, 15, 808-829. doi: 10.4236/aasoci.2025.159047.

1. Introduction

Since its introduction in Hirsch (2005), the h-index has become one of the most influential metrics in academic evaluation. It combines productivity and citation impact into a single number, influencing researcher rankings, hiring, promotions, and funding decisions worldwide. Entire researcher rankings are now built on this index, such as those maintained by Google Scholar, giving it both quantitative authority and normative power.

Despite its widespread adoption, the h-index has serious ethical and methodological shortcomings, most notably its inability to distinguish between genuine intellectual contribution and mere co-authorship. With the number of co-authors per paper rising, especially in the collaborative sciences, researchers can accumulate high h-indices with limited involvement in the actual research. This co-authorship inflation distorts the meaning of scholarly impact, creates perverse incentives for honorary or marginal authorship, and undermines the fairness of credit allocation in academia.

The h-index assumes equal credit for all co-authors, regardless of position or contribution. While this simplification may have once been practical, it is now incompatible with the diverse and often hierarchical nature of modern research teams. Although many h-index variants have been proposed to resolve the coauthorship-related problems, they often compromise usability, computational simplicity, or cross-disciplinary fairness.

In this paper, we critically examine the shortcomings of the h-index and analyze four h-index variants, searching for one that is fair to all co-authors of a publication, is conceptually sound, and is easy to calculate and implement. The variant should be applicable both to the most common contribution-based, non-alphabetical byline ordering and to fields where alphabetical ordering is the norm. It should also act as an equalizer for research fields that typically do not generate large numbers of citations, and consequently high h-index values, but at the same time do not need to share publication credit with many coauthors. Although this is a difficult expectation for any attempt of this kind, the new index should be reasonably applicable across a variety of research models, from single authorship to mega-teams. A further goal of this paper is a fair index that discourages problematic or honorary authorship, in contrast to the h-index, which directly rewards such practices. We aim to critically evaluate the real records of various researchers using publicly available Google Scholar data to better understand the magnitude of the bias currently existing under h-index statistics, particularly the apparent lack of fairness in world rankings based on that index. The null hypothesis H0 is that a researcher's ranking is not significantly different when calculated using the h-index versus an authorship-adjusted index.

2. Literature Review

Since its introduction (Hirsch, 2005), the h-index has emerged as one of the most widely adopted metrics in academic evaluation, influencing hiring, promotions, and global rankings. Its intuitive design—combining productivity and impact—has led to its integration into platforms such as Google Scholar and Scopus.

However, the h-index has been criticized for failing to account for co-authorship patterns and credit distribution. As Waltman (2016) notes in his comprehensive review, the metric’s simplicity comes at the cost of ignoring positional authorship and field-specific norms. As collaboration becomes more widespread—especially in mega-team science—the h-index increasingly favors individuals who appear frequently on author lists, regardless of their role. Ioannidis, Klavans, & Boyack (2018) have documented the rise of “hyper-prolific” authors, some of whom appear “to publish a paper every few days” often due to consortia-based authorship practices.

A prominent example is the ATLAS Collaboration at CERN, which lists 2850 authors on each of its 235 publications (Leeming, 2019). Because of the ground-breaking nature of CERN publications, they often attract large numbers of citations, some even exceeding 10,000 (Aad et al., 2012). If an ATLAS publication is cited 1000 times, each of the 2850 authors receives 1000 citations toward their h-index core. Thus, that single mega-physics publication contributes 2,850,000 citations to the Google Scholar database, a stark contrast with an area such as philosophy, where an exceptionally successful solo author of a publication with 1000 citations contributes just the unchanged 1000 total citations to the Google database. The big question is: Is this situation fair? How can a byline of 2850 coauthors in particle physics be justified when much smaller teams publish similarly complex and often groundbreaking articles in Science or Nature? Among 45 consecutive full-scale articles published in Science, only one exceeded 30 authors, with a median of 11 authors. Some scientists in large collaborations, feeling uncomfortable about “automatically” receiving full publication credit as consortium members, have advocated distinguishing “named” authorship from “consortium-based” authorship, and some even opt to exclude consortium papers from their CVs (Ioannidis, Klavans, & Boyack, 2018). Yet platforms such as Google Scholar do not accommodate this distinction, reinforcing citation inflation.

Research has shown that large teams produce publications that attract more citations than publications produced by single authors or small teams (Ronda-Pupo & Katz, 2021). The rapid accumulation of citations by some early-career scientists has raised concerns, with self-citation identified as a significant contributing factor (Soliman, 2025).

An additional factor that seems to grow over time is the issue of coauthorship integrity. A survey indicated that nearly 70% of European and 55% of American researchers had witnessed instances of problematic authorship (Chawla, 2023). In some cases, outright fraudulent publications and citations are being offered for purchase, revealing an alarming trend (Wilcox, 2024).

While much debate surrounds co-authorship inflation and the related citation surge, the critical question is: why do such practices persist and even grow? The short answer is that they pay off. They provide a direct advantage for one's h-index scores and rankings, which are displayed on Google Scholar for anybody to see. More importantly, the h-index is used in hiring, promotion, and grant allocation, without necessarily delving into how that Google Scholar score was actually earned: through research leadership or merely marginal contributions.

The problem of h-index bias resulting from growing co-authorship was already recognized by Hirsch when he first introduced the h-index: dividing the number of citations by the number of coauthors appeared fairer than assigning full credit to all authors (Hirsch, 2005). While this model would be an acceptable default solution for fields that intentionally use alphabetical author ordering, the vast majority of fields use contribution-based author listing on publication bylines. Accordingly, credit distribution rules need to be established and followed that account for both the number of authors and each author's byline position, along with any additional considerations.

Seemingly countless credit distribution models have been proposed to remedy these imbalances, offering a wide variety of ideas along fractional, geometric, arithmetic, harmonic, and special credit allocation systems (Jin et al., 2007; Burrell, 2007; Sidiropoulos, Katsaros & Manolopoulos, 2007; Schreiber, 2008; Egghe, 2008; Schreiber, 2009; Zhang, 2009; Alonzo et al., 2010; Egghe, 2011; Galam, 2011; Liu & Fang, 2012). Bornmann et al. (2011) offered systematic assessments of as many as 37 h-index variants. Generally, the proposed variants were intended to supplement rather than replace the popular h-index. This also applies to Hirsch's own variants, which supplement the basic h-index with additional indices addressing multiple-authorship accounting (Hirsch, 2010) or the research leadership issue (Hirsch, 2019).

Because of their intuitively logical concept, harmonic credit distribution models deserve special attention. Hodge & Greenberg (1981) first introduced the idea of harmonic credit allocation: assigning credit inversely proportional to an author's position on the byline. Their model had a normalized version in which the sum of all credits amounts to 100%. For that reason, the authors warned primary researchers not to expand their teams beyond the truly indispensable coauthors, as doing so would excessively dilute the lead author's credit. This normalized harmonic model gained renewed attention in empirical validations by Hagen (2008, 2010) and Donner (2020), showing strong alignment with perceived contributions in collaborative works.

The majority of citation allocation models stick with the fundamental principle that the sum of all coauthor contributions amounts to 1, or 100%. While “pure” in its intended form (a publication by a single author counts as 1, so why should any coauthored publication count for more?), it represents a truly dramatic departure from the current accounting system under the h-index regime: if a paper has 1000 coauthors and 1000 citations, each of those coauthors receives 1000 citations to be used in calculating their h-index. In that case, which is not entirely theoretical since some publications have as many as 3000 coauthors, a single citation is counted not once but 1000 times! Clearly, such generous credit allocation does not look fair, but sticking with the “pure” 100% total credit has its own problems. It assumes that scientific collaboration is merely a division of labor among coauthors who could otherwise be replaced by a single research leader, who would then logically become the single author. In today's interdisciplinary research this is not even feasible. It can be argued that additional authors bring additional expertise that is indispensable in interdisciplinary research and create added value that deserves recognition beyond the original 100%. Without necessarily using the “value-added” argument, some authors have proposed models that do not comply with the 100% sum rule, for example Berker's “golden-rate” geometric credit allocation model (Berker, 2018). Tscharntke et al. (2007) also broke with that rule, proposing the raw harmonic weights 1/i, although only for the first 10 authors; coauthors beyond the top ten on the byline would receive an arbitrary 5% that stays the same for any number of additional authors.

Some h-index variant proposals focus on prioritizing scientific leadership, seniority, and author position when judging publication contributions. This way, for example, the last author may receive special credit (Crispo, 2015), a paper may be included only if the author's h-index is greater than or equal to those of all coauthors (Hirsch, 2010), or a paper may be counted only for the alpha-author with the highest h-index among the coauthors (Hirsch, 2019).

Despite dozens of proposals for the very necessary supplementation of the h-index, none of the proposed variants has been universally accepted and adopted for routine display. The diversity of research fields and differing collaboration and authorship customs could be one reason. Another may be the often tedious citation re-adjustment procedures resulting from complicated models and formulas that also try to account for each field's special cases and exceptions. Therefore, despite the general consensus that the h-index needs to be permanently supplemented by a routine variant on research records, the typical Google Scholar record still lists only the h-index and i10 scores, without addressing the authorship and citation inflation problem.

This paper reports an effort to find a model that: 1) meets the general ethical considerations in a multi-author setting; 2) builds on a simple extension of the h-index using its fundamental combination of (adjusted) productivity and impact; 3) has intuitive logic of the fair allocation of credits and citations; and 4) utilizes the value-added principle resulting from collaborations. Rather than using strictly theoretical considerations, empirical data from the Google Scholar published records are used in the quest for such an additional index.

3. Methods

The aim of this study is to assess the fairness and diagnostic capability of various h-index alternatives in capturing individual scholarly contributions, particularly in the context of multi-authored research. Two contrasting areas of research are analyzed: Philosophy represents an area with low numbers of coauthors, single authorship being quite common, whereas Biology is an area where research teams work on a single project, resulting in many co-authors on the publication byline. For each area, a sample of ten researchers was selected in February 2025 from Google Scholar records following random sampling rules, additionally requiring h-index scores in a similar range (between 5 and 95) and a reasonable numerical match between the two samples. Table 1 shows the result of that sampling after sorting the h-index scores in increasing order.

Table 1. h-index scores of researchers under study.

| Author number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| h-index, Philosophy | 10 | 25 | 26 | 32 | 33 | 43 | 44 | 51 | 74 | 84 |
| h-index, Biology | 8 | 17 | 25 | 32 | 41 | 45 | 59 | 63 | 75 | 87 |

Since the h-index does not distinguish between solo and multiple authorship in credit allocation, four variants of the h-index were examined for all 20 researchers selected for this study. All Google Scholar citation records for those researchers were used to manually recalculate their respective shares of listed citations according to a given modification model, a tedious but necessary procedure. After adjusting the citation counts of all publications for each model, a procedure analogous to establishing the h-index was used to calculate, for each of the 20 researchers, a new value h' for each model, defined as the largest number h' of publications that each accumulated at least h' adjusted citations.
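The recalculation procedure can be sketched as follows. The publication record below is hypothetical, and `share_model` stands in for any of the four allocation models described next; this is a minimal illustration of the mechanics, not the study's actual data.

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= rank:
            h = rank
    return h

def adjusted_h(publications, share_model):
    """publications: list of (citations, n_authors, byline_position) tuples.

    Each citation count is multiplied by the author's credit share under the
    chosen model before the h-index procedure is reapplied.
    """
    adjusted = [c * share_model(n, i) for c, n, i in publications]
    return h_index(adjusted)

# Hypothetical record: four papers, each with 10 coauthors, evaluated under
# the strictly fractional model S_i = 1/N (Model 1 below).
pubs = [(120, 10, 3), (60, 10, 5), (40, 10, 2), (9, 10, 8)]
fractional = lambda n, i: 1.0 / n
print(h_index([c for c, _, _ in pubs]))   # original h-index: 4
print(adjusted_h(pubs, fractional))       # adjusted h': 3
```

The same `adjusted_h` helper works for every model in this section; only the `share_model` function changes.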

Looking for the best possible h'-index, four h-index variants were analyzed: a strictly fractional credit allocation scheme, two versions of geometric allocation, and a modified harmonic allocation model. Based on the results of previous studies, no arithmetic or irregular credit distribution model was used, as they did not appear competitive enough conceptually or vis-à-vis empirical data (Hagen, 2008). The first three models consistently followed the assumption that a scientific publication should count as 1 (or 100%). This is a commonly followed assumption in the credit allocation literature. However, it is worth realizing that such an assumption represents a dramatic departure from the current accounting system for the h-index, where the sum of credits is not 1 but N, a huge difference when the number of co-authors N is substantial. Also, the strict 100% total-sum principle does not account for the fact that legitimate coauthors bring an “added value” to the publication that goes beyond any effort of the first author. For this reason, the fourth model analyzed departs from the 100% sum-of-shares requirement.

The first h-index variant analyzed is the strictly fractional credit distribution model: instead of using Cj, the number of citations of a given author's jth publication, Cj/N was used, implying that all (1, 2, 3, …, i, …, N) co-authors contributed equally to the publication and deserve equal credit as a result. Such a model would be a logical default solution for a team that, for whatever reason, needs to use alphabetical listing on the publication byline.

The second h-index variant is a specific version of a geometric model that takes into consideration not only N, the total number of coauthors on a given publication byline, but also i, the position on that list occupied by the author under study. The logical, and common, rationale of a geometric model is that co-authors' contributions to the publication become gradually smaller going down the byline. It is assumed that the listing of the authors on the byline factually reflects their contributions, independently of the method by which the coauthors decided on that particular order. In this study, no special consideration was given to cases where the last co-author was intentionally listed last despite contributions that should have placed them higher on the byline. The reasoning behind that assumption is that if a mentor or corresponding author indeed deserved a better placement, they should have been listed where they truly belonged. An explanation of all authors' factual contributions is now a required condition for publication acceptance in many journals. (My own inspection of the CRediT-based explanations of coauthors' roles in over fifty 2025 Science articles did not raise any concern that a coauthor's position on the byline was unjustified.)

In this particular version of the geometric model, an arbitrary assumption was made that the credit share of the ith author on the byline is 20% higher than the share of the (i + 1)th author. The shares Si were calculated in such a way as to ensure that the sum of all shares amounts to exactly 1 (or 100%).

The third h-index variant is another version of the geometric credit allocation model. It is a classical model where the author's ith share is consistently 100% larger than the (i + 1)th share. As in the two previous models, a sum of all shares Si of exactly 100% was ensured. This third model has a steep decline of contribution shares along the byline and guarantees a significant share for the first author on the list. For example, if there were 100 coauthors on a publication, the lead author would still receive 50% of the publication credit at the expense of the authors further down the byline, who would receive marginal credit, if any at all. Coincidentally (I read that publication after my decision about the four models), the “golden ratio” of 1.618 used in the geometric model proposed by Berker (2018) falls just in the middle of the two geometric models analyzed here, which use ratios of 1.2 and 2.0, respectively, so the findings related to the second and third variants can serve as an additional commentary on that “golden ratio” model.
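Both geometric variants can be sketched with a single helper; the ratio (1.2 for the second variant, 2.0 for the third) is the only difference. Shares are printed as percentages here; the integer values in Table 2 follow by dropping the fractional part.

```python
def geometric_shares(n_authors, ratio):
    """Normalized geometric shares: the i-th share is `ratio` times the
    (i+1)-th share, scaled so that all N shares sum to exactly 1."""
    weights = [ratio ** (n_authors - i) for i in range(1, n_authors + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Five co-authors, shares in percent (compare Table 2, Models 2 and 3).
print([round(100 * s, 1) for s in geometric_shares(5, 1.2)])
# [27.9, 23.2, 19.4, 16.1, 13.4]
print([round(100 * s, 1) for s in geometric_shares(5, 2.0)])
# [51.6, 25.8, 12.9, 6.5, 3.2]
```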

The fourth and final h-index variant is a modified harmonic credit allocation model. The classical version of this model was considered by other authors in the past, with good agreement with empirical data (Hagen, 2010; Donner, 2020). That classical, normalized version complies with the sum of shares being 1 (or 100%). This is assured by the share allocation formula for the ith author, Equation (3.1):

Si = (1/i) / (1 + 1/2 + 1/3 + … + 1/N)  (3.1)

There are two problems with this classical model: one computational and one conceptual. The first is that for each new publication analyzed, the harmonic sum in the denominator needs to be recalculated again and again as N changes from one publication to the next. The second is the absence of the “value added” by respective coauthors beyond the sum of 1, an issue discussed previously. Therefore, a much simpler formula was used in the fourth model, Equation (3.2):

Si = 1/i  (3.2)

indicating that the value added by the ith author is 1/i independent of the value added by authors (i + 1), (i + 2), etc.—if they exist. Note that the shape of the decline (Equation (3.3))

Si / S(i+1) = (1/i) / (1/(i + 1)) = (i + 1)/i  (3.3)

is unaffected by the absence of N in the simple modified harmonic share formula Si = 1/i. The absence of N in the allocation formula does not ignore the number of coauthors entirely: if a coauthor is #20 on the byline, the publication has at least 20 coauthors, and the contribution shares of any additional coauthors would be less than 1/20 = 5%. Here the 1/i decline continues independently of N, unlike in Tscharntke et al. (2007), where the 5% share was kept unchanged for all coauthors beyond number 10 on the byline.
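The modified harmonic rule reduces to a one-line allocation; a minimal sketch:

```python
def modified_harmonic_citations(citations, position):
    """Citations credited to the author at byline position i: C * (1/i),
    independent of the total number of coauthors N."""
    return citations / position

# 100 citations: shares decline as 1/i regardless of N (Table 2, Model 4a).
print([int(modified_harmonic_citations(100, i)) for i in range(1, 6)])
# [100, 50, 33, 25, 20]
```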

Table 2 illustrates how a hypothetical 100 citations are distributed among an arbitrary five co-authors of a publication under six credit distribution models (values are rounded down to full citations).

Table 2. Citation share distribution under six allocation models.

| Model | Allocation | Formula | i = 1 | i = 2 | i = 3 | i = 4 | i = 5 | SUM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Fractional | Si = 1/N | 20 | 20 | 20 | 20 | 20 | 100 |
| 2 | Geometric A | Si = 1.2 S(i+1) | 27 | 23 | 19 | 16 | 13 | 98 |
| 3 | Geometric B | Si = 2.0 S(i+1) | 51 | 25 | 13 | 6 | 3 | 98 |
| 4a | Modified harmonic | Si = 1/i | 100 | 50 | 33 | 25 | 20 | 228 |
| 4b | Harmonically modified fractional | Si = HS/N | 45 | 45 | 45 | 45 | 45 | 225 |
| 5 | Current full credit | Si = 1 | 100 | 100 | 100 | 100 | 100 | 500 |

where the i = 1, …, 5 columns give the citation share of the ith author and HS is the harmonic sum, HS = 1 + 1/2 + 1/3 + … + 1/N.

Table 2 shows that the modified harmonic credit distribution (Model 4a) appears to be a suitable compromise between the current full credit offered to all coauthors and any model that requires the sum of all shares to be 1. It can be noted that while the strictly fractional distribution appears unfair to the authors at the beginning of the byline, the classical geometric distribution (Model 3) appears “stingy” to the authors at the end of the byline. Since the modified harmonic model relaxes the condition that the sum of all shares amounts to 100%, a similar adjustment is needed for publications that, for whatever reason, use a strictly alphabetical list of coauthors. My random Google Scholar sampling of 100 researchers found that only 5 of them intentionally use alphabetical listings. This seems to agree with a study showing that less than 4% of publications use intentional alphabetical listing, a drop from 9% in 1981 (Waltman, 2012). An inspection of all Science articles from the January-May 2025 period reveals not a single case of intentionally alphabetical coauthor listing; all articles have multiple authors, with a median of 10. However, some consortia, notably those associated with CERN subatomic research and their large numbers of authors, do use an alphabetical listing system. In order to allot credits to those authors consistently with the modified harmonic 1/i rule, a harmonic share multiplier, Equation (3.4),

(1 + 1/2 + 1/3 + … + 1/N)  (3.4)

needs to be applied in order to obtain the same sum of citation counts for both alphabetical and non-alphabetical systems with the same number of authors on the byline. In the case of 5 co-authors (Table 2), that harmonic sum multiplier is 2.283, which increases the individual citation allocations from 20 (Model 1) to 45 per author (Model 4b). This exercise ensures that the sums are the same; the discrepancy of 228 versus 225 in the last column of Table 2 is caused by rounding errors only.
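The alphabetical-byline adjustment (Model 4b) can be sketched by applying the multiplier of Equation (3.4) to the fractional share, reproducing the Model 4b row of Table 2:

```python
def harmonic_sum(n):
    """HS = 1 + 1/2 + 1/3 + ... + 1/N."""
    return sum(1.0 / k for k in range(1, n + 1))

def alphabetical_citations(citations, n_authors):
    """Model 4b: equal per-author allocation C * HS / N, so the total matches
    the modified harmonic (Model 4a) total for the same number of authors."""
    return citations * harmonic_sum(n_authors) / n_authors

print(round(harmonic_sum(5), 3))            # 2.283
print(int(alphabetical_citations(100, 5)))  # 45, matching Table 2
```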

4. Results

4.1. Finding Equivalent Values of the Four h-Index Variants for the 20 Researchers under Study

After manual recalculation of the original h-index values posted on Google Scholar for the 20 scientists randomly selected for this study, scores for the four h' modifications of the original h-index were determined. Table 3 shows those h' values for the 10 researchers in the Philosophy area, whereas Table 4 shows the same for the 10 researchers in the Biology area.

Table 3. Four h-index score modifications for ten researchers in the philosophy area.

| Researcher # | h-index | Fractional h1 | Geometric 1 h2 | Geometric 2 h3 | Modified harmonic h4 |
| --- | --- | --- | --- | --- | --- |
| 1 | 10 | 10 | 10 | 10 | 10 |
| 2 | 25 | 25 | 25 | 25 | 25 |
| 3 | 26 | 26 | 26 | 26 | 26 |
| 4 | 32 | 28 | 27 | 26 | 28 |
| 5 | 33 | 31 | 31 | 31 | 33 |
| 6 | 44 | 39 | 39 | 39 | 43 |
| 7 | 45 | 44 | 44 | 44 | 44 |
| 8 | 51 | 49 | 49 | 49 | 51 |
| 9 | 76 | 72 | 70 | 67 | 74 |
| 10 | 85 | 83 | 83 | 83 | 84 |

Table 4. Four h-index score modifications for ten researchers in the biology area.

| Researcher # | h-index | Fractional h1 | Geometric 1 h2 | Geometric 2 h3 | Modified harmonic h4 |
| --- | --- | --- | --- | --- | --- |
| 1 | 8 | 4 | 4 | 4 | 7 |
| 2 | 17 | 7 | 7 | 6 | 8 |
| 3 | 25 | 15 | 16 | 15 | 20 |
| 4 | 32 | 9 | 10 | 7 | 11 |
| 5 | 41 | 19 | 16 | 12 | 21 |
| 6 | 45 | 22 | 21 | 18 | 23 |
| 7 | 59 | 18 | 22 | 21 | 36 |
| 8 | 63 | 30 | 30 | 24 | 36 |
| 9 | 75 | 27 | 25 | 19 | 31 |
| 10 | 87 | 27 | 25 | 17 | 38 |

Visual inspection shows a major difference between the Philosophy and Biology areas. The Philosophy researchers' scores for any of the four h-index variants h' are only slightly (if at all) lower than the original h-index scores. In contrast, for the Biology researchers, all four h' scores show major reductions from the original h-index scores. The most drastic reduction occurs for the h3 index representing the classical geometric distribution of citation credits: this model offers little reward to authors placed far from the lead author on the byline. In contrast, the modified harmonic model (h4) shows the smallest reduction from the original h-index score.

Differences among individual Biology researchers are also noticeable. For example, all h' scores of Researcher #4 are lowered more drastically than those of Researcher #3, because Researcher #4's frequent low byline positions shrink the h'-core of relevant publications relative to the much larger original h-core.

4.2. Regression and ANOVA Analysis

In order to investigate those differences more closely, the scores of all four h-index alternatives are regressed against the original h-index scores, separately for the researchers in the Philosophy and Biology areas. The results are presented in Equations (4.1) through (4.8) below:

1) Philosophy area

h1 versus h:  h1 = -0.405 + 0.963h,  R² = 0.995  (4.1)

h2 versus h:  h2 = -0.212 + 0.951h,  R² = 0.992  (4.2)

h3 versus h:  h3 = 0.175 + 0.933h,  R² = 0.986  (4.3)

h4 versus h:  h4 = -0.286 + 0.986h,  R² = 0.997  (4.4)

2) Biology area

h1 versus h:  h1 = 3.577 + 0.315h,  R² = 0.804  (4.5)

h2 versus h:  h2 = 4.206 + 0.296h,  R² = 0.795  (4.6)

h3 versus h:  h3 = 4.939 + 0.207h,  R² = 0.610  (4.7)

h4 versus h:  h4 = 3.851 + 0.426h,  R² = 0.849  (4.8)

The R² coefficients of determination are much higher for Philosophy than for Biology. The slopes are also quite different for the two research areas. Model 3, the classical geometric distribution, consistently shows the worst h' versus h fit. Model 4 offers the best h' versus h fit in both areas studied. For further analysis, the scores of all four h' variants are normalized as hi/h (in %), as presented in Table 5 and Table 6.
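The regression coefficients can be verified directly from the tabulated scores. As a check, a least-squares fit of the Biology modified harmonic scores (Table 4) on the original h-index values (Table 1) reproduces the coefficients and R² of Equation (4.8); no external libraries are assumed.

```python
# Biology data: original h-index (Table 1) and modified harmonic h4 (Table 4).
h  = [8, 17, 25, 32, 41, 45, 59, 63, 75, 87]
h4 = [7, 8, 20, 11, 21, 23, 36, 36, 31, 38]

n = len(h)
mx, my = sum(h) / n, sum(h4) / n
sxx = sum((x - mx) ** 2 for x in h)                       # sum of squares of h
syy = sum((y - my) ** 2 for y in h4)                      # sum of squares of h4
sxy = sum((x - mx) * (y - my) for x, y in zip(h, h4))     # cross products

slope = sxy / sxx
intercept = my - slope * mx
r2 = sxy ** 2 / (sxx * syy)
print(round(intercept, 3), round(slope, 3), round(r2, 3))  # 3.851 0.426 0.849
```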

Table 5. The hi/h ratios for ten philosophy researchers (in %).

| Researcher # | h1/h | h2/h | h3/h | h4/h |
| --- | --- | --- | --- | --- |
| 1 | 100.0 | 100.0 | 100.0 | 100.0 |
| 2 | 100.0 | 100.0 | 100.0 | 100.0 |
| 3 | 100.0 | 100.0 | 100.0 | 100.0 |
| 4 | 87.5 | 84.4 | 81.3 | 87.5 |
| 5 | 93.9 | 93.9 | 93.9 | 100.0 |
| 6 | 88.6 | 88.6 | 88.6 | 97.8 |
| 7 | 97.8 | 97.8 | 97.8 | 97.8 |
| 8 | 96.0 | 96.0 | 96.0 | 100.0 |
| 9 | 94.7 | 92.1 | 88.2 | 97.4 |
| 10 | 97.6 | 97.6 | 97.6 | 98.8 |
| Average | 95.6 | 95.0 | 94.3 | 97.9 |

Table 6. The hi/h ratios for ten biology researchers (in %).

| Researcher # | h1/h | h2/h | h3/h | h4/h |
| --- | --- | --- | --- | --- |
| 1 | 50.0 | 50.0 | 50.0 | 87.5 |
| 2 | 41.2 | 41.2 | 35.3 | 47.1 |
| 3 | 60.0 | 64.0 | 60.0 | 80.0 |
| 4 | 28.1 | 31.3 | 21.9 | 34.4 |
| 5 | 46.3 | 39.0 | 29.3 | 51.2 |
| 6 | 48.9 | 46.7 | 40.0 | 51.1 |
| 7 | 30.5 | 37.3 | 35.6 | 61.0 |
| 8 | 47.6 | 47.6 | 38.1 | 57.1 |
| 9 | 36.0 | 33.3 | 25.3 | 41.3 |
| 10 | 31.0 | 28.7 | 19.5 | 43.7 |
| Average | 42.0 | 41.9 | 35.3 | 55.4 |

The results in Table 5 and Table 6 suggest a strong difference in hi/h ratios between the Philosophy and Biology fields. This is confirmed by the results of the two-way ANOVA analysis presented in Table 7 (significance established at the 5% level).

Table 7. ANOVA results: research fields versus individual researchers for four h-index variants.

| Comparison | Fields: Df | F-stat | P-value | Significant? | Persons: Df | F-stat | P-value | Significant? |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| h1/h | 1 | 288.63 | <0.001 | Yes | 9 | 1.57 | 0.256 | No |
| h2/h | 1 | 309.00 | <0.001 | Yes | 9 | 2.05 | 0.150 | No |
| h3/h | 1 | 309.92 | <0.001 | Yes | 9 | 2.54 | 0.090 | No |
| h4/h | 1 | 78.85 | <0.001 | Yes | 9 | 1.61 | 0.246 | No |

Table 7 demonstrates that despite differences among individual researchers in both fields analyzed, the only significant effect is that of the contrasting fields.

4.3. Prediction Comparisons

Regression Equations (4.1) through (4.8) allow for estimation of the values of the four hi variants corresponding to arbitrary values of the h-index for both the Philosophy and Biology areas. Table 8 shows the results of such an exercise.

Table 8. Predicted values of four h-index variants for arbitrarily selected four values of h-index for the philosophy (Phil) and biology (Biol) areas.

| Starting h-index value | h1 Phil | h1 Biol | h2 Phil | h2 Biol | h3 Phil | h3 Biol | h4 Phil | h4 Biol |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 25 | 23 | 11 | 23 | 11 | 23 | 10 | 24 | 14 |
| 50 | 47 | 19 | 47 | 19 | 46 | 15 | 49 | 25 |
| 75 | 71 | 27 | 71 | 26 | 70 | 20 | 73 | 35 |
| 100 | 95 | 35 | 94 | 33 | 93 | 25 | 98 | 46 |

As expected, the h4 variant, based on the modified harmonic distribution of shares, is the most generous of all in terms of acknowledging the contributions of secondary authors of a publication, a seemingly positive characteristic.

4.4. Sensitivity Analysis

The analysis performed for two contrasting fields demonstrates that the h-index alone is insufficient to properly represent authors' productivity and impact. An additional index needs to be fair both to theoreticians who mostly work alone and to those who work in teams.

The first case is virtually unaffected by any h-index modification, as the h' score reductions are typically below 10%, as the Philosophy analysis indicates. For Biology, the differences between the h-index and any of its variants are substantial.

Publications coming from mega-projects involving hundreds or thousands of coauthors deserve special consideration. Under the h-index methodology, if the number of authors is, say, 500 and the number of citations of their publication is 5000, each coauthor receives 5000 citations contributing to their h-index. The total number of citations resulting from that single publication is 2,500,000, a greatly inflated number indeed. A strictly fractional division of those citations would leave each coauthor with a modest count of just 10.

In search of the preferred citation allocation model, a simple sensitivity test is performed for the hypothetical case of a publication that generated 5000 citations, using all four h-index variants and varying the number of authors and the author's position on the byline. This is presented in Table 9.

Table 9. Sample calculations of citation shares for four h-index variants under different authorship scenarios.

| Number of Citations | Number of Authors | Byline Position | h1 | h2 | h3 | h4 |
|---|---|---|---|---|---|---|
| 5000 | 10 | 8 | 500 | 90 | 5 | 625 |
| 5000 | 20 | 8 | 250 | 26 | 0 | 625 |
| 5000 | 50 | 20 | 100 | 0 | 0 | 250 |
| 5000 | 50 | 50 | 100 | 0 | 0 | 100 |

Calculations presented in Table 9 show that variants h2 and h3 would not work for cases with large numbers of authors, as they would offer very small shares, if any, to coauthors who are not prominent on the publication byline. This is the main reason why neither geometric citation distribution model is flexible enough to be recommended as an h-index variant.

Arithmetic allocation models were not considered here at all: for a publication with a large number of coauthors, the citation allocation for lead authors is unreasonably low in any version of an arithmetic credit allocation model. Statistical tests have also shown that the arithmetic allocation system generally has the worst agreement with empirical data (Hagen, 2008).

Therefore, the real choice is between h1, the fractional allocation model, and h4, the modified harmonic allocation model. The h1 variant is not suitable for any non-alphabetic, contribution-based coauthor listing, as it ignores the coauthors' factual contributions to the publication, giving equal shares to all of them. It can still serve as a default model for publications that, for whatever reason, need to maintain a strictly alphabetic listing system. However, in order to account for the value added by the respective coauthors, the 1/N share of the total citation count needs to be multiplied by the harmonic summation constant, keeping the sum of allocated citations consistent with the total citation count produced by model h4 for non-alphabetic publication bylines.
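The contrast between the two finalist models can be sketched directly. The geometric variants h2 and h3 follow formulas introduced earlier in the paper and are omitted here; only the fractional (h1) and modified harmonic (h4) rules are shown, reproducing the corresponding columns of Table 9:

```python
def fractional_share(citations: float, n_authors: int) -> float:
    """Model h1: equal fractional split, C/N for every coauthor."""
    return citations / n_authors

def harmonic_share(citations: float, position: int) -> float:
    """Model h4: modified harmonic split, C/i for the author at byline position i."""
    return citations / position

# Scenarios from Table 9 (a publication with 5000 citations)
for n_authors, position in [(10, 8), (20, 8), (50, 20), (50, 50)]:
    print(n_authors, position,
          fractional_share(5000, n_authors),  # h1 column
          harmonic_share(5000, position))     # h4 column
```

Note how h4 depends only on the author's byline position, not on the total team size, which is why the position-8 author keeps 625 citations whether the byline lists 10 or 20 names.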

5. The s-Index

To summarize the previous analysis of the four h-index variants, we can now identify the preferred versions: h4 for non-alphabetic bylines and the harmonically modified h1 model for alphabetic bylines. Both versions will now receive the name s-index, formally defined as follows:

A) For contribution-based, non-alphabetic author listings: the s-index is the highest number of publications s that each have at least s citations calculated as Cj/i, where Cj is the total number of citations of publication j and i is the author's position on the byline of that publication.

B) For alphabetic author listings: the s-index is the highest number of publications s that each have at least s citations calculated as Cj(1 + 1/2 + 1/3 + … + 1/N)/N, where Cj is the total number of citations of publication j and N is the total number of authors on the publication byline.

If an author has a mix of publications with non-alphabetic and alphabetic bylines, the respective calculations follow formulation A or B, as appropriate. In the case of a hybrid byline that starts with a non-alphabetic, contribution-based order and then switches to an alphabetical listing of secondary authors, the allocation of the paper's citations Cj to the ith author in the non-alphabetic lead group follows Version A, with allocation Cj/i. If the author is in the alphabetic portion of the byline, Version B logically applies.
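Definitions A and B translate directly into code. The sketch below (helper names are ours, chosen for illustration) computes an author's s-index from a list of their publications, applying the Cj/i rule for contribution-ordered bylines and the harmonic-multiplier rule for alphabetic ones:

```python
def adjusted_citations(citations, position=None, n_authors=None, alphabetical=False):
    """Citations allocated to one author for one publication.

    Version A (contribution-based byline): Cj / i, where i is the
    author's byline position.
    Version B (alphabetic byline): Cj * (1 + 1/2 + ... + 1/N) / N.
    """
    if alphabetical:
        harmonic_sum = sum(1.0 / k for k in range(1, n_authors + 1))
        return citations * harmonic_sum / n_authors
    return citations / position

def s_index(papers):
    """Largest s such that at least s papers have adjusted citations >= s."""
    counts = sorted((adjusted_citations(**p) for p in papers), reverse=True)
    s = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            s = rank
    return s

# Hypothetical record: three contribution-ordered papers, one alphabetic one
papers = [
    {"citations": 30, "position": 1},                         # 30.0 allocated
    {"citations": 40, "position": 2},                         # 20.0 allocated
    {"citations": 9, "position": 3},                          # 3.0 allocated
    {"citations": 12, "n_authors": 4, "alphabetical": True},  # 12 * H_4 / 4 = 6.25
]
print(s_index(papers))  # 3
```

The allocated counts, sorted, are 30, 20, 6.25, 3; three of them are at least 3, but not four of them at least 4, so s = 3.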

The coauthorship inflation issue deserves a brief comment here. It is known that as N approaches infinity, the harmonic sum of 1/i also grows without bound. However, even in a mega-publication with 3000 authors, an arbitrary number of 1000 citations would be multiplied by a harmonic sum of about 8.584, producing a total of 8584 citations, a far cry from 3000 × 1000 = 3 million citations under the current h-index counting.
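This 3000-author example can be checked numerically, contrasting the total credit under the s-index rule with full counting:

```python
n_authors = 3000
citations = 1000

# Harmonic sum H_N = 1 + 1/2 + ... + 1/N
harmonic_sum = sum(1.0 / i for i in range(1, n_authors + 1))

total_allocated = citations * harmonic_sum   # s-index accounting
total_full_counting = citations * n_authors  # current h-index accounting

print(round(harmonic_sum, 3))  # 8.584
print(round(total_allocated))  # 8584
print(total_full_counting)     # 3000000
```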

6. Research Leadership Share

The proposed s-index is not meant to replace the h-index. It relies on the same brilliant idea behind Hirsch's h-index of combining productivity and impact in one simple term. The s-index is intended to accompany the h-index as an additional, insightful measure that addresses the issue of multiple authorship and its impact. For any researcher, a comparison between their h-index and s-index can offer valuable insight into the typical role they play: mainly that of a leader or mainly that of a follower. To quantify this, a simple ratio of s-index to h-index, called the Research Leadership Share, is introduced in Equation (6.1):

RLS = s-index / h-index (6.1)

We define the Research Leadership Share (RLS) as the ratio of a researcher’s s-index to their h-index (RLS = s/h). This metric captures the overall pattern of a researcher’s leadership across their body of work, rather than assessing any single publication. Leadership is defined as making the primary intellectual, conceptual, or organizational contributions to research and resulting publications. In ethically grounded authorship practices supported by the obligatory CRediT statements, such leading roles are typically reflected by byline positions at or near the top of the author list, while more peripheral roles appear further down. A higher RLS suggests that a researcher’s cumulative citation impact derives primarily from work in which they played a central, driving role. Conversely, a very low RLS may indicate that their influence stems largely from coauthored work where their contribution was merely supportive, marginal, or even symbolic.

Practice will show which values of RLS should be labeled as high, medium, or low. As a starting point, an RLS above 60% could be labeled "high"; above 80%, "very high"; and below 30%, "low".
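As a sketch, the RLS computation of Equation (6.1) and the provisional labels above look like this (the exact boundary handling, e.g. whether exactly 60% counts as "high", is our assumption):

```python
def rls(s_index: int, h_index: int) -> float:
    """Research Leadership Share, Equation (6.1): RLS = s-index / h-index."""
    return s_index / h_index

def rls_label(value: float) -> str:
    # Provisional thresholds from the text; boundary handling is assumed.
    if value > 0.80:
        return "very high"
    if value >= 0.60:
        return "high"
    if value < 0.30:
        return "low"
    return "medium"

# Values taken from Table 10
print(round(rls(36, 366), 2), rls_label(rls(36, 366)))    # 0.1 low
print(round(rls(221, 346), 2), rls_label(rls(221, 346)))  # 0.64 high
print(round(rls(104, 112), 2), rls_label(rls(104, 112)))  # 0.93 very high
```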

Generally, research areas with large teams face a harder task in achieving a high RLS value. However, even on large research teams there are leaders as well as secondary, or even marginal, contributors. Table 10 presents the results of s-index and RLS calculations for the world's top researchers as of January 2025 (World 2,424,675 Scientists H-Index Ranking 2025; adscientificindex.com/h-index-rankings/).

Table 10. Global ranking comparison by h-index, s-index, and RLS—January 2025 Google Scholar data.

| World Rank | Researcher | h-index | s-index | RLS (s/h) |
|---|---|---|---|---|
| 1 | CERN Physicist A | 366 | 36 | 0.10 |
| 2 | CERN Physicist B | 356 | 16 | 0.04 |
| 3 | CERN Physicist C | 348 | 9 | 0.03 |
| 4 | Biologist | 346 | 221 | 0.64 |
| n/a | Jorge Hirsch | 112 | 104 | 0.93 |

The results show highly comparable, very high h-index scores for the world's top four scientists. However, s-index scores calculated for those researchers using publicly accessible Google Scholar data paint a dramatically different picture. All three CERN researchers have many publications as members of various consortia but a very limited record of publications where their name appears at or near the top of a byline. Despite many highly cited publications, the very large size of the contributing teams produces an s-index core that is dramatically smaller than their overinflated h-index core. In stark contrast, the biologist listed as the world's scientist #4, who often works on research teams and frequently assumes lead roles on them, has a high h-index score along with high s-index and RLS scores. As a reference, Jorge Hirsch, a theoretical physicist and the inventor of the h-index, has an impressive RLS of 93%.

Analysis of the data presented in Table 10 leads to a firm rejection of the hypothesis H0 that a researcher's ranking is not significantly different when using the h-index versus a coauthorship-adjusted index like the s-index.

Under a fair research accounting system, some of the world's top 10 researchers would place very far from the top, a situation that requires serious attention and corrective measures by the world scientific community.

7. Discussion

7.1. Usefulness of the S-Index and Research Leadership Share

Any attempt to recommend a new h-index variant for general use should be examined against the goals set for that index in this publication's introduction.

  • Ethical appeal of credit allocation: Let us use as a reference the list of desired model attributes for ethically acceptable solutions to the coauthor problem given in (Berker, 2018). Under the s-index regime, the coauthor's credit is: 1) strictly positive (all coauthors receive some positive credit for a publication); 2) proportionally relative (credit is assigned according to the relative importance of contributions); 3) empirically founded (empirical data have shown that "among four counting methods, harmonic counting best represents human expectations of author contribution" (Hagen, 2008)); 4) parsimonious (based on a single simple rule); 5) independent of the lower-ranked coauthors (the raw, non-standardized harmonic 1/i distribution rule assures that explicitly); and 6) relatively non-inflationary (a strictly non-inflationary version would mean that the total credit cannot increase with the number of authors, which would be contrary to the logical and ethically acceptable notion of "value added" to the publication beyond the capacity of a single author).

  • Simplicity: The h-index takes all relevant citation counts into the h² core calculation. The s-index only requires dividing each actual citation count by i, the author's position on the byline of each relevant publication, to establish an s² core analogous to the h² core. This applies to the vast majority of cases with contribution-based byline listings. For strictly alphabetic listings, the harmonic sum multiplier needs to be applied to the otherwise simple Cj/N calculation. It is worth noting that if an author has many papers under the same alphabetic consortium list, their harmonic sum multiplier will be the same "magic number" (e.g., for a consortium of N = 1000 authors, that multiplier will be a consistent value of 7.485).

  • Fairness: The s-index appears to be an acceptable compromise in allotting publication credit among coauthors, starting with the assumption that coauthors' positions on the byline factually represent their contributions to the publication. An important feature of contribution-based non-alphabetic listings is that the first author always receives full credit, the same as a single author. Additional credits follow a logical sequence of weights that decreases as the byline expands. For example, author #4 gets 25% of the credit while author #25 gets only 4%. Importantly, unlike the geometric models, the harmonic allocation models are not too steep, leaving meaningful contribution shares even for authors far down the byline if the publication is important enough to generate many citations. Even for alphabetic listings of publications produced by large consortia, the harmonic sum multiplier gives each author the best chance, of all the allocation formulas studied, that their share of the citations will help expand their s-index.

  • Suitability across different research fields: The paper demonstrates the equalizing potential of the s-index across diverse fields of research. As shown, philosophers' publications typically generate smaller numbers of citations than the team-based publications of biologists. This leads to overall h-index scores that are generally much higher for biologists than for philosophers. However, an average philosopher's h-index of 100 would translate into an s-index of 98, whereas for an average biologist, an h-index of 100 translates into an s-index of only 42. Thus, for the average philosopher to reach an s-index of 100, they would have to reach an h-index of 102, an exceptionally high number for that field; for the average biologist to reach a comparable s-index of 100, they would have to reach an h-index of 238, likewise an exceptionally high score for that field. Reaching s = 100 therefore appears similarly challenging in either of these highly contrasting fields. The equalizing property of the s-index should work well for other research fields too.

  • Ability to discourage unethical publication contribution practices: This is a very important advantage of the s-index. The current research accounting system almost encourages researchers to participate in multiple, even marginal, contributions to many projects and resulting publications, as all contributions count the same as lead contributions while requiring significantly less time and effort. Under the s-index concept, that "smart" strategy, which worked well for increasing the h-index, is no longer beneficial: it is actually counterproductive. A marginal contribution at the end of the byline by a "ghost" or "honorary" contributor may translate into an increase in the h-index, but the citation share involved is too small to be relevant to the overall s-index score. Thus, a stable s-score divided by an increased h-score would only lower one's RLS = s/h, not a desired change in one's profile.

  • Ability to encourage a leadership role in research: The s-index adequately rewards research leaders; those appearing as #1 authors on a publication byline routinely receive full credit independently of the number of coauthors. Those listed near the top of the byline are also rewarded well. With the now-required listing of actual contributions by all coauthors, byline orders do not appear to be biased. An inspection of a few dozen Science articles from the January-May 2025 period shows that corresponding authors are most commonly also lead authors. This seems to contradict the notion of giving special publication credit to corresponding authors who appear last on the byline. The CRediT reporting rules appear to solve this problem in an increasing number of journals, where one's position on the byline now needs to be explicitly justified and approved by all members of the team. Table 10 suggests that research leaders identified through the h-index accounting procedure need not be research leaders in the normal sense of the word: simply appearing on long coauthor lists may be enough. Thus, if h-index-based rankings are now found to be highly problematic, the h-alpha idea of assigning the entire publication credit to the author with the highest h-index score, as suggested in (Hirsch, 2019), would not appear fair in many cases. Rather, it may create an example of the "Matthew effect," where undeserved credit is replenished further.

  • Ability to review and revise scientific rankings: This is the very issue that prompted this publication. The very idea of publishing a joint world ranking of researchers across all fields is problematic to start with: differences among fields and research practices in different geographic settings may affect the results. Yet people like rankings, and research is no exception. To many, the Google Scholar rankings of researchers appear fair because they are based on numbers rather than opinions. However, they are based exclusively on h-index accounting procedures that carry an inherent bias of citation inflation. Table 10 demonstrated the huge scale of that bias when s-indices are presented alongside h-indices; the perception of who the real research leaders are changes as a result. This confirms the usefulness of both the s-index and RLS, not only within the Google Scholar rankings but also in ranking candidates for academic positions and honors.
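The field-equalizing arithmetic described under "Suitability across different research fields" above can be checked directly, using the h = 100 predictions from Table 8. This is a back-of-the-envelope sketch that assumes the s-to-h ratio stays roughly constant across the range:

```python
# Predicted s-index per h-index point at h = 100 (Table 8, h4 column)
s_per_h = {"philosophy": 98 / 100, "biology": 42 / 100}

target_s = 100
for field, ratio in s_per_h.items():
    needed_h = round(target_s / ratio)  # h-index needed to reach s = 100
    print(field, needed_h)  # philosophy 102, biology 238
```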

7.2. Defending a Sum of Shares above 100% for the s-Index

The vast majority of h-index variants stick with standardized models requiring that the sum of coauthor contribution shares be strictly 100%. The s-index represents a major departure from that common rule. There are several good reasons why the non-standardized version of the s-index may actually be superior to any standardized coauthor share allocation scheme.

1) Ability to Capture "Value Added" through Collaboration. Normalization forces coauthor shares to always sum to 1.0, treating collaboration as a zero-sum game. In reality, additional coauthors often strengthen the paper through complementary expertise, cross-disciplinary insights, or broader credibility. Expertise from a coauthor representing an emerging field of science may not even have been recognizable in the past, but may be indispensable today. Allowing shares to exceed 100% acknowledges that the whole can be more than the sum of its parts, a principle rooted in sociology, systems theory, and team science.

2) Reflection of Real Research Dynamics. Modern science is increasingly collaborative and interdisciplinary. Research teams generate intellectual output that is not reducible to the work of a single “average” author. Non-normalized shares better reflect this collective intellectual surplus by not artificially suppressing the contribution credit of each coauthor.

3) Generosity without Overinflating. In normalized schemes, as the number of coauthors grows, each author’s fractional credit automatically shrinks. This creates a bias against collaboration and large-scale team science. The s-index concept avoids this problem. Unlike full counting (where credit = 100% per coauthor), the s-index down-weights contributions harmonically. Yet, unlike standardized schemes, it does not strictly “cap” collective output, striking a reasonable balance between “overly generous” and “overly stingy” credit allocation among coauthors.

4) Better Match with Perceived Coauthor Credit. Surveys show that researchers often perceive their own (and sometimes their coauthors') contributions as higher than normalized shares would assign. Non-normalized models like the s-index align more closely with real perceptions of fairness in credit allocation, which makes them sociologically more valid and acceptable to scholars.

5) Consistency with Multidimensional Contribution Accounting. The CRediT system currently used in journals treats publication work as a set of orthogonal roles (conceptualization, data curation, writing, etc.). Because the roles are separate dimensions, there is no forcing of a single 100% total across the team of coauthors.

7.3. Limitations of This Research

The s-index's efficacy and applicability are demonstrated in this paper using limited data sets from only two contrasting disciplines. To justify generalizing the findings beyond this research, larger-scale validation will be necessary.

In addition, the proposed thresholds for RLS values (e.g., >60% for "high", <30% for "low") are based on the Google Scholar records examined for this publication. They will require empirical validation with larger datasets representing a variety of research fields.

8. Conclusion

As scientific collaboration continues to evolve, often in the direction of larger teams and more complex authorship structures, the limitations of existing bibliometric tools like the h-index have become increasingly clear. While the h-index remains a convenient and intuitive measure of productivity and impact, it fails to distinguish between lead contributors and peripheral coauthors. This property creates a structural incentive for strategic coauthorship and inflates scholarly rankings in ways that are ethically problematic and misleading to anyone who can easily access Google Scholar data but may not be familiar with how the h-index is actually established.

To address these challenges, this paper analyzed four variants of the h-index in search of a new variant that would meet the criteria of simplicity, interdisciplinary applicability, conceptual attractiveness, and the ability to supplement the h-index with additional insight into biases resulting from the current accounting of research efforts by members of large teams. The s-index, based on a modified harmonic allocation model, was found to be the best of the four variants analyzed.

The analysis of a researcher’s s-index together with the routine h-index allows for calculation of the Research Leadership Share (RLS) ratio, which quantifies a researcher’s typical role in their research efforts and resulting publications. These metrics preserve the conceptual clarity of the h-index while restoring ethical fidelity by recognizing the true authorship position and collaboration value.

Empirical analysis of researchers from both high-collaboration and solo-intensive disciplines demonstrates that the s-index acts as a disciplinary equalizer, correcting inflated metrics in fields dominated by mega-authorship without penalizing genuine contributors. Furthermore, the RLS provides an interpretable and scalable indicator of intellectual leadership, with potential utility in hiring, promotion, and funding decisions.

We recommend that major academic platforms such as Google Scholar, Scopus, and ORCID incorporate both the s-index and RLS alongside the h-index in researcher profiles. Doing so would promote fairness in academic credit allocation, discourage unethical authorship inflation, elevate true research leadership, and support early-career scholars in building transparent academic reputations.

Obviously, there is no perfect index, and the s-index has not yet been widely tested. Additional work is warranted to test the universality of this simple index, and some fine-tuning for specific applications is possible. However, the s-index's logical clarity and computational simplicity lend themselves to straightforward, routine algorithms that could display the s-index alongside the h-index and i10-index on Google Scholar, and they may prove that keeping this index intact for broad international comparisons and rankings is worthwhile.

We feel that the time has come to shift the scholarly evaluation paradigm from quantity-based citation counting to contribution-sensitive leadership metrics. The s-index and RLS offer a practical, ethical, and computationally simple path toward that goal.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Aad, G. et al. (2012). Observation of a New Particle in the Search for the Standard Model Higgs Boson with the ATLAS Detector at the LHC. Physics Letters B, 716, 1-29.
[2] Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E., & Herrera, F. (2010). Hg-Index: A New Index to Characterize the Scientific Output of Researchers Based on the H- and G-Indices. Scientometrics, 82, 391-400.
[3] Berker, Y. (2018). Golden-Ratio as a Substitute to Geometric and Harmonic Counting to Determine Multi-Author Publication Credit. Scientometrics, 114, 839-857.
[4] Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H. (2011). A Multilevel Meta-Analysis of Studies Reporting Correlations between the H Index and 37 Different H Index Variants. Journal of Informetrics, 5, 346-359.
[5] Burrell, Q. L. (2007). On the H-Index, the Size of the Hirsch Core and Jin's A-Index. Journal of Informetrics, 1, 170-177.
[6] Chawla, D. S. (2023). Unearned Authorship Pervades Science. Nature.
[7] Crispo, E. (2015). A New Index to Use in Conjunction with the H-Index to Account for an Author's Relative Contribution to Publications with High Impact. Journal of the Association for Information Science and Technology, 66, 2381-2383.
[8] Donner, P. (2020). A Validation of Coauthorship Credit Models with Empirical Data from the Contributions of PhD Candidates. Quantitative Science Studies, 1, 551-564.
[9] Egghe, L. (2008). Mathematical Theory of the H- and G-Index in Case of Fractional Counting of Authorship. Journal of the American Society for Information Science and Technology, 59, 1608-1616.
[10] Egghe, L. (2011). The Hirsch Index and Related Impact Measures. Annual Review of Information Science and Technology, 44, 65-114.
[11] Galam, S. (2011). Tailor Based Allocations for Multiple Authorship: A Fractional GH-Index. Scientometrics, 89, 365-379.
[12] Hagen, N. T. (2008). Harmonic Allocation of Authorship Credit: Source-Level Correction of Bibliometric Bias Assures Accurate Publication and Citation Analysis. PLOS ONE, 3, e4021.
[13] Hagen, N. T. (2010). Harmonic Publication and Citation Counting: Sharing Authorship Credit Equitably—Not Equally, Geometrically or Arithmetically. Scientometrics, 84, 785-793.
[14] Hirsch, J. E. (2005). An Index to Quantify an Individual's Scientific Research Output. Proceedings of the National Academy of Sciences, 102, 16569-16572.
[15] Hirsch, J. E. (2010). An Index to Quantify an Individual's Scientific Research Output That Takes into Account the Effect of Multiple Coauthorship. Scientometrics, 85, 741-754.
[16] Hirsch, J. E. (2019). Hα: An Index to Quantify an Individual's Scientific Leadership. Scientometrics, 118, 673-686.
[17] Hodge, J. E., & Greenberg, S. J. (1981). A Harmonic Approach to Author Credit Allocation. Science, 212, 1234-1235.
[18] Ioannidis, J. P. A., Klavans, R., & Boyack, K. W. (2018). Thousands of Scientists Publish a Paper Every Five Days. Nature, 561, 167-169.
[19] Jin, B., Liang, L., Rousseau, R., & Egghe, L. (2007). The R- and AR-Indices: Complementing the H-Index. Chinese Science Bulletin, 52, 855-863.
[20] Leeming, J. (2019). How to Manage a Multi-Author Megapaper. Nature, 575, S36-S37.
[21] Liu, X. Z., & Fang, H. (2012). Modifying H-Index by Allocating Credit of Multi-Authored Papers Whose Author Names Rank Based on Contribution. Journal of Informetrics, 6, 557-565.
[22] Ronda-Pupo, G. A., & Katz, J. S. (2021). The Power Law Relationship between Citation Impact and Multi-Authorship Patterns in Articles in Information Science & Library Science Journals. Scientometrics, 114, 919-932.
[23] Schreiber, M. (2008). To Share the Fame in a Fair Way: H-Index by Fractional Counting. Scientometrics, 76, 471-480.
[24] Schreiber, M. (2009). A Case Study of the Modified Hirsch Index Hm Accounting for Multiple Coauthors. Journal of the American Society for Information Science and Technology, 60, 1274-1282.
[25] Sidiropoulos, A., Katsaros, D., & Manolopoulos, Y. (2007). Generalized Hirsch H-Index for Disclosing Latent Facts in Citation Networks. Scientometrics, 72, 253-280.
[26] Soliman, A. (2025). 'Precocious' Early-Career Scientists with High Citation Counts Proliferate. Nature, 637, 525-526.
[27] Tscharntke, T., Hochberg, M. E., Rand, T. A., Resh, V. H., & Krauss, J. (2007). Author Sequence and Credit for Contributions in Multiauthored Publications. PLOS Biology, 5, e18.
[28] Waltman, L. (2012). An Empirical Analysis of the Use of Alphabetical Authorship in Scientific Publishing. Journal of Informetrics, 6, 700-711.
[29] Waltman, L. (2016). A Review of the Literature on Citation Impact Indicators. Journal of Informetrics, 10, 365-391.
[30] Wilcox, C. (2024). How Easy Is It to Fudge Your Scientific Rank? Meet Larry, the World's Most Cited Cat. Science.
[31] Zhang, C. (2009). The E-Index, Complementing the H-Index for Excess Citations. PLOS ONE, 4, e5429.

Copyright © 2026 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.