The Consistency Measurement in Document Publications and Citation Using t-Index

Abstract

In recent times, handling uncertainty and measuring it has been considered one of the major issues by data science and applied mathematics researchers. The problem becomes more complex when the data sets are dynamic; one suitable example is the Scopus data set, which changes continuously. In this case, the precise measurement of consistency in document publication and citation is an open issue, made harder by the fact that parameters such as the h-index and the document count can be manipulated over a period of time. To resolve this issue, a time-based index called the "t-index" is illustrated in this paper with an example. The method measures the randomness in document publication and citation using the average h-index and its entropy.


1. Introduction

In recent times, analyzing the research performance of a researcher or institute has become a crucial task. It becomes more complex when this analysis depends on a large database such as Scopus. Parameters like the h-index try to address this issue to some extent [1]. However, several limitations of the h-index were found later; in the case of multiple co-authors or a large number of published documents, the h-index can be manipulated [2]. This paved the way for other metrics such as the i10-index [3], g-index [4], e-index [5], and s-index [6], as well as further metrics [7] [8] [9] [10]. Recently, these metrics were analyzed using the Scopus data set [11] [12] [13], and it was found that they affect the impact factor in significant ways [14] [15]. In this process, a problem arises when measuring the consistency of an institute beyond its document publications and citations. The reason is that every institute has a fixed number of authors, laboratories and infrastructure, which can produce only a certain volume of publications in a given period of time. However, some institutes try to manipulate the system to earn more money, or run education as a business rather than as research. To achieve this, papers are published with multiple authors, or with co-author names added without any relevant expertise. In that case, a problem arises when investigating the founding institute or author of a given concept in a technical paper. Review papers also create an issue, as they receive more citations, which are distributed equally among all co-authors. Consequently, problems arise in comparing institutes or authors with the same, a lower, or a higher h-index. It is reported that many authors have a low h-index and few citations but have received the Nobel Prize. Hence, the precise measurement of consistency is a major issue for the research community. This paper focuses on controlling this issue using the average h-index and its entropy.

To measure the randomness in document publications and citations, entropy theory [16] is used in this paper. One reason is that this theory is considered one of the effective methods for randomness and uncertainty analysis [17] [18]. This paper tries to connect entropy theory with the measurement of uncertainty and randomness in document publications and citations produced by "monkey" and "ghost" researchers [19]. These types of researchers try to manipulate the system using co-authors [20]. It can be observed via the number of papers and co-authors: such authors have a large number of co-authors but a small number of papers, as reflected in Scopus. At the same time, they list more than 100 distinct areas of expertise, which seems impossible. Sometimes they try to increase their citations dynamically via conferences they organize, which can be observed from the document publication and citation trend over time [21]. One example is authors who publish more than 200 Scopus papers per year, which means almost one paper per working day and looks infeasible. The problem becomes more complex when the names of posthumous or honorary authors are added to inflate document counts and citations. One reason for such acts is that every co-author receives the same document count, citations and h-index, which affects intellectual measurement [22] [23]. It becomes more crucial when papers are retracted from Scopus, since the retracted document count and citations still enter the intellectual measurement. Hence, the impact of the work matters more than the impact of the journal for intellectual measurement. This issue becomes more complex when analyzing current research trends or identifying domain-based experts in a multi-decision process intended to stop brain drain [24]. These things happen because the quality of document publications and citations is a matter of Turiyam [25]. The reason is that document publications and citations tend to increase or decrease based on the domain rather than on the technicality of the papers [26] [27] [28]. They also depend on the type of paper: review papers receive more citations than technical papers, and citation rates vary by domain [26]. Hence, the consistency of the work should be measured rather than the impact of the journal, citations, document count, or h-index [29]. The issue becomes more crucial when papers are retracted from Scopus [30]. These studies motivated the author to introduce a method based on Shannon entropy and the time-based average h-index. The objective is to find an alternative way to measure the randomness in document publications and citations, as shown in Figure 1.

One of the significant outcomes of the proposed method is that it provides a way to characterize the consistent and inconsistent performance of any institute.

The remaining part of the paper is structured as follows: Section 2 provides preliminaries about the h-index and other metrics related to this paper. Section 3 contains the proposed method, with its illustration in Section 4, followed by conclusions, acknowledgements and references.

2. Preliminaries

In this section, some of the metrics related to the t-index are explained for better understanding:

Definition 1: (h-index) [1]: An author has h-index n if n of the author's research papers have at least n citations each; it can be investigated using the algorithm shown in Table 1. A limitation of this index arises when there are multiple co-authors. At the same time, a highly cited paper may become irrelevant after some time. This means the h-index does not provide a precise, time-based citation analysis or measurement of influence. To resolve this issue, the mock h-index was introduced.
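For clarity, a minimal Python sketch of the usual h-index computation is given below; the function name and the sample citation list are purely illustrative and not taken from the cited algorithm.

def h_index(citations):
    # Sort citation counts in descending order and find the largest n
    # such that the n-th most cited paper has at least n citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have at least 4 citations each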

Definition 2: (Mock h-index) [9]: It was introduced to measure a quantity that is statistically similar to the h-index and has the same dimensions as the h-index:

h_m = \left( \frac{C^2}{P} \right)^{1/3}

It can be observed that this index also does not provide any time-based analysis or any measurement of randomness.
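As a quick worked example, assuming C denotes the total number of citations and P the number of published papers (as in [9]), an author with C = 1000 citations over P = 50 papers obtains

h_m = \left( \frac{1000^2}{50} \right)^{1/3} = 20000^{1/3} \approx 27.1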

Definition 3: (m-quotient) [7]: It is defined as m = h/n, where h is the h-index and n is the number of years since the author's first publication. This index tries, to some extent, to compensate for career length and large citation counts. However, a small change in the h-index causes a large change in the m-quotient, and the index is unable to measure the randomness in citations. To deal with this, Shannon entropy is considered useful [6]. This paper adopts Shannon entropy to measure the randomness in citations.

Figure 1. The objective of the current paper, shown graphically.

Definition 4: (Entropy) [16]: It measures the randomness or uncertainty in a given data set as the average information content, based on the uniformity of a distribution, as follows:

H = -\sum_{i=1}^{N} P(x_i) \log P(x_i)

where P is the probability distribution of the random variable x_i. Recently, entropy has been applied to uncertainty measurement in data analysis. This paper focuses on measuring the randomness in citations within a time window. To achieve this goal, a method is proposed in the next section.
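As a minimal illustration (the function name and the sample citation counts below are hypothetical), the following Python snippet computes the Shannon entropy of two yearly citation distributions: a uniform one, which attains the maximum value ln N, and a highly skewed one, which falls far below it.

import math

def shannon_entropy(counts):
    # Entropy of the distribution obtained by normalizing the raw counts;
    # zero-count entries are skipped, following the convention 0 log 0 = 0.
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

uniform = [10, 10, 10, 10]   # citations spread evenly over 4 years
skewed = [37, 1, 1, 1]       # same total, concentrated in one year

print(shannon_entropy(uniform))  # about 1.386 (= ln 4, the maximum for N = 4)
print(shannon_entropy(skewed))   # about 0.349 (far below the maximum)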

3. Proposed Method (t-Index)

In this section, a method is proposed to measure the randomness in citations using entropy theory. Let us suppose an author receives c_i citations in the ith year of a research career spanning N years. In this case, the entropy can be computed as:

T(\text{time-based citation}) = -\sum_{i=1}^{N} P(c_i) \ln P(c_i), \quad \text{for } c_i > 0

where P(c_i) = c_i / C_t and C_t is the total number of citations received by the author. Although entropy characterizes the uniformity of the distribution, its value needs to be normalized to make it comparable across different distributions, for which it is divided by the factor:

T^{*} = \ln(10N), \quad \text{for } N \geq 1

where N is the number of years in the academic career of a researcher, characterized by the difference in years between the author's first and last publications. Since 0 \leq P(c_i) \leq 1, the value of T/T^{*} is very small, so it is scaled up using the inverse of the logarithm, that is, the natural exponential function. Thus, we have a quantity that measures the uniformity of the yearly distribution of citations, i.e.,

u = e^{T/T^{*}}

It can also be interpreted as the research consistency of an individual over the years. It can now be refined using the time frame as follows:

t = \begin{cases} 0, & \text{for } N = 0 \\ 4\,\bar{h}_y\, e^{T/T^{*}}, & \text{for } N \geq 1 \end{cases}

where T = -\sum_{i=1}^{N} P(c_i) \ln P(c_i), with P(c_i) = c_i / C_t,

and T^{*} = \ln(10N),

and \bar{h}_y = \left( \sum_{i=1}^{N} h_i \right) / N,

and C_t = total number of citations,

and c_i = number of citations in the ith year,

and h_i = value of the h-index in the ith year,

and N = number of years in the academic/research career of the individual, i.e., the difference in years between the first and the last publication.

Here, 4 is an arbitrary scaling constant that can be changed based on user requirements; most of the time, experts want to measure performance over the last 3 to 4 years. In this way, anyone can evaluate distinct values t_1 and t_2 for distinct time frames, and two cases arise: i) t_1 = t_2: the performance of the chosen author or institute is consistent; ii) t_1 > t_2 (or vice versa): the individual's performance is better in the t_1 time frame (or vice versa).
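A minimal Python sketch of this computation is given below, assuming the yearly citation counts and yearly h-index values are already available as lists; the function and variable names are chosen only for illustration and are not part of the original formulation.

import math

def t_index(yearly_citations, yearly_h, scale=4):
    # yearly_citations[i]: citations received in the i-th career year
    # yearly_h[i]: h-index value in the i-th career year
    n = len(yearly_citations)
    if n == 0:
        return 0.0                                  # t = 0 when N = 0
    c_total = sum(yearly_citations)                 # C_t
    # Shannon entropy T of the yearly citation distribution
    T = -sum((c / c_total) * math.log(c / c_total)
             for c in yearly_citations if c > 0)
    t_star = math.log(10 * n)                       # normalization factor ln(10N)
    h_bar = sum(yearly_h) / n                       # average yearly h-index
    return scale * h_bar * math.exp(T / t_star)

# Consistent profile: the same citations and h-index every year
t1 = t_index([50] * 10, [12] * 10)
# Bursty profile: a similar citation total concentrated in two years
t2 = t_index([240, 240] + [2] * 8, [12] * 10)
print(t1, t2)   # the consistent profile yields the larger t-index

Evaluating the same function over two different time windows gives the values t_1 and t_2 that can then be compared as described above.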

This means the t-index will be higher when the citations of each research paper exceed the number of papers published in that year, whereas the h-index tends to remain unaffected. In this way, one can easily approximate the lower bound of the t-index as zero in the case of zero publications. The upper bound of the t-index can be approximated using the fact that the entropy of a distribution over N possible values is bounded above by its logarithm, therefore T \leq \ln N, as shown in Figure 2. In the next section, the proposed method is illustrated using computer science data sets collected for some institutes from Scopus. A comparison between the t-index and the h-index for the same institutes is also given for better understanding.
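As a quick sanity check of this bound, matching the setting of Figure 2 (the numbers are chosen only for illustration), an author with a perfectly uniform yearly citation distribution and a constant yearly h-index of 12 over N = 50 years attains T = \ln 50, so

t \leq 4 \times 12 \times e^{\ln 50 / \ln 500} \approx 48 \times e^{0.63} \approx 90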

4. Illustrations

This paper introduces the measurement of citations using entropy theory and a time-based h-index, using the data set shown in [12]. The data analysis is done using the pandas library of Python, as discussed in detail in [26]. The motivation is that the h-index can be manipulated using multiple co-authors and random citations [29] [30]. To resolve this issue, the t-index was proposed in Section 3. The h-index of some Indian institutes and the corresponding t-index computation are shown in Table 1. It can be observed that the t-index can be higher even for lower values of h. This means that institutes with consistent performance over the years, without randomness in citation or document publication, obtain a higher t-index, which cannot be identified via the h-index alone.
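A minimal pandas sketch of this workflow is shown below; the column names and the tiny inline data set are hypothetical placeholders for the Scopus export used in [12], and the grouping logic simply applies the t-index definition of Section 3 to each institute.

import math
import pandas as pd

# Hypothetical yearly data: one row per institute and year
df = pd.DataFrame({
    "institute": ["A", "A", "A", "B", "B", "B"],
    "year":      [2019, 2020, 2021, 2019, 2020, 2021],
    "citations": [100, 110, 105, 5, 300, 10],
    "h_index":   [10, 11, 12, 9, 14, 15],
})

def t_index(group, scale=4):
    c = group["citations"].astype(float)
    n = len(c)
    if n == 0 or c.sum() == 0:
        return 0.0
    p = c / c.sum()
    T = -sum(pi * math.log(pi) for pi in p if pi > 0)   # entropy T
    t_star = math.log(10 * n)                           # ln(10N)
    h_bar = group["h_index"].mean()                     # average yearly h-index
    return scale * h_bar * math.exp(T / t_star)

print(df.groupby("institute").apply(t_index))
# Institute A (steady citations) obtains the higher t-index despite a lower
# average h-index than institute B (bursty citations), in the spirit of Table 1.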

Figure 2. Growth of the t-index for a consistent yearly h-index of 12 over 50 years.

Table 1. The h-index and t-index values of institutes in computer science research.

The following information can be extracted from Table 1 and Figure 3.

1) The t-index is higher when there is less randomness and uncertainty in document publications, even if the h-index is low. This means the t-index measures the consistency of document publications and citations. It is not affected by the older/younger researcher issues that occur with the h-index.

2) It can be observed that IIIT Hyderabad and IISc Bangalore have almost equal t-index values. This means IIIT Hyderabad is as consistent in research output as IISc Bangalore over the given span of time, even though IISc Bangalore has the much higher h-index of the two.

3) For old universities such as BHU, AMU, Mumbai, Madras, or Allahabad University, the t-index is low even though their h-index is above the average h-index of the country. This means these universities have not worked consistently over the given academic span.

4) It can be observed that IIT Delhi has a smaller document count but the highest t-index, meaning it has been consistent over the period. However, IIT Kanpur has a lower t-index, which means IIT Kanpur has not been consistent over the period; it obtained some good-quality papers in the given period. In a similar manner, the private institutes VIT, Amity and Thapar are not consistent as per the t-index, whereas Amrita, SRM and Sathyabama have tried to be consistent. In the same way, the performance of other institutes can be analyzed using the t-index, with data taken from Scopus.

5) The proposed method shows that the consistency of research publication in a given period can be measured based on publications per author and their outcomes.

Figure 3. The t-index for a consistent h-index of 4, 8 or 12 over the given 50 years.

In this way, the proposed method is able to identify consistent document publication and citation even for lower h-index values, as shown in Figure 3. It may help in controlling brain drain [24]. However, it fails to measure citations of retracted papers, multiple-author weightage, out-of-domain papers, posthumous author papers, journal-to-journal citations, conference-to-conference citations, within-organization citations and their consistency [26] [30]. Hence, the author will focus on solving the following problems in the near future:

1) Sometimes a paper not indexed in Scopus has higher quality than Scopus-indexed papers, which can be measured by the novelty of the work or possibly by citations. The author will focus in future on measuring such non-indexed papers and their quality for performance measurement.

2) Some conference papers also have higher quality than journal papers. In this case, the precise measurement of conference papers and their content for intellectual measurement is a crucial task.

3) There are many high-quality non-English papers in Russian, Chinese, German, Hebrew, Hindi, Persian, Sanskrit, Bengali, Tamil and other languages of the world. These papers are not indexed in Scopus, and measuring their quality and performance is another issue. Measuring linguistic diversity and indexing it in Scopus, rather than maintaining a monopoly of English, is another issue for intellectual measurement.

4) Regional, gender, and other factors in measuring documents and citations are distinct issues that need to be addressed. A paper published by a scholar from MIT and one from a small college in India in the same journal cannot be considered equal in intellectual measurement; a new metric is required that accounts for regional, gender, or other factors when measuring the performance of an individual or institute.

5) The diversity of citations, awareness of citations and the cited work, content-based citations, influenced citations, and the measurement of the technicality of work are further challenges for researchers. The reason is that a review paper may receive more citations whereas a technical paper may receive fewer; whether the author of the review paper or the author of the technical paper should then be considered more intellectual is another open question. It requires a new metric to characterize citations based on acceptance, rejection and uncertain regions, as citation depends on the awareness of researchers.

6) Domain-wise intellectual measurement of any institute or author is another issue for researchers. One reason is that publishing papers in mathematics is harder than in chemistry or biology. At the same time, the number of journals, the number of working researchers, and the demand in some domains are lower when compared to other domains. In this case, the precise measurement of intellect based on document count, journal ranking or citations is a difficult task.

7) The impact of funded projects, authors and collaborations on measuring performance and intellect is another issue. The reason is that a conflict arises while identifying the founding author or institute of a given work.

8) Inconsistency in document publications and citations encourages brain drain, because researchers prefer the impact of the work over the impact of the journal. Measuring the quality of the work rather than the journal is therefore another issue for researchers.

9) The precise measurement of retracted papers and their citations is another concern for researchers while measuring intellect.

10) The impact factor measures only two years of document publications and citations rather than providing a generalization; it predicts the current trend rather than the quality of the work. The reason is that the citation of a paper depends on expert awareness rather than its quality: it is entirely up to authors whether they cite the founding paper of a given area or not. Hence citation goes beyond the ranking of a journal and its indexing. In this case, finding an alternative to the impact factor and other metrics, such as Altmetrics-based performance measurement, is another issue.

11) Unwanted citations and their measurement are other issues for the research community. Sometimes researchers cite irrelevant papers rather than the founding or base papers; they do not reference breakthrough results because those papers are old and do not help the current impact factor of the journal. Because of this, many researchers cite only papers from the last two years. The measurement and characterization of unwanted citations is another issue for intellectual measurement. One reason is that the understanding of founding or breakthrough papers comes only after hard work; it is based on human Turiyam rather than on the acceptance of a keyword, the rejection of a keyword, or uncertainty. Another issue arises when an author does not want his/her paper to be cited, wanting instead that people read the method and are inspired to apply it. In this case, intellectual measurement is another difficult task that needs to be addressed.

It is believed that the current paper will be helpful for research organizations, accreditation bodies, NAAC, NBA and other agencies to measure the consistency of research and its impact.

5. Conclusion

This paper focused on measuring randomness and uncertainty in document publications and citations using Scopus data sets. To achieve this goal, a method was proposed by hybridizing a time-based h-index with Shannon entropy. It was shown that the proposed method measures the consistency of two or more institutes in a given period, unaffected by a (low or high) h-index, as shown in Table 1. In the future, the author's work will focus on introducing other metrics for a deeper analysis of the performance of any author or institute using the Scopus data set.

Acknowledgements

The author thanks the conference team for the invitation as a keynote speaker at the 9th World Congress on Engineering and Technology (CET 2022) and for free registration.

Funding

The author sincerely acknowledges the research project on the same topic from Gandhi Institute of Technology and Management under Ref. No.: 2021/0050.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Hirsch, J.E. (2005) An Index to Quantify an Individual’s Scientific Research Output. Proceedings of the National Academy of Sciences, 102, 16569-16572. https://doi.org/10.1073/pnas.0507655102
[2] Costas, R. and Bordons, M. (2007) The h-Index: Advantages, Limitations and Its Relation with Other Bibliometric Indicators at the Micro Level. Journal of Informetrics, 1, 193-203. https://doi.org/10.1016/j.joi.2007.02.001
[3] Connor, J. (2011) Google Scholar Citations Open to All. Google Scholar Blog. https://scholar.googleblog.com/2011/11/google-scholar-citations-open-to-all.html
[4] Egghe, L. (2006) An Improvement of the h-Index: The g-Index. ISSI.
[5] Zhang, C.T. (2009) The e-Index, Complementing the h-Index for Excess Citations. PLoS ONE, 4, e5429. https://doi.org/10.1371/journal.pone.0005429
[6] Silagadze, Z.K. (2009) Citation Entropy and Research Impact Estimation. arXiv Preprint arXiv:0905.1039.
[7] Harzing, A.W. (2016) Reflections on the h-Index. Research in International Management. https://harzing.com/publications/white-papers/reflections-on-the-h-index
[8] Kosmulski, M. (2009) New Seniority-Independent Hirsch-Type Index. Journal of Informetrics, 3, 341-347. https://doi.org/10.1016/j.joi.2009.05.003
[9] Prathap, G. (2009) Is There a Place for a Mock h-Index? Scientometrics, 84, 153-165. https://doi.org/10.1007/s11192-009-0066-2
[10] Yong, A. (2014) Critique of Hirsch’s Citation Index: A Combinatorial Fermi Problem. Notices of the AMS, 61, 1040-1050. https://doi.org/10.1090/noti1164
[11] Gupta, B.M. (2010) Ranking and Performance of Indian Universities, Based on Publication and Citation Data. Indian Journal of Science and Technology, 3, 838-844. https://doi.org/10.17485/ijst/2010/v3i7.21
[12] Singh, P.K. and Singh, C.K. (2019) Bibliometric Study of Indian Institutes of Technology in Computer Science. Proceedings of Amity International Conference on Artificial Intelligence, Dubai, 384-393. https://doi.org/10.1109/AICAI.2019.8701422
[13] Kumar, C. and Singh, P.K. (2019) Scopus Based Comparative Analysis of Computer Science Research in India and USA. Proceedings of 10th International Conference on Computing, Communication and Networking Technology, Kanpur, 1-7.
[14] Antonoyiannakis, M. (2018) Impact Factors and the Central Limit Theorem: Why Citation Averages Are Scale Dependent. Journal of Informetrics, 12, 1072-1088. https://doi.org/10.1016/j.joi.2018.08.011
[15] Smarandache, F. (2021) Improved, Extended and Total Impact Factor of a Journal. 1-4. https://arxiv.org/abs/2105.14186
[16] Shannon, C.E. (1948) A Mathematical Theory of Communication. Bell System Technical Journal, 27, 379-423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
[17] Singh, P.K., Kumar, Ch.A. and Li, J.H. (2017) Concepts Reduction in Formal Concept Analysis with Fuzzy Setting Using Shannon Entropy. International Journal of Machine Learning and Cybernetics, 8, 179-189. https://doi.org/10.1007/s13042-014-0313-6
[18] Singh, P.K. and Gani, A. (2015) Fuzzy Concept Lattice Reduction Using Shannon Entropy and Huffman Coding. Journal of Applied Non-Classic Logic, 25, 101-119. https://doi.org/10.1080/11663081.2015.1039857
[19] Singh, P.K. (2020) Multi-Granular Based n-Valued Neutrosophic Contexts Analysis. Granular Computing, 5, 287-301. https://doi.org/10.1007/s41066-019-00160-y
[20] Hirsch, J.E. (2019) hα: An Index to Quantify an Individual’s Scientific Leadership. Scientometrics, 118, 673-686. https://doi.org/10.1007/s11192-018-2994-1
[21] Singh, M., Patidar, V., Kumar, S., Chakraborty, T., Mukherjee A. and Goyal, P. (2015) The Role of Citation Context in Predicting Long-Term Citation Profiles: An Experimental Study Based on a Massive Bibliographic Text Dataset. Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, 1271-1280. https://doi.org/10.1145/2806416.2806566
[22] Ioannidis, J.P.A., Baas, J., Klavans, R. and Boyack, K.W. (2019) A Standardized Citation Metrics Author Database Annotated for Scientific Field. PLoS Biol, 17, e3000384. https://doi.org/10.1371/journal.pbio.3000384
[23] Ioannidis, J.P.A., Boyack, K.W. and Baas, J. (2020) Updated Science-Wide Author Databases of Standardized Citation Indicators. PLoS Biol, 18, e3000918. https://doi.org/10.1371/journal.pbio.3000918
[24] Singh, P.K. (2022) Complex Plithogenic Set. International Journal of Neutrosophic Sciences, 18, 57-72. https://doi.org/10.54216/IJNS.180106
[25] Singh, P.K. (2021) Data with Turiyam Set for Fourth Dimension Quantum Information Processing. Journal of Neutrosophic and Fuzzy Systems, 1, 9-23. https://doi.org/10.54216/JNFS.010101
[26] Singh, P.K. (2022) t-Index: Entropy Based Random Document and Citation Analysis Using Average h-Index. Scientometrics, 127, 637-660. https://doi.org/10.1007/s11192-021-04222-4
[27] Amodio, P.L. and Scarselli, F. (2021) Implementation of the PaperRank and AuthorRank Indices in the Scopus Database. Journal of Informetrics, 15, 101206. https://doi.org/10.1016/j.joi.2021.101206
[28] Chen, M., Guo, Z., Dong, Y., Chiclana, F. and Herrera-Viedma, E. (2021) Citations Optimal Growth Path: A Tool to Analyze Sensitivity to Citations of h-Like Indexes. Journal of Informetrics, 15, 101215. https://doi.org/10.1016/j.joi.2021.101215
[29] Bi, H.H. (2022) Four Problems of the h-Index for Assessing the Research Productivity and Impact of Individual Authors. Scientometrics. https://doi.org/10.1007/s11192-022-04323-8
[30] Szilagyi, I.S., Schittek, G.A., Klivinyi, C., et al. (2022) Citation of Retracted Research: A Case-Controlled, Ten-Year Follow-Up Scientometric Analysis of Scott S. Reuben’s Malpractice. Scientometrics. https://doi.org/10.1007/s11192-022-04321-w
