Curbing the Ramifications of Authorship Abuse in Science
By: Md Somir Khan, Mehmet Engin Tozal
Potential Business Impact:
Fairly credits the people who actually contributed to scientific work.
Research performance is often measured using bibliometric indicators, such as publication count, total citations, and the $h$-index. These metrics influence career advancement, salary adjustments, administrative opportunities, funding prospects, and professional recognition. However, the reliance on these metrics has also made them targets for manipulation, misuse, and abuse. One primary ethical concern is authorship abuse, which includes paid, ornamental, exploitative, cartel, and colonial authorship. These practices are prevalent because they artificially inflate multiple bibliometric indicators at once. Our study confirms a significant rise in the mean and median number of authors per publication across multiple disciplines over the last 34 years. While it is important to identify cases of authorship abuse, a thorough investigation of every paper proves impractical. In this study, we propose a credit allocation scheme based on the reciprocals of the Fibonacci numbers, designed to adjust credit for individual contributions while systematically reducing credit for potential authorship abuse. The proposed scheme aligns with rigorous authorship guidelines from scientific associations, which mandate significant contributions across most phases of a study, while accommodating the more lenient guidelines of scientific publishers, which recognize authorship for minimal contributions. We recalibrate traditional bibliometric indicators to emphasize author contribution rather than mere participation in publications. Additionally, we propose a new indicator, the $T^{\prime}$-index, to assess researchers' leading and contributing roles in their publications. Our proposed credit allocation scheme mitigates the effects of authorship abuse and promotes a more ethical scientific ecosystem.
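The abstract names the scheme but not its exact parameterization, so the Python sketch below illustrates one plausible reading. The helper names `fib_reciprocals`, `allocate_credit`, and `weighted_h_index` are hypothetical, and the Fibonacci indexing (starting 1, 2, 3, 5, ... so credit strictly decreases down the byline), the normalization of credits to shares summing to 1, and the $h$-index recalibration rule are all assumptions, not the paper's confirmed definitions.

```python
def fib_reciprocals(n):
    """First n reciprocal Fibonacci weights: 1, 1/2, 1/3, 1/5, 1/8, ...

    Assumption: the sequence starts 1, 2, 3, 5 (the duplicate leading 1
    skipped) so that credit strictly decreases with author position.
    """
    a, b = 1, 2
    weights = []
    for _ in range(n):
        weights.append(1.0 / a)
        a, b = b, a + b
    return weights


def allocate_credit(n_authors):
    """Per-author credit shares, normalized to sum to 1 (an assumption)."""
    w = fib_reciprocals(n_authors)
    total = sum(w)
    return [x / total for x in w]


def weighted_h_index(records):
    """h-index over contribution-weighted citations.

    records: iterable of (credit_share, citation_count) pairs, one per
    paper. Returns the largest h such that at least h papers each have
    credit * citations >= h. This recalibration rule is illustrative,
    not necessarily the paper's exact definition.
    """
    scores = sorted((c * cites for c, cites in records), reverse=True)
    h = 0
    for rank, score in enumerate(scores, start=1):
        if score >= rank:
            h = rank
    return h


# Example: a five-author paper. The first author receives ~46% of the
# credit, and because reciprocal Fibonacci weights shrink rapidly,
# authors appended late in the byline gain little while diluting the
# earlier authors' shares only slightly, which is the abuse-dampening
# property described in the abstract.
print([round(s, 3) for s in allocate_credit(5)])
# [0.463, 0.232, 0.154, 0.093, 0.058]
```

One appealing property of this reading: the reciprocal Fibonacci series converges, so even without normalization the total credit a paper can distribute is bounded, and padding a byline with honorary authors cannot manufacture additional credit.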
Similar Papers
Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings
Digital Libraries
Finds research metrics gamed to inflate university rankings.
Shifting norms in scholarly publications: trends in readability, objectivity, authorship, and AI use
Digital Libraries
Researchers write more, cite more, and use AI.
Research impact evaluation based on effective authorship contribution sensitivity: h-leadership index
Digital Libraries
Fairly evaluates scientists' contributions in large teams.