Inside the Circular Citation Economy and the Rise of “Reviewer Mills”
How self-citation cartels and coerced peer review are hijacking the metrics that decide careers, funding, and institutional prestige.
🔎 Forensic Feature — Citation Integrity
In the high-stakes world of academic publishing, a “Seal of Approval” from Clarivate’s Journal Citation Reports is the ultimate currency. For a researcher, it is a ticket to tenure; for a publisher, it is a licence to scale. But the metrics we trust are being hijacked by sophisticated internal networks.
As the boundaries of “prestige” are pushed by high-volume Open Access models, a landmark investigation has forced the industry to confront an uncomfortable reality. Originally sparked by the forensic work of Professor M. Ángeles Oviedo-García and subsequently updated to reflect evolving editorial corrections, the case of the Multidisciplinary Digital Publishing Institute (MDPI) serves as a masterclass in how modern publishing “whitelists” can be gamed.
The journey of this investigation itself mirrors the complexity of research integrity. The original 2021 study by Oviedo-García, which analysed 53 MDPI journals, underwent a significant formal correction in 2023: the original record was retracted and replaced with a revised version that refines the conclusions drawn from the data, ensuring the critique remained grounded in verifiable, cited sources rather than broad generalisations.
This “corrected” lens reveals a persistent structural vulnerability: MDPI journals sit on elite whitelists, yet they exhibit citation patterns that deviate sharply from industry leaders. This creates a “Prestige Paradox” where a journal’s formal rank may no longer reflect its actual adherence to traditional editorial rigour.
The core of the forensic audit focuses on the S-Rate (Self-Citation Risk). The data shows that MDPI’s self-citation rates are not only significantly higher than the top journals in their categories but are also heavily fuelled by an internal “Citation Cartel.”
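The article does not give a formula for the S-Rate, so the sketch below is only one plausible reading: a journal-level self-citation rate computed as the share of a journal's incoming citations that originate from the journal itself. The function name and all data are illustrative, not part of any published audit.

```python
from collections import Counter

def self_citation_rate(citations):
    """Compute, per journal, the fraction of incoming citations
    that come from the same journal.

    `citations` is a list of (citing_journal, cited_journal) pairs.
    Returns a dict mapping each cited journal to its self-citation rate.
    """
    # Total incoming citations per journal.
    by_journal = Counter(cited for _, cited in citations)
    # Incoming citations where the citing journal is the cited journal.
    self_cites = Counter(
        cited for citing, cited in citations if citing == cited
    )
    return {
        journal: self_cites[journal] / total
        for journal, total in by_journal.items()
    }

# Toy data: journal A receives 4 citations, 3 from itself;
# journal B receives 2 citations, 1 from itself.
rates = self_citation_rate([
    ("A", "A"), ("A", "A"), ("A", "A"), ("B", "A"),
    ("B", "B"), ("C", "B"),
])
# rates["A"] == 0.75, rates["B"] == 0.5
```

A real audit would also need to aggregate across a publisher's whole portfolio (citations between sibling journals), which is the "Citation Cartel" pattern the article describes; the same Counter approach extends by keying on publisher instead of journal.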
However, the “smoking gun” moved from theory to fact in early 2024. Prompted by a volunteer-led investigation by Predatory Reports, MDPI confirmed that a “Reviewer Mill” had been operating within its own system. This was not just a matter of authors citing themselves; it was a coordinated effort by peer reviewers to force citations into the record.
This admission by MDPI confirms the “Behavioural Lens” we use at ResearchFace: when a system prioritises speed and volume, it creates an environment where reviewers can treat peer review as a personal “citation mint.”
One of the primary drivers of this vulnerability is speed. MDPI’s core value proposition to researchers is an editorial decision often rendered in under 30 days. While this looks like efficiency, the discovery of “Reviewer Mills” suggests it may actually be a “Speed Trap.”
When peer review is compressed into days, the “gatekeeper” function breaks down. In the case flagged by Predatory Reports, a “dubious review report” published alongside a clinical article became the thread that unravelled the entire network. It revealed reviewers who were more interested in boosting their own H-indices than in verifying the science.
In a bid to maintain its standing, MDPI has initiated several corrective measures in response to the investigation.
While MDPI’s willingness to investigate its own flaws is a step toward transparency, the incident underscores the core philosophy of the RI² Index: we cannot rely on a publisher’s “Internal Audit” alone. We need independent, institutional-level metrics that flag these anomalies before they infect the global record.
The “MDPI Case” is no longer just about one publisher; it is about the S-Rate (Self-Citation Inflation) and the D-Rate (Delisted Journal Exposure). As major databases like Web of Science begin to delist high-impact MDPI titles (such as IJERPH), universities are realising the high cost of ignoring integrity risks.
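The D-Rate is likewise not defined anywhere in the article; a minimal sketch, assuming it measures an institution's exposure to journals later removed from an index such as Web of Science, might look like the following. The function, the paper IDs, and the journal mix are hypothetical (only IJERPH's delisting is stated in the text).

```python
def delisted_exposure(publications, delisted):
    """Share of an institution's publications that appeared in
    journals subsequently delisted from a citation index.

    `publications` maps paper IDs to journal titles;
    `delisted` is the set of delisted journal titles.
    """
    if not publications:
        return 0.0  # No portfolio, no exposure.
    hits = sum(1 for journal in publications.values() if journal in delisted)
    return hits / len(publications)

# Hypothetical portfolio: 1 of 4 papers sits in a delisted title.
rate = delisted_exposure(
    {"p1": "IJERPH", "p2": "Nature", "p3": "PLOS ONE", "p4": "Cell"},
    delisted={"IJERPH"},
)
# rate == 0.25
```

The design choice here is to score exposure retrospectively: a paper counts against the institution even if the journal was indexed at submission time, which is exactly the "high cost of ignoring integrity risks" the article describes.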
The Oviedo-García investigation and the subsequent “Reviewer Mill” scandal prove that the academic community can no longer outsource its ethics to a static list of indexed journals.
The “Reviewer Mill” incident is a wake-up call. It proves that the “Human Lens” — the psychological and professional incentives of reviewers and editors — is just as important as the data itself. At ResearchFace, we will continue to use the RI² Index to monitor these structural vulnerabilities, ensuring that “Prestige” is something earned through rigour, not manufactured through a mill.
Editor’s Note: This article has been updated to reflect the formal correction of the Oviedo-García (2021) study and the March 2024 MDPI “Reviewer Mill” investigation results. ResearchFace remains committed to the most accurate and corrected version of the scientific record.
📚 Sources
[1] Oviedo-García, M. Á. (2021). Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI). Research Evaluation, 30(3), 405–419. https://doi.org/10.1093/reseval/rvab020
[2] Oviedo-García, M. Á. (2023). Correction to: Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI). Research Evaluation, 32(2), 543. https://doi.org/10.1093/reseval/rvad014
[3] Slone, C. (2024, March 6). Addressing reviewer misconduct in scholarly publishing: MDPI’s proactive steps. Editors’ Café. https://editorscafe.org/details.php?id=1