
Investigations & Insights

Forensic investigations spanning paper mills, citation cartels, institutional risk, and the human cost of misconduct — from published features to active case files.

◆ Published

Featured Investigations

📈 Methodology Deep-Dive

Measuring Institutional Risk: A Guide to the Research Integrity Risk Index (RI²)

Every university publishes research. But not every university checks where that research ends up. The RI² Index is a forensic framework designed to answer a question most institutions would rather avoid: how exposed are you to compromised science? By combining two key signals — how much of an institution’s output lands in delisted journals and how heavily its researchers rely on self-citation — the index assigns a risk tier that cuts through the noise of traditional rankings.

Read the Full Explainer →
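For readers who want the intuition in code: the two-signal composite described above can be sketched in a few lines. This is a minimal illustration only; the actual RI² weights, normalization, and tier cut-offs are not stated here, so the values below are invented assumptions.

```python
# Hypothetical sketch of a two-signal composite risk score.
# The real RI² methodology is more involved; the equal weights
# and the tier cut-offs below are illustrative assumptions.

def ri2_score(delisted_share: float, self_citation_rate: float,
              w_delisted: float = 0.5, w_selfcite: float = 0.5) -> float:
    """Combine the two signals (each in [0, 1]) into one composite score."""
    return w_delisted * delisted_share + w_selfcite * self_citation_rate

def risk_tier(score: float) -> str:
    """Map a composite score onto an illustrative risk tier."""
    if score >= 0.30:
        return "High"
    if score >= 0.15:
        return "Elevated"
    return "Low"

# Example: 12% of output in delisted journals, 25% self-citation rate.
score = ri2_score(delisted_share=0.12, self_citation_rate=0.25)
print(risk_tier(score))  # prints "Elevated"
```

The point of the composite is that neither signal alone is damning: a risk tier only emerges when both exposure to delisted venues and self-citation reliance are read together.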

🔎 Investigative Feature — Research Integrity

Custom-Baked Papers: Uncovering the Rise of Tailor-Made Research Fraud

Beyond paper mills — a forensic investigation into the “Shadow Faculty,” a distributed network of underemployed PhDs and freelance statisticians building bespoke fraudulent research to order. Featuring evidence from WeChat, WhatsApp, and Arabic-language networks, mapped against the RI² Research Integrity Risk Index.

Read the Full Investigation →

🔎 Forensic Feature — Citation Integrity

The Prestige Paradox: Inside the Circular Citation Economy and the Rise of “Reviewer Mills”

A journal’s place on an elite index used to mean something. But a forensic audit of one of the world’s largest Open Access publishers reveals a troubling pattern: inflated self-citation rates, compressed peer review timelines, and — most damning of all — a coordinated “Reviewer Mill” where referees were caught inserting forced citations into manuscripts they were supposed to evaluate objectively. When the gatekeepers become the manipulators, the entire currency of academic prestige is at stake.

Read the Full Investigation →

🔬 In Progress

Active Investigations

🔬 Investigation In Progress

The Blind Librarian: How Discovery Tools Are Accidentally Promoting Fraudulent Research

When a researcher types a topic into a bibliometric discovery tool, they trust that what surfaces is credible. But the algorithms behind platforms like VOSviewer, Connected Papers, and Litmaps have a critical flaw: they are completely integrity-blind. They read citation connections as signals of importance — even when those connections were manufactured by paper mills or inflated by citation cartels.

ResearchFace is investigating how these tools inadvertently give fraudulent papers prime real estate in literature maps, making them look central to a field when they should have been flagged or retracted. The result is a hidden amplification loop: compromised papers get discovered more, cited more, and embedded deeper into the scientific record — all without a single integrity warning reaching the researcher on the other end.

Status: Under analysis — visual comparative maps in development
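The integrity-blindness described above is easy to demonstrate with a toy citation graph: under a naive in-degree measure of "importance," a retracted paper stacked with manufactured citations outranks an honest one. All paper IDs, counts, and the retraction flag below are invented for illustration.

```python
# Toy citation graph: each edge points from the citing paper to the cited one.
# IDs, counts, and the retraction set are invented for illustration.
citations = [
    ("A", "MILL-1"), ("B", "MILL-1"), ("C", "MILL-1"), ("D", "MILL-1"),
    ("A", "HONEST-1"), ("B", "HONEST-1"),
]
retracted = {"MILL-1"}  # papers known to be retracted

# Naive in-degree "importance", as an integrity-blind discovery tool sees it.
in_degree = {}
for _citing, cited in citations:
    in_degree[cited] = in_degree.get(cited, 0) + 1

ranked = sorted(in_degree, key=in_degree.get, reverse=True)
print(ranked[0])               # prints "MILL-1": it tops the literature map
print(ranked[0] in retracted)  # prints True: and it is retracted
```

Nothing in the ranking step ever consults the retraction set, which is precisely the flaw: manufactured edges and organic edges are indistinguishable to the algorithm.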

🔬 Investigation In Progress

The Career That Doesn’t Add Up: Detecting Paper Mill Authors by the Shape of Their CV

Every academic career has a rhythm. Junior researchers start as co-authors, slowly earn lead positions, and build expertise over years. But a growing number of profiles are breaking that pattern entirely — appearing as senior authors from day one, publishing at rates that defy human research capacity, and accumulating credentials that look impressive on paper but collapse under scrutiny.

ResearchFace is developing a new detection approach that models what a normal academic career looks like across disciplines, and then flags the ones that don’t fit. By mapping authorship roles against career age, we can identify the statistical fingerprints of “authorship-for-sale” schemes and paper mill affiliations — not by reading every paper, but by reading the trajectory itself.

Status: Methodology validation in progress — benchmarking across 15+ disciplines
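The trajectory idea can be sketched as a pair of simple checks on a career profile. The thresholds, field names, and example numbers below are assumptions for illustration, not the validated methodology being benchmarked.

```python
# Illustrative career-shape checks; the thresholds and fields are
# assumptions, not the ResearchFace methodology under validation.
from dataclasses import dataclass

@dataclass
class Profile:
    career_years: int          # years since first publication
    papers: int                # total publications
    senior_author_papers: int  # last- or corresponding-author slots

def flags(p: Profile) -> list[str]:
    """Return human-readable anomaly flags for a career profile."""
    out = []
    rate = p.papers / max(p.career_years, 1)
    if rate > 20:  # assumed ceiling on plausible annual output
        out.append("implausible publication rate")
    senior_share = p.senior_author_papers / max(p.papers, 1)
    if p.career_years <= 3 and senior_share > 0.5:
        out.append("senior authorship too early")
    return out

# A two-year "career" with 58 papers, 40 of them as senior author.
print(flags(Profile(career_years=2, papers=58, senior_author_papers=40)))
```

A normal junior profile triggers neither check; the paper-mill signature is tripping both at once, which is why the method reads the trajectory rather than any individual paper.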

🔬 Investigation In Progress

The AI-Paper Mill Hybrid: The Birth of “Plausible Fraud”

The era of “copy-paste” plagiarism has evolved into a dangerous new phase: AI-generated data that looks, feels, and calculates like the real thing. As LLMs become capable of generating statistically plausible datasets, the scientific record faces a “deepfake” crisis that standard peer review is not equipped to handle. This investigation uses behavioral statistics and LLM pattern recognition to identify the “excessive smoothness,” the telltale absence of noise, that characterizes synthetic social science data.

Status: Data verification phase
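One crude way to probe for excessive smoothness is to compare the local noise in a series against its overall spread: genuine measurements tend to jitter, while over-regular synthetic series do not. The ratio, the threshold, and both sample series below are invented for illustration and are far simpler than any production forensic test.

```python
# Crude "smoothness" probe: the ratio of local (first-difference) spread
# to overall spread. The sample series are invented for illustration.
import statistics

def smoothness_ratio(values):
    """Std. dev. of first differences divided by the series' own std. dev.
    Values near zero suggest an implausibly smooth, trend-like series."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    return statistics.pstdev(diffs) / statistics.pstdev(values)

too_smooth = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5]   # perfectly regular steps
noisy      = [10.0, 13.2, 9.1, 12.7, 8.8, 13.9]     # jittery, survey-like

print(smoothness_ratio(too_smooth))       # prints 0.0: no local noise at all
print(smoothness_ratio(noisy) > 0.5)      # prints True
```

A single statistic like this proves nothing on its own, but it shows the flavor of the approach: fabricated data is often too well-behaved to be real.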

🔬 Investigation In Progress

Gaming the Rank: How Institutions ‘Buy’ Their Status

What happens when a university’s promotion committee rewards quantity over quality? Using the RI² framework, this investigation profiles institutions whose publication incentives may inadvertently create fertile ground for paper mill engagement and authorship-for-sale schemes. The focus is on the gap between institutional policy and actual research output patterns.

Status: Institutional data collection

🔬 Investigation In Progress

Citation Cartel Cartography: Mapping the Invisible Networks

Citation cartels operate in the shadows — groups of journals and authors systematically inflating each other’s metrics through reciprocal citation agreements. This investigation applies network analysis to identify unusually dense citation clusters that deviate from organic knowledge-sharing patterns, revealing the hidden architecture of metric manipulation.

Status: Network mapping phase
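The core signal being mapped is reciprocity: two journals each sending an outsized share of their outgoing citations to the other. A minimal sketch of that check, with invented journal names, counts, and a 30% share threshold, looks like this.

```python
# Toy reciprocity check between journal pairs; the journal names,
# counts, and 30% share threshold are invented for illustration.
from collections import Counter

# (citing_journal, cited_journal) -> citation count
flows = Counter({
    ("J-A", "J-B"): 120, ("J-B", "J-A"): 110,  # heavy two-way traffic
    ("J-A", "J-C"): 15,  ("J-C", "J-A"): 3,    # ordinary one-way trickle
})

def suspicious_pairs(flows, share=0.30):
    """Flag pairs where EACH journal sends >= `share` of its outgoing
    citations to the other: a crude signature of reciprocal agreements."""
    out_total = Counter()
    for (src, _dst), n in flows.items():
        out_total[src] += n
    hits = set()
    for (a, b), n_ab in flows.items():
        n_ba = flows.get((b, a), 0)
        if (n_ab / out_total[a] >= share and
                n_ba / out_total[b] >= share):
            hits.add(frozenset((a, b)))
    return hits

print(suspicious_pairs(flows))  # prints {frozenset({'J-A', 'J-B'})}
```

Organic citation flows are rarely symmetric at this intensity, which is why dense reciprocal clusters stand out against the background of normal knowledge-sharing.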

🔬 Investigation In Progress

Ghost Authors and Phantom Labs: The Identity Crisis in Research

They appear on dozens of papers, claim affiliations with prestigious institutions, and have publication records that suggest decades of productive work. But some of these authors may not exist at all. This investigation examines the growing phenomenon of fabricated author identities and institutional affiliations used to lend credibility to paper mill products.

Status: Identity audit phase

🔬 Investigation In Progress

The Retraction Debt: Why Corrections Take Years and Who Pays the Price

The average time between a misconduct allegation and a formal retraction is measured in years, not months. During this gap, flawed research continues to be cited, taught, and built upon. This investigation quantifies the “retraction debt” across disciplines and examines the structural bottlenecks — from journal reluctance to legal threats — that keep bad science on the books.

Status: Data analysis phase

🔬 Investigation In Progress

The Open Access Vulnerability: When the Business Model Conflicts with Rigor

The Article Processing Charge (APC) model creates a financial incentive to accept more papers. This investigation examines the correlation between APC-driven revenue growth and declining editorial standards across high-volume Open Access publishers, asking whether the “publish or perish” economy has created a parallel “pay and publish” pipeline.

Status: Economic modeling phase

🔬 Investigation In Progress

The Whistleblower’s Dilemma: The Human Cost of Speaking Up

Behind every misconduct investigation is a human being who decided to speak up — often at enormous personal cost. This feature profiles the psychological, professional, and legal realities faced by research integrity whistleblowers, examining why the system punishes those who protect it.

Status: Interview & research phase

🔬 Investigation In Progress

The Global Heat Map: Regional Vulnerabilities in Research Integrity

Research misconduct is not evenly distributed. Certain regions face unique pressures — from extreme publish-or-perish incentives to limited institutional oversight — that create hotspots of vulnerability. This investigation applies the RI² framework at a regional level to identify systemic patterns and propose targeted interventions.

Status: Regional data compilation

🔬 Investigation In Progress

The Peer Review Breakdown: From Gatekeeper to Rubber Stamp

Peer review is supposed to be the gold standard of scientific quality control. But what happens when review turnaround times drop below what is humanly possible? This investigation audits review timelines across high-volume publishers to identify journals where the peer review process may have collapsed into a formality.

Status: Timeline audit phase
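The audit logic reduces to comparing each journal's typical submission-to-acceptance gap against a plausibility floor. The journal names, day counts, and 14-day floor below are invented assumptions, not audit findings.

```python
# Toy timeline audit: flag journals whose median submission-to-acceptance
# gap falls below a floor. Names, day counts, and the floor are invented.
import statistics

review_days = {
    "Journal X": [4, 6, 5, 3, 7, 5],        # days, submission to acceptance
    "Journal Y": [62, 88, 75, 101, 54],
}

FLOOR_DAYS = 14  # assumed minimum for a meaningful round of review

flagged = [journal for journal, days in review_days.items()
           if statistics.median(days) < FLOOR_DAYS]
print(flagged)  # prints ['Journal X']
```

Using the median rather than the mean matters here: a handful of legitimately fast desk decisions should not flag a journal, but a *typical* turnaround of days rather than weeks is hard to reconcile with genuine review.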

🔬 Investigation In Progress

The Lifecycle of a Lie: Tracing the Path of Fabricated Data

A single fabricated data point in a 2015 paper can infect the scientific record for over a decade, spawning hundreds of citations and derivative studies. This “biography of a fraud” uses scientometric tracking to follow the lineage of one high-profile retracted paper, showing how its “zombie citations” continue to mislead the scientific community years after the retraction was finalized.

Status: Case study in progress
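At its simplest, counting zombie citations means comparing the date of each citing work against the retraction date. The dates below are invented for illustration; they are not the case study's data.

```python
# Toy "zombie citation" count: citations dated after the retraction.
# All dates are invented for illustration, not case-study data.
from datetime import date

retraction_date = date(2018, 6, 1)   # when the paper was formally retracted
citing_dates = [date(2016, 3, 1), date(2017, 9, 12),
                date(2019, 1, 5), date(2021, 11, 30), date(2023, 4, 2)]

zombie = [d for d in citing_dates if d > retraction_date]
print(len(zombie))  # prints 3: three citations arrived post-retraction
```

The tracking work layers on top of this trivial comparison: distinguishing citations that acknowledge the retraction from those that treat the dead paper as live evidence.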