To find out more about the podcast, go to the episode "Fraud, AI slop and huge profits: is science publishing broken?".
Below is a short summary and detailed review of this podcast written by FutureFactual:
The Crisis in Scientific Publishing: Open Access, Special Issues and a Path to Trustworthy Science
Science Weekly investigates the crisis in scientific publishing, where a surge in papers, publish-or-perish incentives, and shifting business models collide with the demands of rigorous quality control. The episode explains why the system underpins scientific progress yet leaves researchers stretched thin and the literature crowded with mediocre or fraudulent work. It traces the historical drivers from Maxwell's era of journal expansion to the modern open-access economy, where per-article charges and subscription profits shape publisher behavior. It also highlights new threats such as paper mills, citation cartels and AI-generated fakes, and explains why governments and funders are pressing for reform. Finally, it sketches a utopian approach in which attention and impact drive peer review, preserving trusted journals for high-signal science and rebuilding trust in evidence.
Introduction: publishing as the backbone of science
Publishing is the final step of a scientific project, a formal record of methods, data and interpretation that lets others replicate and build on findings. In this episode, The Guardian's Madeleine Finlay and Ian Sample explain how peer review and editorial oversight have long safeguarded the scientific record, and how the system is now under unprecedented strain as the pace and volume of publishing explode.
"the scientific publishing system and the academic job market are a bit broken." - Madeleine Finlay
The program notes there are about 50,000 peer-reviewed journals worldwide and roughly 8–9 million researchers. Research output, meanwhile, has surged from around 2 million articles per year in 2016 to about 3 million in 2022, outpacing the growth of the scientific workforce and making genuinely new findings harder to spot in a crowded literature. This mismatch creates information overload, makes peer review more demanding, and shifts much of the burden onto researchers who already juggle multiple roles.
Volume, value, and the hidden costs
As the transcript details, more papers mean more editing, reviewing and quality control, much of it done by scientists themselves in unpaid volunteer roles. One 2020 estimate put the economic value of the peer-review time scientists donate at over a billion dollars. The pressure to produce has also encouraged shorter-term projects, raising the risk that overall quality is diluted as researchers chase quantity to advance their careers in a "publish or perish" job market.
The episode also examines how the journal ecosystem has evolved from the pre-digital era into a modern, profit-driven industry. Robert Maxwell's strategy of launching large numbers of new journals created a subscription-based market in which libraries paid for access; the internet later removed many physical constraints, enabling effectively unlimited publication. Open access emerged as a response to affordability concerns, but the new model ties payment to the act of publishing rather than to readership, potentially incentivizing publishers to publish more articles to maximize revenue, not necessarily to advance science.
Origins and incentives: what drives researchers to publish so much
The discussion turns to why scientists publish so much. Job markets and funding panels often use publication counts and journal prestige as proxies for quality, reinforcing a cycle of relentless output. "Special issues", collections of papers organized around a theme, were originally a way to focus attention on a developing area, but some publishers have flooded the system with thousands of such issues, diluting their impact and complicating evaluation.
"Special issues, as the name implies, should be a once in a while thing." - Ian Sample
Open-access economics further complicates incentives: while opening articles to readers worldwide is commendable, the cost per article to publish openly can range from a few thousand to over ten thousand pounds, a burden that can influence where and how research is published. The combination of subscription profits, open-access charges, and the tenure pressure to produce is a potent mix that reshapes scholarly communication in ways not always aligned with best scientific practice.
Fraud, manipulation, and the new threats
The program flags growing threats to the integrity of the scientific record. Paper mills sell fake or questionable studies and exploit gaps in publishers' checks, while citation cartels try to inflate influence by orchestrating artificial citations. AI tools, when misused, can generate fabricated papers or help push non-genuine work into the literature. Hijacked journals and resurrected defunct titles illustrate how the system can be gamed, undermining trust in published science.
Ian Sample emphasizes that this is not a fringe problem but a systemic one: "publication and quality control are being distorted by a lot of embedded pressures and malpractices" (paraphrased from the discussion of how the system is being turned into a machine for misuse).
A utopian fix: Mark Hanson’s attention-driven peer review
The podcast presents a provocative idea from Mark Hanson for reform. Instead of reviewing every paper immediately, an attention metric would flag preprints that are gaining significant discussion and usage as candidates for peer review. A preprint might be published openly, but its validity would be trusted more once it accumulates citations and practical impact, triggering formal review. This approach would preserve the value of traditional journals for high-signal work while reducing the burden of evaluating everything that appears online.
"We could use an attention metric that determines when something should be nominated for peer review" - Mark Hanson
What needs to change: roles for funders, publishers, and researchers
The conversation underscores that no single fix will suffice. Publishers stress they are adopting AI to detect fraud and to improve screening, but the core issue is how researchers are evaluated and funded. Funders could refuse support for certain special issues or insist on more stringent quality controls, while hiring and promotion processes could shift focus toward the intrinsic quality of the science rather than publication counts. The episode argues that a multi-stakeholder effort is required, with researchers, funders, publishers, and policymakers aligning incentives to strengthen rather than degrade the reliability of published science.
In closing, the hosts remind listeners that scientific journals are the backbone of scientific progress and public trust. The stakes extend beyond academia: reliable science informs policy, health, and the public's understanding of new discoveries. The path forward depends on addressing the root causes, namely career incentives, funding structures, and the governance of publishing, so that trust and progress can be restored in the system that underpins modern science.
"we need to get to the point where researchers are getting jobs and are getting promoted on the quality of the science they do, not on the number of publications they've racked up." - Ian Sample