Scalene 51: 100% / open review / negativity problems

Humans | AI | Peer review. The triangle is changing.
Welcome to 2026 - a year that I think will radically change how we evaluate research and provide it with trust markers. Not peer review, but something different. Beyond that, maybe we finally start thinking about communicating quanta of research in formats other than narrative articles. Part of that will inevitably call into question the focus on journals (which our first story below also does), and when I look at how AI, peer review, and publishing are changing, there seems to be a real opportunity for scholars, librarians, and publishers to redefine how we share research. Who will define the future of research communication?
5th January 2026
1//
100% AI-Reviewed Preprints are the Future of Open Research
Trends in Scholarly Publishing - Dec 2025 - 15 min read
A drum I have been banging for some time now is that full AI-reviewer capabilities are going to fundamentally change the publishing landscape. When an author has access to the same tools a reviewer or publisher has, they would be well-served to evaluate their own manuscript and iterate on (or revise) it based on the automated assessment. They have the ability to make their manuscript as good as it can be - in theory.
But what then? Are they going to submit it to a journal and pay $x000 APC to see it assigned a DOI and be available in perpetuity? Or are they going to upload the manuscript (and possibly reviews and draft history) to a preprint server, where the same functionality is free, and they can be reasonably sure a peer review process would add marginal value?
I thought I was the only one, but Haseeb Irfanullah proves otherwise. This is a great opinion piece, and I hope you read it to appreciate that, whilst this may not be an option in January 2026, it may well be very soon.
2//
Most peer reviewers now use AI, and publishing policy must keep pace
Frontiers - 15 Dec 2025 - 3 min read
One bugbear of mine is companies releasing excellent & fascinating reports just before major public holidays. Frontiers, take note! As a result, I only got round to reading this a few days ago, and I rather wish I'd spent Christmas Day on it, because it's full of insights that underscore some 'vibes' with real data.
The white paper describes survey results from 1,645 researchers worldwide and their attitudes to 'the quiet revolution' of AI tools in peer review. Over half of researchers use AI tools (loosely defined) in peer review, but that figure is markedly higher among ECRs and researchers in rapidly growing research regions such as China and Africa.
There is a scale of accepted use emerging. Many researchers use AI to draft reports or summarise papers. Some think it should be used for rigor, reproducibility, and deeper insights. What is common across most reviewers is an appetite for guidance and best practice.
This links to a blog post of summaries, but the full text is also available from the same link: https://www.frontiersin.org/news/2025/12/15/most-peer-reviewers-now-use-ai-and-publishing-policy-must-keep-pace
3//
Published peer review reports have higher informative content than unpublished reports
J. Informatics - March 2026 - 14 min read
My interest in peer review and AI stems from a very real drive to help authors get published more quickly. We don’t need to wait 5 months for a minor revision decision any more. But also, what does an optimal AI evaluation look like? Do we mimic existing review templates, or is there a better way to do this in 2026?
Thankfully, a message from the future (March 2026 - forerunners in action) suggests slight modifications to ensure we get maximum value from traditional peer review reports. Namely that:
- Published peer review reports are more informative than unpublished reports.
- Reviewers outside the EU, North America, and Australia write less readable reports.
- All-female and mixed reviewer teams produce more consistent information.
- The results suggest open peer review may lead to more constructive reports.
I’m interested in extrapolating what this means for non-human evaluation of research, and propose that open AI review is probably key to gaining acceptance from the research community.
4//
The negativity crisis of AI ethics
LinkedIn - 20 Dec 2025 - 5 min read
Not strictly peer review related, but something I have also noticed in discussing AI amongst researchers and publishers: AI ethics has a structural negativity problem.
Three institutional factors—subject matter, norms against praising technology, and publication incentives—push scholars toward criticism. This leads to one-sided and exaggerated portrayals of AI risks.
The post is good, but also read the comments:
https://www.linkedin.com/pulse/negativity-crisis-ai-ethics-donald-farmer-hv2mc/
5//
AI is inventing academic papers that don’t exist - and they’re being cited in real journals
Rolling Stone - 17 Dec 2025 - 4 min read
It always amuses me to see our little corner of the world highlighted in the popular press. But this piece in Rolling Stone is not amusing, and must give people who are not familiar with our world a very bad impression.
Since LLMs have become commonplace tools, academics have warned that they threaten to undermine our grasp on data by flooding the zone with fraudulent content. The psychologist and cognitive scientist Iris van Rooij has argued that the emergence of AI “slop” across scholarly resources portends nothing less than “the destruction of knowledge.”
But AI can’t take all the blame. “Bad research isn’t new,” Moser points out. “LLMs have amplified the problem dramatically, but there was already tremendous pressure to publish and produce, and there were many bad papers using questionable or fake data, because higher education has been organized around the production of knowledge-shaped objects, measured in citations, conferences, and grants.”
It’s surely time for someone to invent a free reference list checker that resolves references to papers and reformats them to boot?
And finally…
On the subject of AI slop, Holden Thorp recently published an editorial on this in Science, which seemed an uncontroversial stance, but it has not been universally well-received.
The LSE blog has highlighted their best posts on Research in the Age of AI from 2025. I had never considered the issue of retracted articles in LLM training data before reading these.
This should really be a main story in this week’s email; apologies to Maureen McGargill et al. for relegating it to the link round-up section, but it’s definitely worth a click: Discovery Stack Pilot: Feasibility and Outcomes of a Scientist-Designed Peer Review Model Separating Quality and Impact
This is very long, and very meta, but if you’re reading this far down, it’s very probably your kind of thing. Shreyas Doshi on The Problem with Peer-reviewed Studies on Human Behavior (and a Wiser Solution)
Where Is All the A.I.-Driven Scientific Progress? asks Rachel Cohn in the New York Times.
Let’s chat
No specific conferences planned anytime soon - but always happy to chat, email, meet - just reply to this email.
Curated by me, Chris Leonard.
If you want to get in touch, please simply reply to this email.