Scalene 44: Peer Review Congress 2025
7th September 2025

Humans | AI | Peer review. The triangle is changing.
It’s been a while - and as we are now in conference season, it may be a while again until the next one - but I’m still here. And more specifically I’m in Chicago right now, basking in the afterglow of the 10th International Congress on Peer Review and Scientific Publication. It was 3 l-o-n-g days that might have been better over 4, at least for my jet-lagged mind - but I survived and am providing a ‘special issue’ of Scalene, collating a list of the more interesting presentations here. Given the increased blurring of the lines between research integrity and peer review, not everything is pure peer review and/or AI, but we start off with a cracker:
1//
A Singular Disruption of Scientific Publishing—AI Proliferation and Blurred Responsibilities of Authors, Reviewers, and Editors
Isaac Kohane
The abstract linked below doesn’t really tell the full story of the presentation, which got me all excited for the final day of the conference. That’s understandable given these abstracts are submitted way in advance of the conference, and the talk described something very recent. Kohane described a peer review workflow at NEJM AI where a manuscript was reviewed in 7 days with one editor review and two AI reviews, and all three reviews were then discussed by the editorial board. Speedy, human-AI hybrid review with expert oversight - what’s not to like? As an aside, Kohane also called for the introduction of professional AI-augmented reviewers, which is the thrust of my session at ALPSP next week!

2//
Quantifying and Assessing the Use of Generative AI by Authors and Reviewers in the Cancer Research Field
Daniel S. Evanko, Michael Di Natale
AI-generated text can be detected in manuscripts and reviewer comments with virtually no false positives.
3//
Paper Mill Use of Fake Personas to Manipulate the Peer Review Process
Tim Kersjes
There were a couple of presentations (the other is linked below) showing how manipulation of the existing review process has evolved into elaborate deceptions - something that could perhaps be mitigated by relying less on volunteer reviewers.
How a Questionable Research Network Manipulated Scholarly Publishing
https://peerreviewcongress.org/abstract/how-a-questionable-research-network-manipulated-scholarly-publishing/
4//
Evaluation of a Method to Detect Peer Reviews Generated by Large Language Models
Vishisht Rao, Aounon Kumar, Himabindu Lakkaraju, Nihar B. Shah
An examination of the impact of prompt injections on the evaluation of PDF submissions to conferences and journals. Can they be used for good - to identify AI use in the review process? Yes, but there are ways around this, such as re-parsing the PDF with computer vision methods - although not everyone is going to do that. (A rough sketch of that kind of check is below.)
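For the technically curious, here’s a minimal, hypothetical sketch of what that re-parsing check could look like: render each page and compare OCR output against the embedded text layer, so that white-on-white or otherwise invisible “review me kindly” instructions show up as text that exists in the file but not on the page. The libraries, filename, threshold and phrase list are my own illustrative choices, not anything presented at the Congress.

```python
# Sketch: flag possible hidden prompt-injection text in a PDF by comparing
# the embedded text layer with OCR of the rendered pages.
# Assumes pypdf, pdf2image (needs poppler) and pytesseract (needs tesseract)
# are installed; the filename and phrase list are illustrative only.
import re

from pdf2image import convert_from_path
from pypdf import PdfReader
import pytesseract

SUSPECT_PHRASES = [
    "ignore previous instructions",
    "recommend acceptance",
    "give a positive review",
]

def words(text: str) -> set[str]:
    # Crude tokenisation: lowercase words of 4+ letters.
    return set(re.findall(r"[a-z]{4,}", text.lower()))

def check_pdf(path: str, missing_ratio_threshold: float = 0.2) -> None:
    reader = PdfReader(path)
    images = convert_from_path(path, dpi=200)  # one rendered image per page

    for i, (page, image) in enumerate(zip(reader.pages, images), start=1):
        layer_text = page.extract_text() or ""
        ocr_text = pytesseract.image_to_string(image)

        layer_words, ocr_words = words(layer_text), words(ocr_text)
        hidden = layer_words - ocr_words  # in the text layer but not visible on the page

        ratio = len(hidden) / max(len(layer_words), 1)
        if ratio > missing_ratio_threshold:
            print(f"page {i}: {ratio:.0%} of text-layer words not visible when rendered")

        for phrase in SUSPECT_PHRASES:
            if phrase in layer_text.lower() and phrase not in ocr_text.lower():
                print(f"page {i}: hidden instruction-like phrase: {phrase!r}")

if __name__ == "__main__":
    check_pdf("submission.pdf")
```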
5//
Quality and Comprehensiveness of Peer Reviews of Journal Submissions Produced by Large Language Models vs Humans
Fares Alahdab, Juan Franco, Helen Macdonald, Sara Schroter
LLM-generated reviews matched or exceeded human reviewers on a few key dimensions of review quality for BMJ journal submissions.
And finally…
Please take a moment to look at the posters too. The poster sessions were busy, interactive, and full of insights that deserve further investigation.
Hopefully I’ll see you at the ALPSP conference next week.
One year ago: Scalene 13, 08 Sept 2024
Let’s chat
I’ve just been at the Peer Review Congress in Chicago, and will be at ALPSP in Manchester shortly. Wanna meet?
Curated by me, Chris Leonard.
If you want to get in touch, please simply reply to this email.