Scalene 49: REACH / ignored / DOCUEVAL

Humans | AI | Peer review. The triangle is changing.
21st November 2025
Not much in the way of interesting images this week, so we’re looking at a very text-based update this time. Winter is biting the north of England where I find myself at the moment, and it is reminding me we are not far away from the end of the year. I will be happy to meet up with anyone attending the STM Integrity days in London on 9th or 10th December. Let me know by responding to this email. OK, onwards!
1//
REACH: Rethinking peer review in the AI era.
REACH - Oct-Dec 2025
A publication which had sailed under my radar suddenly appeared on it recently. REACH is a quarterly magazine from the Science Integrity Alliance and this appears to be only its second issue. However, from a Scalene point of view, it’s a very welcome addition to the discussion on how AI is going to save/kill peer review. I very much enjoyed the reflective piece by Daniel Ucko - but it’s all worth a look (just don’t try and read it on your phone - save it for a desktop).
2//
Why Peer Review Invitations Get Ignored: Insights from Both Sides of the Inbox
Prophy - 13 Nov 2025 - 7 min read
It’s worth looking into one of the reasons why AI is going to be needed to help review manuscripts: a decreasing proportion of submitted manuscripts can expect to get two human reviewers to agree to review them. Thousands of review invitations are ignored because they feel generic and irrelevant. Reviewers decide quickly whether to engage with a review request based on journal, topic fit, and (crucially) invitation quality. Better personalization shows why a reviewer is a good match and boosts acceptance - and this can be achieved by using semantic tools to scale meaningful personalization.
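As a rough illustration of what "semantic tools" could mean here - this is my own sketch, not Prophy's actual pipeline - you could rank a reviewer's recent papers by similarity to a submitted abstract and quote the closest matches in the invitation itself:

```python
# Sketch: rank a reviewer's recent papers by similarity to a submitted
# abstract, so an invitation can cite the closest matches as evidence of fit.
# Uses TF-IDF + cosine similarity (scikit-learn); an embedding model would
# slot in the same way. Illustrative only - not Prophy's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submitted_abstract = "Large language models for automated citation verification ..."
reviewer_papers = [
    "Detecting fabricated references in student essays",
    "A survey of hallucination in large language models",
    "Crystal structure of a novel zinc-finger protein",
]

# Fit a single vocabulary over the abstract plus the reviewer's papers.
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([submitted_abstract] + reviewer_papers)

# Similarity of each reviewer paper to the submitted abstract.
scores = cosine_similarity(vectors[0:1], vectors[1:]).flatten()
best_matches = sorted(zip(scores, reviewer_papers), reverse=True)[:2]

for score, title in best_matches:
    print(f"{score:.2f}  {title}")
```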
3//
DOCUEVAL: An LLM-based AI Engineering Tool for Building Customisable Document Evaluation Workflows
arXiv - 12 Sept 2025 - 13 min read
Usually arXiv papers look at theoretical aspects of incorporating AI into peer review, using equations like mathematical camouflage. However, this is a simpler read and has some real takeaways that could be used IRL. I particularly liked the customisation of the workflows, allowing scoring or narrative reviews alongside different reasoning strategies. Take a look for yourself.
Claude assessment of this preprint: https://claude.ai/public/artifacts/e73b25ef-09cc-4a9a-b04e-d3c80a88e839
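To give a flavour of the customisation idea - a minimal sketch of the concept rather than DOCUEVAL's actual interface, with call_llm standing in as a hypothetical placeholder for whatever model API you use - a workflow can be a small configuration object that picks an output mode and a reasoning strategy, then assembles the prompt accordingly:

```python
# Sketch of a configurable document-evaluation workflow in the spirit of
# DOCUEVAL: choose an output mode (numeric scoring vs narrative review) and
# a reasoning strategy, then build the prompt from those choices.
# `call_llm` is a hypothetical placeholder, not a real DOCUEVAL function.
from dataclasses import dataclass

@dataclass
class EvalWorkflow:
    output_mode: str          # "scoring" or "narrative"
    reasoning: str            # e.g. "direct", "step_by_step", "criteria_first"
    criteria: tuple = ("novelty", "methods", "clarity")

    def build_prompt(self, manuscript_text: str) -> str:
        if self.output_mode == "scoring":
            task = (f"Score the manuscript from 1-5 on each of: "
                    f"{', '.join(self.criteria)}. Return one line per criterion.")
        else:
            task = "Write a short narrative review covering strengths and weaknesses."
        strategy = {
            "direct": "Answer directly.",
            "step_by_step": "Reason step by step before giving your answer.",
            "criteria_first": "First list the evaluation criteria, then assess each.",
        }[self.reasoning]
        return f"{task}\n{strategy}\n\nManuscript:\n{manuscript_text}"

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM API you use."""
    raise NotImplementedError

# Example: a scoring workflow with step-by-step reasoning.
workflow = EvalWorkflow(output_mode="scoring", reasoning="step_by_step")
prompt = workflow.build_prompt("...manuscript text...")
# review = call_llm(prompt)
```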
4//
AI-Powered Citation Auditing: A Zero-Assumption Protocol for Systematic Reference Verification in Academic Research
arXiv - 17 Oct 2025 - 16 min read
It’s becoming clear to anyone who handles submissions to academic journals that the quality of referenced works is dropping. It has often been the case that authors cite works they have never read, but now they are citing papers that don’t exist at all, or that do exist but are completely irrelevant to the work being described. The human labour involved in checking each reference makes doing so impractical at scale, so AI assistance in this endeavour is expected to be welcome.
I’m not sure what the guys at Scite.ai or Veracity would make of this, but the low error rates and relatively speedy processing times mean this will be of interest to editorial offices everywhere.
Claude assessment of this preprint: https://claude.ai/public/artifacts/6f38ff9a-11dc-436c-862f-d10047e17967
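To show what the existence-check step of reference verification can look like in practice - my own sketch against the public Crossref REST API, not the protocol from the paper - you can ask whether a cited DOI resolves to a real work and whether its registered title matches what the author claims to cite:

```python
# Sketch: verify that a cited DOI exists and roughly matches the claimed
# title, using the public Crossref REST API. Illustrative only - the
# paper's "zero-assumption" protocol goes further (relevance checks, etc.).
import requests

def check_reference(doi: str, claimed_title: str) -> dict:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        return {"doi": doi, "exists": False, "note": "DOI not found in Crossref"}
    resp.raise_for_status()
    record = resp.json()["message"]
    actual_title = (record.get("title") or [""])[0]
    # Crude title comparison; a real checker would use fuzzy matching.
    matches = claimed_title.lower()[:40] in actual_title.lower()
    return {"doi": doi, "exists": True,
            "registered_title": actual_title, "title_matches": matches}

print(check_reference("10.1038/nature14539", "Deep learning"))
```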
5//
AI "peer" review - the impact on scientific publishing
evocellnet blog - 18 Nov 2025 - 5 min read
Pedro Beltrao looks at q.e.d. and Nature Research Assistant with three papers from his lab. I’ll let you read what he has to say on each, but it’s the summary that caught my eye:
AI "peer" review is here to stay. Whether we want it or not, these tools are now reaching a point where they can be used to identify gaps in a scientific manuscript that could pass as a human (peer) review report. There are many ways these tools can be used and abused. The most positive outcome of this might be that authors take advantage of these as assistants to help improve the clarity of the manuscripts before making them public. The most obvious negative outcome is that these will be used as lazy human reviewing just copy-pasted to satisfy the ever growing need to peer-review our ever growing production of scientific papers every year.
1 year ago: Scalene 21 - 17 Nov 2024
Let’s chat
I’ll be at the STM London meeting on both days in early December. Come and say hi.
Curated by me, Chris Leonard.
If you want to get in touch, please simply reply to this email.
