Scalene 2: Assisting peer review with LLMs
Humans | AI | Peer review. The triangle is changing.
Welcome to the second issue of Scalene. I’m anticipating lots of news coming out of the SSP conference over the coming week, but before that there is much to catch up on regarding the interplay of AI & peer review, so let’s take a look at some developments from the last few weeks.
26th May 2024
EASE panel discussion: Assisting Peer Review with Technology and Large Language Models
If you missed this, you missed out - but you can watch the discussion again thanks to YouTube. This is a great examination of AI in peer review through the eyes of three different experts. I particularly enjoyed Mike Thelwall’s look at evaluating research ‘quality’ using ChatGPT; a minimal sketch of what that kind of prompt might look like follows the link below.
https://www.youtube.com/watch?v=hPOxK8K3xoY
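For the curious, here’s a rough sketch of what a quality-scoring prompt might look like in code. The model name, the prompt wording, and the 1–4 scale are my own assumptions for illustration, not Thelwall’s actual protocol:

```python
# Hypothetical sketch: asking an LLM to score research 'quality'.
# The model, prompt, and 1-4 scale are assumptions, not a published method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are an experienced journal reviewer. Score the following abstract "
    "for originality, rigour, and significance on a scale of 1 (weak) to "
    "4 (world-leading), then justify the score in two sentences.\n\n{abstract}"
)

def score_abstract(abstract: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": PROMPT.format(abstract=abstract)}],
    )
    return response.choices[0].message.content

print(score_abstract("We present a method for ..."))
```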
The AI Review Lottery: Widespread AI-Assisted Peer Reviews Boost Paper Scores and Acceptance Rates
Definitely worth a quick read this week: conference abstract submissions that received an AI-assisted peer review were 4.9 percentage points more likely to be accepted than submissions that did not. A toy illustration of that statistic follows the link below.
https://arxiv.org/abs/2405.02150v1
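To make the headline number concrete, here’s a toy calculation. The counts are invented for the example (the real figures are in the paper); it shows what a 4.9 percentage-point gap looks like, and how you might check whether such a gap could be chance alone:

```python
# Toy illustration of a 4.9 percentage-point acceptance gap.
# The counts below are invented; the real numbers are in the paper.
from math import sqrt

ai_accepted, ai_total = 420, 1000        # hypothetical AI-assisted submissions
human_accepted, human_total = 371, 1000  # hypothetical unassisted submissions

p_ai = ai_accepted / ai_total
p_human = human_accepted / human_total
print(f"Acceptance gap: {(p_ai - p_human) * 100:.1f} percentage points")  # 4.9

# Two-proportion z-test: is the gap larger than chance alone would allow?
p_pool = (ai_accepted + human_accepted) / (ai_total + human_total)
se = sqrt(p_pool * (1 - p_pool) * (1 / ai_total + 1 / human_total))
z = (p_ai - p_human) / se
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```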
What Can Natural Language Processing Do for Peer Review?
NLP offers the potential to enhance peer review by analyzing text-based artefacts, not just manuscripts but the reviews themselves. The paper outlines the challenges facing NLP for peer review and calls for action to advance research in the area, noting that NLP tools can assist with manuscript evaluation, review quality, and post-peer review analysis. A crude sketch of what the review-quality angle might involve follows the link below.
https://arxiv.org/abs/2405.06563v1
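As a flavour of what analysing ‘review quality’ might involve, here’s a deliberately crude, rule-based sketch. The heuristics and word lists are my own illustration, far simpler than the NLP methods the paper surveys:

```python
# Crude rule-based signals of review quality. The heuristics and word lists
# are my own invention for illustration, not from the paper.
import re

HEDGES = {"might", "perhaps", "possibly", "unclear", "seems"}
SUBSTANTIVE = {"method", "results", "baseline", "statistics", "figure", "citation"}

def review_signals(review: str) -> dict:
    words = re.findall(r"[a-z']+", review.lower())
    return {
        "length_words": len(words),
        "hedging_terms": sum(w in HEDGES for w in words),
        "substantive_terms": sum(w in SUBSTANTIVE for w in words),
        "asks_questions": review.count("?"),
    }

print(review_signals(
    "The method section is unclear: which baseline was used? "
    "Figure 2 seems to contradict the results in Table 1."
))
```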
Researchers warned against using AI to peer review academic papers
It’s nice (and unusual) to read a balanced opinion piece on the use of AI in peer review, and that’s exactly what this piece in Semafor is. The headline warning is restricted to using AI alone to review abstract submissions to conferences such as NeurIPS, but even in that context there was some support for the idea of AI-supported peer review:
Liang said that some comments generated by ChatGPT are not too dissimilar from experts, and can raise some of the same issues in research that human reviewers would have flagged, too. In fact, he told Semafor that he asked the chatbot to critique his team’s paper and found that it highlighted some of the same points that human reviewers did.
Fascinating session at SSP this week
This looks like an amazing panel session at the upcoming SSP conference. Sven Fund, Teodoro Pulvirenti, and Jennifer Regala are all talking on AI-enhanced peer review. Friday afternoon at 1:30pm [disclaimer: this session is being moderated by my colleague Jay Patel - and we are both employed by Cactus Communications].
Elsevier announce upgrades to ‘Evaluate Manuscript’ functionality
This tool is aimed squarely at the triage process carried out by in-house editorial staff.
https://www.elsevier.com/connect/announcing-the-new-evaluate-manuscript
Decentralized Peer Review in Open Science: A Mechanism Proposal
Finally, although much of this newsletter is devoted to AI and the construction of peer reviews, I’m not averse to examining new ways of conducting peer review. This proposal uses tokenised incentives, built on smart contracts and non-fungible tokens, to increase engagement and transparency in the peer review process. It all makes sense, but it feels like it has been tried before, and I’m not sure the world is ready for it (yet, again). A back-of-the-envelope sketch of the general idea follows the link below.
https://arxiv.org/abs/2404.18148v1
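For a sense of the mechanics, here’s a back-of-the-envelope simplification of a stake-and-reward review scheme of the general kind proposed. It’s my own plain-Python illustration, not the authors’ smart-contract design, and the token amounts are arbitrary:

```python
# Toy stake-and-reward review mechanism: my own simplification in plain
# Python, not the authors' smart-contract design. Reviewers lock up a stake
# when accepting an assignment; completing the review returns it plus a reward.
from dataclasses import dataclass, field

@dataclass
class ReviewPool:
    stake: int = 10          # tokens a reviewer locks up when accepting
    reward: int = 5          # tokens paid out for a completed review
    balances: dict = field(default_factory=dict)
    staked: dict = field(default_factory=dict)

    def accept_assignment(self, reviewer: str) -> None:
        # every reviewer starts with 50 tokens in this toy economy
        self.balances[reviewer] = self.balances.get(reviewer, 50) - self.stake
        self.staked[reviewer] = self.stake

    def submit_review(self, reviewer: str) -> None:
        # stake returned plus reward on completion; a no-show forfeits the stake
        self.balances[reviewer] += self.staked.pop(reviewer) + self.reward

pool = ReviewPool()
pool.accept_assignment("reviewer_a")
pool.submit_review("reviewer_a")
print(pool.balances["reviewer_a"])  # 55: started with 50, staked 10, got 10 + 5 back
```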
More fun next Sunday.