Scalene 15: PeerArg / User perspectives / Rise & Fall
Humans | AI | Peer review. The triangle is changing.
Anyone else had enough of peer review news after this last week? No, me neither. So let’s look at a few things you may not have seen in the last 7 days, and some you almost certainly have.
29th September 2024
// 1
PeerArg: Argumentative Peer Review with LLMs
arXiv - 25 September 2024 - 27 min read
We introduced two approaches, PeerArg and an end-2-end LLM, to enhance the peer reviewing process by predicting paper acceptance from reviews. In contrast to the end-2-end LLM that uses few-shot learning techniques to predict paper acceptance in a black-box nature, PeerArg adopts both methods from LLM and computational argumentation to support a decision in the peer reviewing process. Our experimental results show that PeerArg can outperform the end-2-end LLM, while being more transparent due to the interpretable nature of argumentation.
CL: For reasons I won’t go into right now, I’m writing this from the concrete show ring of an actual cattle market in North Yorkshire. It’s freezing, so I must admit I’ve not read this thoroughly yet, but the approach looks very promising.
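If you’re curious what the “end-2-end” few-shot baseline mentioned in the abstract might look like in practice, here is a minimal sketch. To be clear, this is not the authors’ code: the prompt wording, the example reviews, and the unimplemented LLM call are all illustrative assumptions of mine, and PeerArg’s argumentation layer is not shown at all.

```python
# Illustrative sketch only: a few-shot prompt for predicting paper acceptance
# from reviews, in the spirit of the "end-2-end LLM" baseline described in the
# PeerArg paper. Prompt wording and examples are hypothetical, not the authors'.

# Hypothetical labelled examples: (reviews, decision) pairs used as few-shot context.
FEW_SHOT_EXAMPLES = [
    ("Reviews: 'Strong results, minor clarity issues.' | 'Solid contribution.'",
     "Accept"),
    ("Reviews: 'Limited novelty.' | 'Experiments do not support the claims.'",
     "Reject"),
]

def build_prompt(target_reviews: list[str]) -> str:
    """Assemble a few-shot prompt: labelled examples first, then the unlabelled target."""
    lines = ["Decide whether each paper should be accepted, based on its reviews.", ""]
    for reviews, decision in FEW_SHOT_EXAMPLES:
        lines.append(reviews)
        lines.append(f"Decision: {decision}")
        lines.append("")
    lines.append("Reviews: " + " | ".join(f"'{r}'" for r in target_reviews))
    lines.append("Decision:")
    return "\n".join(lines)

def predict_acceptance(target_reviews: list[str]) -> str:
    """Placeholder: send the prompt to whatever LLM you have access to and
    return its one-word completion ('Accept' or 'Reject')."""
    prompt = build_prompt(target_reviews)
    raise NotImplementedError("Wire this up to your LLM client of choice.")

if __name__ == "__main__":
    # Print the assembled prompt for a made-up pair of reviews.
    print(build_prompt(["Interesting idea, but evaluation is thin.",
                        "Well written; results are convincing on two benchmarks."]))
```

The interesting part of the paper is precisely what this sketch omits: replacing the black-box decision with an interpretable computational-argumentation step, which is where the transparency claim comes from.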
// 2
If generative AI accelerates science, peer review needs to catch up
LSE Blog - 25 September 2024 - 5 min read
At the end of a week where peer review has been scrutinised to within an inch of its life, and then scrutinised some more, relatively few viewpoints have looked to the near future. Many webinars I attended this week stressed the (current) inability of AI to match human insight. This post posits an interesting reversal of that:
Publishers must now adapt and innovate just as they did during the shift from print to digital at the end of the 20th century. However, peer review presents a challenge to these visions. 100 million hours were estimated to be spent on peer review in 2020, a figure that could rise exponentially if reviewers are not supported. Given that the current system is already viewed by some as working at capacity, Lisa Messeri and M. J. Crockett have argued an AI-enabled ‘science-at-volume’ could lead to the ‘illusion of understanding’, whereby a significant escalation in scientific productivity and output is not matched by human insight and judgement.
// 3
Innovation and Technology in Peer Review: Some User Perspectives
The Scholarly Kitchen - 26th September 2024 - 10 min read
The Scholarly Kitchen did a stellar job of covering this year’s theme of Peer Review Week. And while it would be gauche of me to highlight my own contribution, Friday’s post was one of those multi-viewpoint articles that TSK do so well. A mix of researchers, editors, and publishers weighed in on how they see peer review evolving in the near term. I’d love to see what funders think of this too, but the insights from these users are a good temperature gauge for current attitudes in the industry.
https://scholarlykitchen.sspnet.org/2024/09/26/innovation-and-technology-in-peer-review-some-user-perspectives/
// 4
Recommendations on the Use of AI in Scholarly Communication
EASE - 26 September 2024 - 8 min read
EASE published an excellent set of guidelines this week, with specific and sensible recommendations for authors, publishers, and reviewers. However, they also acknowledge that publisher policies on this matter can lack a certain bite:
Any AI peer review policy should be highlighted in the review invitation emails and in the online submission platform. We are aware that several journals, publishers, and funding agencies have prohibited the use of AI tools by peer reviewers (e.g., Royal Society, National Institutes of Health, Elsevier) due to potential risks of bias, confidentiality concerns, and their unproven accuracy, effectiveness, and reliability. However, such bans are hard to implement, and it is not clear what repercussions, if any, there will be for their use.
It really is hard to see an effective way to govern the use of AI by peer reviewers, especially if they use its output as the starting point for a review written in their own hand, but it’s a conversation worth having.
https://ease.org.uk/communities/peer-review-committee/peer-review-toolkit/recommendations-on-the-use-of-ai-in-scholarly-communication/
// 5
The rise and fall of peer review
Experimental History - 13 December 2022 - 13 min read
OK, I’m going there. Classic post alert.
I couldn’t help but come back to this influential blog post during PRW. Adam Mastroianni is a great writer, and here he puts his keyboard to great use to question whether peer review has any value at all. Pretty much all papers get published eventually, so it’s not acting as a filter on the corpus as a whole. So is it worth the effort we spend on it to only marginally improve the vast majority of papers? If you only read one thing from the newsletter this week, make it this. I keep coming back to it and finding new things to agree with.
And finally…
Apologies for the somewhat briefer newsletter this week. It’s driven by an acknowledgement that you might be all peer-reviewed out, a short battery life on my laptop, and the real possibility of me getting frostbite in my fingers (photo courtesy of my wife, who later took pity on me and bought me a hot coffee to hold in my hands). Back to normal next week.
Let's do coffee!
It’s conference season again. I’m setting up meetings at STM & Frankfurt Book Fair over the coming weeks, so please get in touch (reply to this email) if you want to say hi at either of those events.
Curated by Chris Leonard.
If you want to get in touch with me, please simply reply to this email.