
Scalene 12: ScienceCritAI / NIH say no / PeerReviewerGPT

Humans | AI | Peer review. The triangle is changing.

I’m back from a few weeks of R&R in France, and my word there is a lot to catch up on. I’ve included a few extra stories as links at the end of this issue. Also, ALPSP and Peer Review Week are looming. I’ll be attending ALPSP and active in PRW - see the grey box at the end for more details. Right, let’s crack on:

1st September 2024

// 1
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (+ a review)
arXiv.org - 15 Aug 2024 - 36 min read

Even as I was sailing across the Channel to Roscoff, this seismic paper appeared on arXiv. It has been much discussed by others by now, but if you’ve missed some of the hype, AI Scientist “generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation” - something that even excites the Anthropic CEO.
https://arxiv.org/pdf/2408.06292

Interesting, but not strictly peer review newsletter material, I hear you say. Maybe not, but I’m using this paper as an excuse to showcase one of my new favourite peer review report generators. Take a look at this report on the arXiv paper: https://billster45.github.io/ScienceCritAI/AI_Scientist_20240816_064207.html

I’ve been following the ScienceCritAI Twitter account (and another related one) for some time, but recently the reports seem to have gotten much better. The project is somewhat shrouded in secrecy, but it is using Gemini 1.5 Pro APIs to generate these reports. It’s not openly accessible yet either, but I’ll try to find out more and share with you all. 👀

// 2
The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process
NIH - 23 June 2023 - 2 min read

It’s disheartening to see the world’s largest biomedical research funder simply dismiss the role AI can play in evaluating grant applications, particularly on the grounds of privacy - a problem that can be worked around with a judicious choice of LLM and access mode (API vs web).
I see this as an opportunity for NIH. They could develop their own peer review LLM platform, seeded with examples of successful and unsuccessful grant applications, on their own secure servers. They could see who was using it and how the outputs were reflected in reviewers’ reports. Hell, it may eventually make the review process quicker and less prone to human bias. It’s easy to say ‘no’ to something, but it’s better to adapt to what people clearly want to do anyway, on your own terms.

https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html
and an analysis of the same:
https://about.citiprogram.org/blog/nih-clarifies-prohibition-on-the-use-of-ai-tools-in-peer-review-processes/

// 3
ChatGPT identifies gender disparities in scientific peer review
eLife - 03 Nov 2023 - 25 min read

The results further revealed that female first authors received less polite reviews than their male peers, indicating a gender bias in reviewing. In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author, for which I discuss potential causes. Together, this study highlights the potential of generative artificial intelligence in performing natural language processing of specialized scientific texts. As a proof of concept, I show that ChatGPT can identify areas of concern in scientific peer review, underscoring the importance of transparent peer review in studying equitability in scientific publishing.

CL: Interesting analysis of human-generated peer review reports, showing again that this isn’t the gold standard we should be aiming to replicate. The author also suggests ways editorial offices can minimise gender disparity with a simple tweak to workflows. It might also be nice to consider a review report analysis tool which flags such biases within reports.
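To make that last idea concrete, here is a toy sketch of what such a flagger might look like. The paper used ChatGPT to score review language; the invented word lists, scoring formula, and threshold below are a crude stand-in for illustration only, not the study’s method.

```python
# Toy politeness flagger for review reports. The lexicons and the
# threshold are invented; a real tool would use an LLM or a trained
# classifier rather than word counts.

HEDGING_OR_POLITE = {"perhaps", "please", "consider", "might", "could", "suggest"}
BLUNT_OR_HARSH = {"wrong", "flawed", "unacceptable", "sloppy", "trivial"}

def politeness_score(report: str) -> float:
    """Return a crude politeness score in [-1, 1] for a review report."""
    words = [w.strip(".,;:!?").lower() for w in report.split()]
    polite = sum(w in HEDGING_OR_POLITE for w in words)
    harsh = sum(w in BLUNT_OR_HARSH for w in words)
    total = polite + harsh
    return 0.0 if total == 0 else (polite - harsh) / total

def flag_report(report: str, threshold: float = 0.0) -> bool:
    """Flag a report for editorial attention if it scores below threshold."""
    return politeness_score(report) < threshold
```

An editorial office could run every incoming report through something like this and route flagged ones to an editor, which is all the “simple tweak to workflows” idea really needs.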

// 4
Peer Reviewer GPT
chatgpt.com - 01 Aug 2024 - 1 min read

I’m generally hesitant about promoting things developed and promoted by social media experts/influencers. I don’t know why, but in this instance Razia Aliani has delivered something quietly amazing. The Peer Reviewer GPT allows you to upload your own manuscript to get pre-submission peer review feedback. I suspect she emphasises ‘your own work’ to dissuade people from using it for other peer review purposes, but it is a great tool. The section-wise analysis of strengths and weaknesses is at just the right level of detail, and a handy overview and editorial recommendation at the end give you insights on how to improve the paper before submitting. Another one to watch.
https://chatgpt.com/g/g-jJxL7NT1A-peer-reviewer

// 5
ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews
arXiv - 06 August 2024 - 26 min read

It’s always exciting to see tools that are going to enable others to build smart solutions in the peer review space, and ARIES is just that: a database of review comments and their corresponding paper edits. It aims to facilitate research on understanding the peer review process and developing systems to assist researchers in responding to reviews.

ARIES captures this iterative process, providing a valuable resource for analyzing how scientists incorporate reviewer comments into their paper edits.

This dataset could enable the development of AI-powered tools to help authors more efficiently and effectively respond to peer reviews. By studying the patterns in how authors update their papers, researchers may also gain new insights into current systems and author behaviours.
https://arxiv.org/abs/2306.12587

And finally…
Well, I’m ending on another link dump to clear the decks and start afresh for the new school year. So here are some other stories that caught my eye over the last few weeks:

Let's do coffee!
It’s conference season again. I’m at ALPSP and FBF over the coming weeks, so please get in touch (reply to this email) if you want to say hi at either of those events.

It’s also Peer Review Week. I’m talking at each of these events:

24 Sept: 12:30 GMT - Envisioning a Hybrid Model of Peer Review: Integrating AI with reviewers, publishers, & authors - with Serge Horbach, Haseeb Irfanullah, and Marie McVeigh. Registration link to follow next week.

24 Sept 14:00 GMT - Roundtable discussion on AI & Peer Review, hosted by MDPI. Registration link to follow next week.

25 Sept 14:00 GMT - ISMTE Webinar on Reviewer Burnout and AI. Registration link to follow next week(!)

I still have time on the 23rd or 26th if you are looking for a panellist for anything related to this newsletter.

Curated by Chris Leonard.
If you want to get in touch with me, please simply reply to this email.