Scalene 8: ChatGPT as reviewer / PEER / Structured reviews

Humans | AI | Peer review. The triangle is changing.

Back to normal this week, although with multiple sports-related distractions. I’m excited to share what’s new in the space, so let’s go…

14th July 2024

// 1
Assessing ChatGPT's ability to emulate human reviewers in scientific research: A descriptive and qualitative approach
sciencedirect.com - 01 July 2024 - 5 min read (summary)

We included the first submitted version of the latest twenty original research articles published by the 3rd of July 2023, in a high-profile medical journal. Each article underwent evaluation by a minimum of three human reviewers during the initial review stage. Subsequently, three researchers with medical backgrounds and expertise in manuscript revision, independently and qualitatively assessed the agreement between the peer reviews generated by ChatGPT version GPT-4 and the comments provided by human reviewers for these articles. The level of agreement was categorized into complete, partial, none, or contradictory.

https://doi.org/10.1016/j.cmpb.2024.108313

CL: I love this experimental design, which compares three human reviewers’ opinions with a GPT-4-generated report for manuscripts submitted to a single journal. As expected, there are strong areas of overlap, and other areas where GPT-4 agrees less with the human reviewers. I’d like to see this repeated with GPT-4o and Claude 3.5, but right now my takeaway is that a hybrid approach is likely to produce the best reports: using the LLM to surface things the humans missed, and using humans to weed out irrelevant or incorrect assertions from the LLM.

// 2
Insights 2024: Attitudes toward AI
Elsevier - July 2024 - 55 min read

Elsevier have been on a bit of a roll recently with timely thought-leadership pieces encompassing the opinions of researchers, university administrators, and funders. Their latest examination of attitudes to AI in research is, as its name suggests, insightful, although I was a little confused by some of the answers around AI and peer review.

A previous study (Research Futures 2.0) suggested that 21% of researchers would read an article that had undergone exclusively AI peer review, expecting it to be freer of bias and consistent across all submitted manuscripts, but 59% strongly disagreed, valuing human understanding above any AI process.

Now, however, it seems 93% of respondents believe that AI will bring benefits to the publication process in the authoring, reviewing, and impact-monitoring aspects of the work. Attitudes seem to be changing, albeit with the sensible caveat that AI’s influence on a review report should be noted.

Link to full report below. There are some excellent links at the bottom of the page (Databooks) which delve into the data more deeply and granularly (if that’s a word?).
https://www.elsevier.com/en-gb/insights/attitudes-toward-ai

// 3
PEER: Expertizing Domain-Specific Tasks with a Multi-Agent Framework and Tuning Methods
arXiv.org - 10 July 2024 - 24 min read

High performance requires sophisticated processing techniques, yet managing multiple agents within a complex workflow often proves costly and challenging. To address this, we introduce the PEER (Plan, Execute, Express, Review) multi-agent framework. This systematizes domain-specific tasks by integrating precise question decomposition, advanced information retrieval, comprehensive summarization, and rigorous self-assessment.

CL: I love the use of multiple agents here to determine how to answer the question and then decide if the answer is good enough. If not, it goes through another iteration.
https://arxiv.org/abs/2407.06985v2
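
For the architecturally curious, here’s how I picture that loop. This is a toy sketch based only on the abstract: the llm() stub, the prompts, and the stopping rule are my own assumptions, not the authors’ implementation.

```python
# A toy sketch of the PEER (Plan, Execute, Express, Review) loop as I read
# it from the abstract. The llm() stub, the prompts, and the stopping rule
# are assumptions, not the paper's code.

def llm(prompt: str) -> str:
    """Placeholder: swap in a real model client here."""
    return f"[model output for: {prompt[:40]}...]"

def plan(question: str) -> list[str]:
    # Plan: decompose the question into precise sub-questions.
    return llm(f"Split into sub-questions: {question}").splitlines()

def execute(sub_questions: list[str]) -> list[str]:
    # Execute: retrieve or derive an answer for each sub-question.
    return [llm(f"Answer: {sq}") for sq in sub_questions]

def express(question: str, findings: list[str]) -> str:
    # Express: summarise the findings into a single answer.
    return llm(f"Summarise an answer to '{question}' from: {findings}")

def review(question: str, answer: str) -> bool:
    # Review: self-assess whether the answer is good enough.
    verdict = llm(f"Reply yes or no. Does this answer '{question}'? {answer}")
    return verdict.lower().startswith("yes")

def peer(question: str, max_rounds: int = 3) -> str:
    answer = ""
    for _ in range(max_rounds):
        answer = express(question, execute(plan(question)))
        if review(question, answer):
            break  # the review agent is satisfied; stop iterating
    return answer  # otherwise, the best attempt after max_rounds

print(peer("Which factors drove the company's Q2 results?"))
```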

// 4
Viewpoint: the evolving landscape of peer review
Emerald.com - 12 March 2024 - 22 min read

This review was recommended to me recently and I finally got round to reading it this week. It’s a great examination of the current state of peer review more broadly, but struggles to answer its own questions on ChatGPT as a reviewer of academic articles.

// 5
Structured peer review: pilot results from 23 Elsevier journals
PeerJ - 25 June 2024 - 20 min read

I’ve been delaying commenting/posting on this study for my own reasons, but it should be of interest to anyone who wants to speed up peer review and reduce its biases.

Structured peer review consisting of nine questions was piloted in August 2022 in 220 Elsevier journals. We randomly selected 10% of these journals across all fields and IF quartiles and included manuscripts that received two review reports in the first 2 months of the pilot, leaving us with 107 manuscripts belonging to 23 journals. Eight questions had open-ended fields, while the ninth question (on language editing) had only a yes/no option. Reviewers could also leave Comments-to-Author and Comments-to-Editor. Answers were independently analysed by two raters, using qualitative methods.
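
To make the shape of such a report concrete, here’s a minimal sketch of one review record as a data structure. The field names are my placeholders; the pilot’s actual question wording isn’t given in the excerpt above.

```python
# A minimal sketch of one structured review record, matching the pilot's
# shape: eight open-ended questions, a ninth yes/no question on language
# editing, plus the two free-text comment fields. Field names are
# placeholders; the actual question wording isn't in the excerpt.

from dataclasses import dataclass, field

@dataclass
class StructuredReview:
    open_answers: list[str] = field(default_factory=lambda: [""] * 8)  # Q1-Q8, free text
    language_editing_needed: bool = False                              # Q9, yes/no only
    comments_to_author: str = ""
    comments_to_editor: str = ""

    def is_complete(self) -> bool:
        # Usable for analysis once every open-ended question is answered.
        return all(answer.strip() for answer in self.open_answers)
```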

CL: So what is my hesitation here? Nothing to do with the study, which is a solid piece of work, but rather with the idea of homogenising peer review reports. It’s my belief that peer review will become the discerning service between journals in the future, and journals that don’t offer something different from their rivals risk losing submissions. If your peer review service and report were better than, and different from, your rival’s, wouldn’t that attract more submissions? Maybe mid-tier journals within a publisher portfolio would benefit from this for cascade purposes, but not the top-level journals.

And finally…
So, England made it to the final of Euro 2024, and it’s the topic of conversation amongst most people on the streets right now. But I’m not going to jinx anything by saying more. Instead I’m sharing this image, which is something I have to fight when writing this newsletter too!

Let's do coffee!
I’m travelling to the following places over the next few weeks. Always happy to meet and discuss anything related to this newsletter. Just reply to this email and we can set something up:

Birmingham: 29th July
Leeds: 8th August
Oxford: 9th August
ALPSP (Manchester): 11-13 September

Curated by Chris Leonard.
If you want to get in touch with me, please simply reply to this email.