Scalene 6: AgentReview / RelevAI-Reviewer / Video

Humans | AI | Peer review. The triangle is changing.

There have been some great papers on applied AI (as it pertains to peer review) appearing on arXiv recently. I highlight some of them here, but I am also planning a future issue focussing more on people and workflows and less on the underlying technology. After all, Scalene describes a triangle, not a dot. However, most of the recent papers I have found interesting also acknowledge the role humans play in the process. Read on to find out more.

23rd June 2024

// 1
AgentReview: Exploring Peer Review Dynamics with LLM Agents
arXiv.org - 18 June 2024 - 34 min read

AgentReview is open and flexible, designed to capture the multivariate nature of the peer review process. It features a range of customizable variables, such as the characteristics of reviewers, authors, and area chairs (ACs), as well as the reviewing mechanisms. This adaptability allows for the systematic exploration and disentanglement of the distinct roles and influences of the various parties involved in the peer review process, and supports the extension to alternative reviewer characteristics and more complex reviewing processes. By simulating peer review activities with over 53,800 generated peer review documents, including over 10,000 reviews, on over 500 submissions across four years of ICLR, AgentReview achieves statistically significant insights without needing real-world reviewer data, thereby maintaining reviewer privacy. We conduct both content-level and numerical analyses after running large-scale simulations of the peer review process.

https://arxiv.org/html/2406.12708v1

CL - In what I hope is a new trend in peer review/AI research, researchers are beginning to factor the social and human dimensions into peer review analysis. It is also the first paper I have read to make relevant use of emojis 😻. This is more a review of peer review processes than an examination of peer review report generation, but it is fascinating to see the ways in which papers can be improved (direct reviewer-author discussion, for one).
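For readers who want to see the shape of this idea in code, here is a minimal, hypothetical sketch of agent-based review simulation: role-conditioned LLM agents (reviewers, author, area chair) take turns adding to a shared transcript. This is not the AgentReview codebase or its prompts; call_llm is a stand-in for whatever LLM client you prefer, and the Agent and simulate_review names are my own invention for illustration.

from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; swap in your client of choice.
    return f"[LLM response to: {prompt[:60]}...]"

@dataclass
class Agent:
    role: str      # e.g. "reviewer", "author", "area chair"
    persona: str   # e.g. "harsh", "responsible", "inexperienced"

    def act(self, context: str) -> str:
        prompt = (
            f"You are a {self.persona} {self.role} in a conference peer review.\n"
            f"Context so far:\n{context}\n"
            "Write your next contribution (review, rebuttal, or meta-review)."
        )
        return call_llm(prompt)

def simulate_review(abstract: str, reviewers: list[Agent],
                    author: Agent, area_chair: Agent) -> str:
    # One simplified round: reviews -> author rebuttal -> area chair decision.
    transcript = f"Submission abstract:\n{abstract}\n"
    for reviewer in reviewers:
        transcript += f"\n[{reviewer.persona} reviewer]\n{reviewer.act(transcript)}\n"
    transcript += f"\n[author rebuttal]\n{author.act(transcript)}\n"
    transcript += f"\n[area chair decision]\n{area_chair.act(transcript)}\n"
    return transcript

reviewers = [Agent("reviewer", "responsible"), Agent("reviewer", "harsh")]
print(simulate_review("An LLM-based study of peer review dynamics.",
                      reviewers, Agent("author", "diligent"), Agent("area chair", "conformist")))

Varying the persona strings, or adding further rounds of reviewer-author discussion, is roughly the kind of knob the paper's framework exposes in a far more principled way.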

// 2
RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance
arXiv.org - 18 June 2024 - 24 min read

Our findings underscore the potential of incorporating AI to augment the traditional, manual peer-review process, offering a promising avenue toward reducing human bias and time constraints. The core contribution of this work is the development of an AI model capable of effectively ranking scientific manuscripts in terms of relevance, leveraging a novel dataset specifically curated for this purpose.

CL - The breadth of subject areas examined here makes this work exceptional. Most work in this space (see the first story above) is derived from computer science conference reviewing. The authors have broken out of that mould and applied their work to fields such as Chemistry, Law, Linguistics, and Geology. The limitations of this work are significant (and acknowledged by the authors), but the direction of travel gives cause for excitement.
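To make the task concrete, here is a toy sketch of the problem RelevAI-Reviewer tackles: given a survey prompt, rank candidate papers by relevance. The authors train dedicated models on a purpose-built dataset; the baseline below simply uses TF-IDF and cosine similarity from scikit-learn, so treat it as an illustration of the task rather than their method, and the function name and sample texts as my own placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_by_relevance(survey_prompt: str, candidate_abstracts: list[str]) -> list[tuple[int, float]]:
    # Return (index, score) pairs for the candidates, most relevant first.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([survey_prompt] + candidate_abstracts)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True)

prompt = "Survey of machine learning methods for peer review assistance"
papers = [
    "A study of mineral formation in sedimentary geology.",
    "Large language models for automated review generation.",
    "Benchmarking AI reviewers on manuscript relevance.",
]
for idx, score in rank_by_relevance(prompt, papers):
    print(f"{score:.3f}  {papers[idx]}")

A real system would replace the TF-IDF scores with a learned ranker trained on labelled relevance judgements, which is essentially what this benchmark is designed to evaluate.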

// 3
Generative AI is spamming up academic journals
TechCrunch - 19 June 2024 - 5 min read

This is a decent look at the ‘input’ side of peer reviewing: the submission of gen-AI-generated papers and the journals that publish them, and exclusively them. CiteScore gets some undue attention for ranking some of these journals highly (due to strong cross-citation behaviour between nefarious journals), but the real news is that this is covered in TechCrunch. The wider world is watching how we deal with this situation/mess.
https://techcrunch.com/2024/06/19/this-week-in-ai-generative-ai-is-spamming-up-academic-journals/

// 4
How arXiv is planning to revolutionise peer review
LinkedIn - 20 June 2024 - 1 min read

I promised more content about the human and workflow aspects of peer review, and then stumbled across this in my LinkedIn feed. I was curious enough to click through and explore, and found that it appears to be run by ScienceCast. As we saw in the first story in today’s newsletter, author-reviewer interactions are important, and this may help. One to watch (click the image below to see the post).

// 5
Why we need AI assistance (at least)
SpringerLink - 12 June 2024 - 8 min read

In a lament published by Advances in Health Sciences Education (Springer Nature), the editors plaintively ask “Where have all the reviewers gone?”

Because there are so many unacknowledged review requests, rather than inviting two individuals to review at a time, many handling editors now send out multiple invitations with the hope that they will get at least a few ‘hits’, thus increasing the number of requests and likely therefore contributing to those invited to review feeling overwhelmed. Indeed, it can seem to be a bit of an arms race: the more that invitees do not respond to review requests, the more individuals are invited to review. The one factor that would be most likely to resolve the problem, that of building greater capacity in academic communities, is something that journals can do little to bring about.

https://link.springer.com/article/10.1007/s10459-024-10350-2

CL - I think these editors are voicing the reality for the majority of journals these days. Reviewers are hard to find, and securing 2-3 reviews for each paper is not scalable at our current rate of manuscript generation. We need to think radically about how to alleviate this, and part of that solution is going to involve AI.

And finally…
I wouldn’t normally recommend something this long as an ‘And finally…’, and I must confess I only listened to it on the Reader app, but it made me chuckle while I was walking the dog. I’m sure many of you will recognise at least some of the things this guy is talking about:
https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/

Need something shorter and even more provocative? I got ya:
https://chris-said.io/2024/06/17/the-case-for-criminalizing-scientific-misconduct/

Let's do coffee!
I’m travelling to the following places over the next few weeks. Always happy to meet and discuss anything related to this newsletter. Just reply to this email and we can set something up:
Berlin: 26-27 June
London: 28 June
Oxford: 10 July

Curated by Chris Leonard.
If you want to get in touch with me, please simply reply to this email.