• Scalene
  • Posts
  • Scalene 41: Trustworthy / xPeerd / Epistemicide

Scalene 41: Trustworthy / xPeerd / Epistemicide

Humans | AI | Peer review. The triangle is changing.

Greetings from sunny Marrakech, where this issue of the newsletter is being crafted despite extreme heat, poor internet, and, frankly, a little over-indulgence at an ill-advised pool party last night. Will I ever learn? Forgive the somewhat sparser nature of everything this week. I promise to be more sensible in future.

20th July 2025

1//
Toward Trustworthy Peer Review

Relational AI Ethics - 12 July 2025 - 6 min read

You won’t have been able to miss the conversation around prompt injections in preprints to attempt to influence subsequent peer review reports which rely on LLMs. Indeed, I was a part of that conversation myself. However, this is a nuanced take on the motivations behind the authors of those papers and how there are many unexplored avenues of regulated AI use that we are just not looking at right now:

…we believe the current framing of this issue as "manipulation" and "cheating" misses a crucial opportunity to address deeper structural challenges facing academic publishing today. Rather than focusing solely on exposing problematic behavior, we propose a more constructive path forward that acknowledges the complex realities of AI integration in research workflows while strengthening the ethical foundations of peer review.

2//
ReviewerZero & xPeerd

We’ve highlighted ReviewerZero in Scalene before, but they seem to be stepping up their AI offering recently:

Every academic knows the feeling: waiting months for mediocre feedback while reviewers juggle impossible workloads. Here's how ReviewerZero's purpose-built AI delivers structured critique, replicability predictions, and journal recommendations.

And in a similar vein, a new service (to me at least) is xPeerd from Knowdyn. Their website is excellent and identifies all the relevant pain points in peer review, and offers their automated review system as an answer:

3//
AI Reviewer

Github -July 2025 - 3 min read

I can’t remember how I came across this, but Jusi Aalto has given us something wonderful here:

This system implements a specialized multiagent architecture comprising five expert agents, each focusing on a distinct aspect of empirical research evaluation:

Theoretical Framing & Hypothesis Development Specialist - Evaluates theoretical grounding and hypothesis development
Empirical Identification & Methods Specialist - Assesses causal identification strategies and econometric rigor
Conceptual Clarity & Presentation Specialist - Ensures readability and presentation quality
Economic Significance & External Validity Specialist - Analyzes practical implications and generalizability
Paper Structure & Presentation Specialist - Verifies adherence to empirical research conventions

An Editor Agent synthesizes feedback from all specialists into a coherent, actionable review letter.

4//
Standardized to Death: AI, Academic Gatekeeping, and the Epistemicide of Marginalized Knowledge

Isaac Andrew Sanders - 16 July 2025 - 26 min read

An important read, and not something I’d really considered in too much depth previously. But we should acknowledge that training AI to deliver peer review reports based on previously published work (and reviews) is potentially troublesome:

The Nature headline stopped me cold: "AI is transforming peer review — and many scientists are worried." Published just months ago, the article revealed that artificial intelligence software is "increasingly involved in reviewing papers, provoking interest and unease." What appeared as an efficiency breakthrough felt like witnessing the automation of centuries-old violence. Major publishers like AIP Publishing are piloting AI tools for peer review, while others explore "various potential use cases for AI to strengthen peer review." The transformation is already happening.

5//
Canadian Association for Food Studies issue embracing guidelines for AI in peer review

I’ve not seen anything like this before, but kudos to the team behind CFS for the forward-thinking nature of accepting that some authors and reviewers use LLMs in the creation and review process - and rather than rule it out completely, they invite the authors and reviewers to explain how they used AI.

The landscape of artificial intelligence (AI) usage in scholarship and publishing is evolving. The editorial and management team of CFS/RCÉA acknowledges that AI tools can offer value in many scholarly processes, including ideation and concept refinement, data analysis, image/figure/table creation, audio/video production, linguistic translation, language/grammar correction, and identifying subject-specific resources. Nonetheless, AI tools also generate errors and falsehoods, can compromise the rights and licenses of authors and creators, and have themselves been created within historical biases, privileges, and prejudices. The environmental impact of the use of AI, including water and energy consumption, is also of significant ecological concern. Within this context, CFS/RCÉA accepts that AI tools may be used in the creation of material submitted to the journal, under the following conditions:

- The human authors whose names are attached to the submission must have played the primary role in the conception and construction of the content.

- A disclosure statement on the use of any AI tool(s) must be provided at time of submission.

This policy covers content submitted for publication as well as texts submitted by peer reviewers.

Let’s chat
Many of you may know I work for Cactus Communications in my day job, and one of my responsibilities there is to help publishers speed up their peer review processes. Usually this is in the form of 100% human peer review, delivered in 7 days. However, we are now offering a secure hybrid human/AI service in just 5 days. If you want to chat about how to bring review times down with either a 100% human service, or you’re interested in experimenting with how AI can assist, let’s talk: https://calendly.com/chrisle1972/chris-leonard-cactus

Curated by me, Chris Leonard.
If you want to get in touch, please simply reply to this email.