Scalene 55: Abundance / Shoulders / whodunnit

Humans | AI | Peer review. The triangle is changing.
I was at the Researcher to Reader (R2R) conference in London this week, and one presentation stood out for me (and others): a talk from my colleague Nikesh Gosalia on the danger of applying AI models - trained on Western, English-language content - to global research. I urge you to watch the presentation when it becomes available, because it had me questioning a lot of presumptions I had made (or, more accurately, questions I hadn’t asked myself) about the true neutrality of LLMs in a worldwide context. I’ll include a link in a future Scalene when it’s live.
1st March 2026
1//
From Scarcity to Abundance: Academic Knowledge Production in the Age of Artificial Intelligence
SSRN - 25 Feb 2026 - 12 min read
The institutional architecture of modern academia rests on scarcity: research is costly to produce, slow to publish, and difficult to evaluate. Artificial intelligence is dismantling this logic. Drawing on recent empirical evidence showing that AI-assisted researchers produce 50-60% more papers, that AI-generated content pervades up to one-fifth of publications in some fields, and that autonomous AI research systems are in active development, this essay argues that academic knowledge is transitioning from scarcity to abundance. We contend this transition will be as consequential for scholarship as the digital revolution was for media. To navigate it, we propose a three-layer framework for the post-scarcity knowledge economy: Layer 1 (Generation), where AI systems produce research at scale; Layer 2 (Filtration), where AI curates output for domain specialists; and Layer 3 (Epistemic Governance), where human academics serve as the scarce resource in an abundant system, formulating consequential questions, exercising ethical judgment, interpreting findings in context, and directing the research agenda. When production becomes abundant, value migrates to governance. The academy must proactively redesign its institutions around this new value locus before AI renders its current structures obsolete.
CL: It’s nice to get a view from the social sciences on how AI is disrupting academic research there, as well as in the physical and biological sciences. The lessons, however, are similar: the cost and effort of producing a unit of academic research is tending towards zero, so we now need to focus our efforts on filtration and governance. Many papers predict that peer review is at a crisis point, but this essay reimagines what we need to do in a world where traditional review models cannot scale to overwhelming AI output. Whether we can get there is another matter.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6300605
Claude peer review: https://claude.ai/public/artifacts/fe1ac9b7-336d-49d4-831f-c018a236725f
2//
Shoulders
shoulders.rs - 1 Mar 2026
I get to play with many automated review platforms, but this one stood out to me on first examination. Shoulders is positioned as an ‘AI Workspace for Researchers’, claiming to assist with writing, reference management, coding - and, of course, peer review (advice: pre-review your own work before submission). The review I got back was incredibly useful: a traditional-style report at the top, followed by a rendering of the document with inline comments where appropriate. I haven’t explored the other features yet, but so far the reviews look a step above similar researcher-facing efforts in this space. I also like the ‘bring your own keys’ pricing model.
3//
How AI can improve the quality of peer review
Phys.org and various - 24 Feb 2026 - 3 min read
Lots of buzz about a Nature Machine Intelligence paper this week entitled “A large-scale randomized study of large language model feedback in peer review” - but it was something we’d already covered in April (Scalene 35 if you want to go back and check). The essence of the paper is that an AI system gave peer reviewers of ICLR 2025 submissions feedback on their own reviews, flagging vague or incorrect comments and ‘unprofessional’ tone or remarks. Reviewers were then invited to edit their reviews. Just over a quarter did, producing slightly longer reviews and showing increased engagement during rebuttals.
https://phys.org/news/2026-02-ai-quality-peer.html
Nature MI: https://doi.org/10.1038/s42256-026-01188-x
Preprint: https://arxiv.org/abs/2504.09737
Scalene 35: https://scalene-peer-review.beehiiv.com/p/scalene-35-persistent-prompting-reviewer-feedback-fda
CL: I’ve been thinking a lot recently about the optimum amount of information to disclose to an author in a peer review report. I remember reading something about 900-1000 words being the best received by authors, so I wonder if any increase in perceived quality is, at least in part, due to the increased length of the edited reports?
4//
In hybrid human-AI systems, who’s driving?
LinkedIn - 28 Feb 2026 - 2 min read

Forgive me the title - LinkedIn updates don’t really come with titles - but I thought it sums up Nicolò Zarotti’s post quite well (as does the Gemini-derived image in his post). He was surprised - and not in a good way - when a journal auto-enrolled him in an AI peer review pilot without his consent. While sympathetic to its stated aims of extracting claims and finding gaps in the manuscript, he was exasperated that the AI tool asked him very direct questions about what to look for in the manuscript. He poses four very good questions at the end for any publisher to consider.
5//
Expectations for Mendelian randomization research in PLOS One
PLOS One - 18 Nov 2025 - 3 min read
This editorial from the end of last year describes (without naming AI directly) the effect it has had on researchers’ ability to concoct feasible-sounding research papers without necessarily doing the work (the inference is mine; the editorial is non-judgemental, but I am). At the very least, these studies are conducted only in silico and never explored further.
Since 2022, the publication of Mendelian randomisation (MR) papers has doubled year-on-year. In response, desk editors have been given greater powers to reject unsuitable papers; by April 2024, almost 100% of rejections were made by editorial staff.
MR papers are relatively easy to construct from massive datasets (does smoking cause cancer? does drug A lead to condition B?), so this should be a warning for other fields that notice upticks in certain kinds of studies. Is everything what it seems? If not, stricter protocols are required.
And finally…
Artificial intelligence–assisted statistical analysis and statistical review: evidence (2023–2025) and implications for internal medicine - statistical review by AI is improving rapidly, but still needs professional oversight.
AI is not a peer, so it can’t do peer review - frames peer review as a human conversation.
Will AI Help or Hinder Scientific Publishing? - contains some good links to back up the concepts of Nikesh’s talk in the intro.
A peer review whodunnit: was it AI after all? - a fun investigation into a peer review report that was seemingly AI-generated. The inconclusive ending, however, reflects a real-world problem we can’t yet solve.
Let’s chat
I’m always happy to chat - just reply to this email. Or I’ll be at London Book Fair in mid-March if you want to meet IRL.
Curated by me, Chris Leonard.
If you want to get in touch, please simply reply to this email.