Scalene 56: Anonymous / Tsunami / Unreasonable

Humans | AI | Peer review. The triangle is changing.
Man, financial year end is a busy time. Apologies for the lack of updates recently, but I think I’ve got a plan. Here’s the first of what may be 2 (yes, count ‘em) updates this week. Rather than try to shoehorn 4 weeks’ worth of saved material into one post, I’m going to treat you all to a double whammy. One update below is included because it scratches a personal itch(!) I have around whether AI is best used for manuscript assessment, or whether it should be used upstream in defining the science that takes place. It can and will be both, but thinking about what science takes place first may lead to better results in evaluation later.
29th March 2026
1//
Saving peer review from AI slop requires getting rid of anonymous submissions and reviews
Togelius - 01 Mar 2026 - 9 min read
The anonymity that was meant to democratise science has instead hollowed it out, and AI is exploiting the void, according to Togelius. Reviewers have no recognition, no accountability, and no reason not to just ask Claude to do it for them. His fix? Remove anonymity as the default. Named authors stake their reputation on their work; named reviewers earn recognition. He pairs this with an argument for smaller, community-sized venues where people actually know each other, so that bad science — AI-generated or otherwise — is visible and socially costly rather than lost in a sea of anonymous submissions.
CL: The accountability argument for named reviews is underappreciated in the peer review reform conversation, which tends to fixate on tooling. If a reviewer's name is attached to their report, they have a reason to make it good — and a reason not to delegate it wholesale to an LLM. We've had open review experiments — PeerJ, F1000Research, some overlay journals — and the consistent finding is that early-career researchers are the most reluctant to sign their names to critical reviews of senior figures' work. An anonymous option, as he concedes, would still need to exist. But perhaps that's the wrong framing: not "should it be anonymous by default?" but "should reviewers be offered credit for named reviews, in a way that makes the choice meaningful?" That's something publishers could actually pilot.
2//
Academics Need to Wake Up on AI
Popular by Design - 02 Mar 2026 - 8 min read
Something to read that will possibly make you nod and frown in equal measure. From “AI can already do social science research better than most professors” to “The academic paper is a dead format walking” and “Academics hold AI to absurd double standards” - there are so many hot takes in such a small space your computer may overheat. I loved it - not least because “Apart from the doomsday scenarios, AI is genuinely exciting”.
3//
An AI Tsunami is about to Hit Science
Cesarhidalgo.com - 05 Mar 2026 - 8 min read
I don’t see as many descriptions of people using automated review tools as I would like. This account matches my own experiences: the author creates an AI-generated paper and then sends it to - in this case - reviewer3.com. Some improvements are suggested, incorporated, and then fed back into reviewer3. Thus begins a potentially endless cycle of requested improvements, as the automated tools are always finding some fault - sometimes ones they’ve even suggested themselves. Reviewer3 is not alone here; it’s more that there is no stopping point with automated systems as there is with human review, where a truce on certain matters can sometimes be declared between author and reviewer.
4//
Claude Code 27: Research and Publishing Are Now Two Different Things
Scott Cunningham - 02 Mar 2026 - 14 min read

This is an in-depth analysis of how LLMs are about to change academic publishing. Not just authoring or reviewing, but the whole shebang. Interestingly this analysis is based on economics journals, which seemingly have fixed publication numbers and (the horror!) submission fees. Some of these findings won’t apply to infinitely expandable OA titles with no submission fees - but the timescale for disruption (3 months, not 3 years) feels about right:
Even with AI screening at the desk, the noise doesn’t disappear — it most likely just migrates. Perfect automated screening can answer “is this paper competent?” But it can’t answer “is this paper more important than that one?” And when 20,000 competent papers are competing for 3,800 slots, the final decision rests on something other than quality — editor taste, topic fashion, referee mood, institutional priors. Below 1% acceptance, you’re selecting among a crowd of qualified papers using criteria that are increasingly arbitrary.
5//
UnreasonableLabs
unreasonablelabs.ai - Mar 2026
I’m increasingly warming to the opinion that the real value of AI in science isn’t for peer review or writing manuscripts, but in defining (or at least suggesting) what science is worked on in the first place. This post introducing UnreasonableLabs makes for compelling reading:
Our system simulates how [e.g.] changes in cortical tension reshape individual cells, propagate stresses across a tissue sheet, and ultimately drive a macroscopic fold, then connects those predictions to measurable perturbations such as laser ablation or optogenetic control of myosin. Rather than the conventional way of relying on correlations across datasets, it reasons forward from physical mechanisms by carrying out virtual experiments, making causal predictions that can be tested through targeted perturbations.
The website looks like it was influenced too strongly by the world of DeSci (yes, that’s a design burn) but the principles behind it are exciting to me at least. Forgive this slight detour from peer review and think of it as a way of improving what comes into the peer review pipeline.
And finally…
It feels so long ago now, but I promised to include a link to Nikesh Gosalia’s talk at R2R on bias in AI in academic workflows - so here it is: https://youtu.be/ZYIsSTqUiig?si=GlYznyeGoAf4vnkV
Another YouTube link - this time from The Secret Editor on some of the ways AI is being used by journals, often without authors knowing about it: The Brutal Truth About Peer Review, Profit, and “Bonkers” Publishing.
A Scholarly Kitchen post from my old colleague Ashutosh on The Value Challenge in Scholarly Publishing.
Yet more YouTube: AAAS Editors-in-Chief address tough questions facing scientific journals.
This was a bit of fun for the linguistically-minded amongst us: How far back in time can you understand English?
Let’s chat
I’m always happy to chat - just reply to this email.
Curated by me, Chris Leonard.