Scalene 14: ScienceCritAI (again) / measures of novelty / a bit of fun
Humans | AI | Peer review. The triangle is changing.
My usual filters on social media and the wider web are not really working this week, having been overloaded with news of the coming Peer Review Week. I won’t add to this too much, but see issue 13 for where to catch me speaking online. Oh, and look out for me on The Scholarly Kitchen this week with a personal view on how AI is best used in reviewing academic research today. Anyway, on with the news…
22nd September 2024
// 1
The Story of @ScienceCritAI
Medium - 21 September 2024 - 7 min read
It’s just plain weird to be this excited by something so nerdy on a Sunday morning, but here I am (see above). A few weeks ago I highlighted the very secretive and interesting Twitter account @ScienceCritAI - and yesterday the creator revealed himself and explained exactly how the whole process works.
https://medium.com/@billcockerill/from-pdfs-to-tweets-how-tools-like-sciencecritai-could-transform-scientific-peer-review-142786283ecf
I’m not going to quote the interesting bits from the post above, cos that would be almost all of it, but I am going to highlight this nice closing conclusion, which perfectly summarises where we are now with AI and peer review: “This tool is about an AI-human partnership: AI does the grunt work, humans make final decisions.”
Still no public access yet though 🙁.
// 2
A Comprehensive SWOT Analysis of AI and Human Expertise in Peer Review
The Scholarly Kitchen - 12 Sept 2024 - 4 min read
My Cactus colleague, Roohi Ghosh, recently posted a very balanced view on TSK about how AI can help, and potentially harm, the peer review process. Her SWOT analysis shows that we can’t afford to ignore the enormous scale and speed benefits of AI - and that this actually gives us an opportunity to reinvent what we mean by peer review.
https://scholarlykitchen.sspnet.org/2024/09/12/strengths-weaknesses-opportunities-and-threats-a-comprehensive-swot-analysis-of-ai-and-human-expertise-in-peer-review/
An interesting comment from Sean Power at the end reveals some emerging attitudes to the ‘free labour’ issue of peer review amongst academics. This points to something I’m coming to believe strongly: that peer review needs to be professionalized to survive in its current form (paying reviewers, giving them sabbaticals just to do reviews).
This is relevant to the AI question because, amongst academics jaded by the exploitative pressures of peer review, I would imagine some don’t feel any kind of moral responsibility for the review. They’ll check it’s as right as can be, but I doubt they’ll feel they should do even more than what’s expected.
IMO, the responsibility for dealing with this AI problem must sit with those paid to do their part of the process.
// 3
Scientific Peer Review in an Era of Artificial Intelligence
LinkedIn - 18th September 2024 - 27 min read
I highlighted the book this is from (and indeed this specific chapter) a few newsletters ago, and now the author has made the full text available on LinkedIn. Not sure if this is allowed or how long this will remain online, but you can ‘preview’ the whole thing here:
https://www.linkedin.com/posts/sm-kadri-0a28bb14_book-chapter-titled-scientific-peer-review-activity-7242004974807740416-MBpH?utm_source=share&utm_medium=member_desktop
// 4
DeSci Labs measuring novelty
x.com - 20 Sept 2024 - 1 min read
A common charge against using AI (or specifically, LLMs) in peer review is that it cannot reliably measure the novelty of a paper, no matter how well it analyses the manuscript in isolation. A (ahem) novel way of solving this problem has recently been announced by DeSci Labs.
https://x.com/DeSciLabs/status/1837062536807542937
They use two different metrics - content novelty and context novelty - to determine if the research presented is moving the field on incrementally or seismically. The foundation work for these metrics is available in a fascinating Nature Communications paper entitled “Surprising combinations of research contents and contexts are related to impact and emerge with scientific outsiders from distant disciplines”
https://doi.org/10.1038/s41467-023-36741-4
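To make the idea concrete, here is a minimal, hypothetical sketch of combination-based novelty scoring in the spirit of that paper - not DeSci Labs’ actual method or the authors’ exact formula. It treats each paper as a set of concepts and scores novelty as the average ‘surprise’ of its concept pairs relative to a toy corpus, so rare or unseen combinations score higher:

```python
from itertools import combinations
from math import log2

# Hypothetical toy corpus: each paper reduced to a set of concept keywords.
corpus = [
    {"peer review", "llm", "bias"},
    {"peer review", "metrics", "bias"},
    {"llm", "metrics", "novelty"},
    {"novelty", "citation", "impact"},
]

def pair_counts(papers):
    """Count how often each pair of concepts co-occurs across the corpus."""
    counts = {}
    for paper in papers:
        for a, b in combinations(sorted(paper), 2):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

def novelty(paper, papers):
    """Mean surprise (-log2 of smoothed frequency) of the paper's concept pairs."""
    counts = pair_counts(papers)
    total = sum(counts.values())
    pairs = list(combinations(sorted(paper), 2))
    # Add-one smoothing keeps unseen combinations finite but expensive.
    surprises = [
        -log2((counts.get(p, 0) + 1) / (total + len(pairs))) for p in pairs
    ]
    return sum(surprises) / len(surprises)

# A combination common in the corpus scores lower than an unseen one.
common = novelty({"peer review", "bias"}, corpus)
surprising = novelty({"citation", "llm"}, corpus)
```

The real measures work over much richer signals (cited journals for context, text content for content novelty), but the core intuition - score a paper by how atypical its combinations are relative to the field - is the same.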
// 5
Can AI be used to assess research quality?
Nature Index - 18 Sept 2024 - 9 min read
This contribution, to the excellent Artificial Intelligence special issue of Nature Index, will be of interest to anyone who has made it this far down the email. This quote gives you a flavour of what’s in store when you click that link:
Notably, the AI boom also coincides with growing calls to rethink how research outputs are evaluated. Over the past decade, there have been calls to move away from publication-based metrics such as journal impact factors and citation counts, which have shown to be prone to manipulation and bias. Integrating AI into this process at such a time provides an opportunity to incorporate it in new mechanisms for understanding, and measuring, the quality and impact of research. But it also raises important questions about whether AI can fully aid research evaluation, or whether it has the potential to exacerbate issues and even create further problems.
And finally…
Hello to all new subscribers (and thanks to James Butcher for the shoutout in his newsletter last week). This is where the ‘other’ & more light-hearted stuff goes:
Not peer review, but interesting nonetheless: The Literature Review Network: An Explainable Artificial Intelligence for Systematic Literature Reviews, Meta-analyses, and Method Development: https://arxiv.org/abs/2408.05239
I try to stay off twitter/x other than for research for this newsletter, but it’s accounts like this which mean I can’t leave completely: https://x.com/ThreatNotation
And finally, finally - a bit of fun. My new and improved Bullshit Journal Generator is online. Publishers: gain inspiration for your next journal launch here: http://cjleonard.github.io
Let's do coffee!
It’s conference season again. I’m setting up meetings at STM & Frankfurt Book Fair over the coming weeks, so please get in touch (reply to this email) if you want to say hi at either of those events.
Curated by Chris Leonard.
If you want to get in touch with me, please simply reply to this email.