Discussion about this post

grfaith:

Good post. I imagine that citations are meant to be a good-faith demonstration that someone worked all of this out and isn't just guessing in general. The fear of bad AI citations is that the writer themselves didn't think all of this through and was relying on AI logic (which we already know is wobbly and fraught).

If we run with this idea, then it seems like the AI is kind of a breach of contract. Like discussing whether the bun you had for breakfast was really made by grandma or if it was made in a factory.

We have some notions of craft vs. mass-produced in materials and media, but I don't think we have a really good way to think about mass production of ideas. That's going to be tricky, in part because being an intellectual has had its own cachet and status. Introducing mass production of ideas destabilizes that social economy.

Not sure what I think about that quite yet.

GREGORY MCISAAC:

Science "...is strengthened when claims help us to make sense of the world around us, and that is integrated into shared understanding."

I suggest inserting the phrase "empirically validated" to modify "claims," and maybe adding "...to enhance broadly beneficial outcomes" at the end. I accept that "beneficial outcomes" is not fundamental to science, but beneficial outcomes would likely help strengthen science within the broader socio-political context.

Otherwise, I agree with much of your critique of the Nature paper, but your critique seems to be something of a strawperson argument because the paper does not appear to be proposing to replace science, or even lit reviews, with LLMs. Rather, it seems to be a limited comparison of LLMs under controlled conditions. Maybe at some point LLMs will be able to produce a decent first draft of a lit review, which can then be edited by human experts with a range of experience. The LLMs might be able to identify literature that would be overlooked by human experts.
