Gell-Mann Amnesia and ChatGPT

I thought ChatGPT would be a great tool for hunting down obscure references in Pali and other more academic material surrounding Buddhism. An LLM should excel at this sort of thing: fetching answers from a ridiculously large textual corpus.

Alas, I get laughably bad results. Even worse, there’s no indication of the confidence level of the results.

An example: I asked ChatGPT where the phrase “Upanīyati loko” comes from in the Pali canon. The answer:

“Upanīyati loko” is a phrase from the Pali Canon, specifically from the Dhammapada, which is a collection of sayings of the Buddha in verse form. The phrase is found in verse 174 of the Dhammapada. Here is the verse along with its translation:

Pali: “Upanīyati loko, upanīyati lokasmiṃ; upanīyati jīvo, upanīyati jīvasmiṃ.”

Translation: “The world is led by the mind, the world is led by the world; the mind is led by the mind, the mind is led by the mind.”

Luckily I caught this as being completely out of left field, because the translation, which is (sort of) the first verse of the Dhammapada, was so off. I went and looked up verse 174. No match.

Using a conventional search engine, I eventually found that the right reference is MN 82.

I keep getting this sort of nonsensical but entirely confident answer from ChatGPT about more technical Buddhist questions. Maybe Theravāda Buddhism is just too obscure; I’d be curious whether others doing Christian research run into similar problems.

This gets me thinking about Gell-Mann Amnesia. We’ve really got to be careful trusting this contraption, and it’s concerning just how quickly people are turning to ChatGPT for very critical things.