It is my pleasure and honor to introduce to you a new buzzword. It’s “Ruminant AI,” which I invented just last week and named after animals like cows and sheep that chew on previously ingested material (the cud). Here’s why:
AI engines like ChatGPT ingest information from as many sources as they can be provided with, including notoriously inaccurate and/or unvetted places like the Internet. They then produce new information based on what they’ve ingested – and because some of that source material is of, um, questionable quality, the new information is often wrong. (This is what’s known as “hallucination” in AI circles.)
Ruminant AI describes what happens when those same engines include that new information (which they themselves created) as a source to ingest the next time around – meaning that they end up digesting their own inputs multiple times, just as ruminant animals do.
This isn’t a problem as long as the information being ingested is accurate. But how do we know? Who do we ask? One New York attorney asked ChatGPT whether the results of the research he’d asked it to do were accurate, and ChatGPT said “yes.” Except they weren’t, so that didn’t work out so well. And now that attorney finds himself shoveling out from under a pile of bovine ordure.
See what I did there?