It is my pleasure and honor to introduce to you a new buzzword. It’s “Ruminant AI,” which I invented just last week and named after animals like cows and sheep that chew on previously ingested material (the cud). Here’s why:
AI engines like ChatGPT ingest information from as many sources as they can be provided with, including notoriously inaccurate and/or unvetted places like the Internet. They then produce new information based on what they’ve ingested – and because some of that source material is of, um, questionable quality, the new information is often wrong. (This is what’s known as “hallucination” in AI circles.)
Ruminant AI describes what happens when those same engines include that new information (which they themselves created) as a source to ingest the next time around – meaning that they end up digesting their own inputs multiple times, just as ruminant animals do.
This isn’t a problem as long as the information being ingested is accurate. But how do we know? Who do we ask? One New York attorney asked ChatGPT whether the results of the research he’d asked it to do were accurate, and ChatGPT said “yes.” Except they weren’t, so that didn’t work out so well. And now that attorney finds himself shoveling out from under a pile of bovine ordure.
See what I did there?
_______________
* If you liked this post, please “Like” it and tell all your friends! *
Steve Weissman, The Info Gov Guy™ • steve@hollygroup.com • 617-383-4655 • Principal Consultant, Holly Group • Member, AIIM Company of Fellows • Recipient, AIIM Award of Merit
When does the first lawsuit come against ChatGPT, or other AI engines, for integrating copyrighted material into their responses without attribution? Because the information came from an AI response, there is simply no way to identify previously copyrighted content – from the Internet or elsewhere – that has now become ruminant AI and may be subject to litigation.
Nice buzzword!! It will definitely catch the reader’s attention!! How is “Ruminant AI” different from “hallucination” (LLMs generating text that seems contextually correct but is in fact factually incorrect)?
Amitabh
@Amitabh Thanks for the question, and the kind words! The difference as I see it is that to be ruminant, the engine ingests the material it created itself, along with everything else that leads to hallucination, and essentially digests it again. In so doing, it further dilutes whatever level of accuracy it originally had. Put another way, as someone recently did very graphically (sorry for being so graphic!), “it pees in its own pool.”
@Bob Hey, great to ‘see’ you! Thanks for commenting. And pretty much to your point, these sorts of intellectual property issues are precisely why the Hollywood writers, and now the actors’ union too, are currently on strike. It’s a ‘thing,’ no doubt!