One of our most recent IG Community Roundtables focused on the records-worthiness of AI prompts and outputs, and continued what had already been a pretty spirited discussion around these parts. Perhaps not surprisingly, there is still a lot of uncertainty surrounding the issue. There was, however, one answer everybody seemed to agree on:
It depends.
As one participant framed it: “If an employee searches Google, is that a record? Probably not. But what if an HR employee prompts an AI chatbot with ‘My boss is harassing me – what should I do?’ That’s a different story entirely.”
The distinction lies in intent, purpose, and evidentiary value. A casual query about Cleveland’s population probably isn’t a record. But using AI to summarize meeting transcripts, redact sensitive content, or inform business decisions creates potential legal and compliance obligations. For instance, if an AI-driven redaction becomes legally relevant later, the prompt itself might become evidence because it speaks to state of mind.
You may have noticed that there are a lot of “ifs” and other conditionals in the preceding paragraphs, and that’s because it’s still early days in the world of practical AI implementation – especially as it relates to records and information governance. So what’s an organization to do? A few things, maybe:
- Assess your AI applications individually, taking into consideration industry requirements, regulatory exposure, and business value. Risk tolerance and use case matter, and you have to be sure you can defend your choices should push come to shove.
- Don’t let perfect be the enemy of good. We didn’t get Slack or Teams governance perfect right away (and still haven’t, to be honest), and AI is no different. Start with reasonable policies and iterate from there.
- Capture what matters. Focus on prompts and outputs that may have evidentiary value, inform decisions, or create compliance obligations. Not everything needs to be preserved forever – a principle that, of course, isn’t limited to AI-involved applications.
- Invest in training. Policies fail without user education. Help employees understand when and how to use AI, and make sure they’re aware of any potential recordkeeping ramifications.
Having said all that, a second bit of clarity did emerge from our Roundtable: organizations that establish thoughtful AI governance frameworks now will be far better positioned than those that wait. It isn’t just about compliance – it’s about reducing all manner of risk while simultaneously unlocking value in terms of getting things done.
Just as is true with every other piece of technology that has come down the pike.
Want to know more? Click here to schedule some time to talk. No charge; it’s all just part of the service.
