Better Error Messages for LLMs (and Everyone Else)
Historically, error messages like "Query failed" or "Failed to create configuration" were tolerated, maybe even expected. Human users could usually fill in the gaps with context: logs, UI feedback, experience, intuition. Those messages weren't great, but they worked well enough.
Now that’s changing.
LLMs Don’t Guess Well
Large language models are increasingly the first line of interaction for users navigating software systems. Whether embedded in chat interfaces, docs assistants, or generic UIs like claude.ai, they’re trying to help users understand state with very little to go on.
Vague errors kill that. If an LLM sees only "Query failed," it has no reliable path forward. It'll guess: sometimes creatively, often incorrectly. Unlike humans, it can't check logs or intuit which part of the system it's in. The less specific the message, the more hallucination we invite.
Our Current Error Strategy: Bare Minimum
The current setup distinguishes between:
- user errors, which are thrown directly and shown as-is
- application errors, which bubble up and usually collapse into "HTTP 500 - Server Error"
Neither retains much nuance. When the message reaches the top, it’s often a single, context-free sentence.
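In code, that strategy often looks something like this hypothetical Python handler (all names are illustrative, not from any real codebase): user errors pass through verbatim, while everything else is flattened into one generic 500.

```python
class UserError(Exception):
    """Errors whose message is considered safe to show to the user as-is."""

def handle_request(action):
    # Hypothetical request handler illustrating the two-tier strategy.
    try:
        return {"status": 200, "body": action()}
    except UserError as e:
        # User errors pass through verbatim.
        return {"status": 400, "body": str(e)}
    except Exception:
        # Application errors collapse into a single context-free line.
        return {"status": 500, "body": "HTTP 500 - Server Error"}
```

Whatever nuance the original exception carried is gone by the time the response leaves the second `except` branch.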
Narrative Errors: Layers, Not Lines
We can fix this by treating error messages as narrative chains, not isolated strings. Even without a whole new exception framework, we can include hierarchical detail inline.
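A minimal Python sketch of that idea (function names and messages are hypothetical): each layer wraps the lower-level error with its own context instead of replacing it, so no detail is lost on the way up.

```python
def add_column(table, column):
    # Hypothetical low-level database call; fails with an engine-level message.
    raise RuntimeError("ALTER TABLE failed: 'new_column cannot be nullable'")

def align_schema(table, column):
    # Each layer adds its own context and keeps the original cause chained.
    try:
        add_column(table, column)
    except RuntimeError as cause:
        raise RuntimeError(f"Adding column '{column}' failed: {cause}") from cause
```

The `raise ... from` keeps the original exception reachable via `__cause__`, so logs and handlers can still see the full chain even though the top-level message already tells the whole story.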
Example from Keboola’s transformation pipeline:
- run transformation queries
- observe resulting tables
- compare against metadata
- attempt schema alignment (e.g. add a column)
- failure: “ALTER TABLE failed: ‘new_column cannot be nullable’”
Right now, users might see only step 5, but they can at least infer some context from the UI they encounter it in. A more useful message could be:
Finished running transformation queries. Starting output mapping. Comparing actual table structure to expected metadata. Attempting to add new column. Adding column failed: ALTER TABLE failed: ‘new_column cannot be nullable.’
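One way to produce a message like that is a small step runner that narrates every completed step and appends the cause on failure. This is a hedged Python sketch with illustrative names, not Keboola's actual code:

```python
def run_steps(steps):
    """steps: list of (description, callable) pairs.

    Runs each step in order. On failure, raises a RuntimeError whose
    message narrates every completed step before stating the cause.
    """
    done = []
    for description, action in steps:
        try:
            action()
        except Exception as cause:
            narrative = " ".join(f"Finished: {d}." for d in done)
            raise RuntimeError(
                f"{narrative} Failed: {description}: {cause}".strip()
            ) from cause
        done.append(description)
```

The mechanism matters less than the outcome: the final string carries the whole story, so both a human and an LLM can place the failure in its flow.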
That gives both humans and LLMs a chance. Even if the model doesn’t understand every term, it can place the failure in a broader flow and reason about it—or look it up.
Error Messages Are UI Now
As LLMs become core interfaces, every error string becomes part of your user experience. For the model, it might be the only context it gets. So we need to make those strings count.
Clear, specific, narrative-style errors aren’t just nicer. They’re required. Because “Query failed” isn’t just lazy—it’s now actively hostile to your most helpful user.