Hey everyone,
I've been spending a lot of time inside Glean Assistant lately — building agents, running complex multi-step workflows, and generally pushing it to its limits. Love what we've got. But I keep running into one friction point that I think has a clean solve, and I wanted to throw it out here.
The Problem
When I ask the Glean Assistant or an Agent to do something moderately complex — say, "pull all parent-child use cases from this tracker, enrich them with Confluence URLs, and export as a spreadsheet" — the AI often has to make assumptions about how I want things done. Sometimes it guesses right. Sometimes it doesn't, and I end up going back and forth correcting course after the fact.
We've all been there: the AI runs off, does a bunch of work, and then you go "no no no, I meant the other thing." Wasted tokens, wasted time.
The Proposal
When the Assistant or an Agent encounters ambiguity, instead of guessing or asking a wall-of-text question, it can pop up a clean, clickable clarification panel — radio buttons, multiple choice, one click per question, then it proceeds.
It would look something like:
How should I handle X?
○ Option A — description
○ Option B — description
○ Option C — description
○ Option D — Something else (free-text input)
Three clicks and the AI has perfect context. No typing. No misunderstandings. No wasted cycles.
So basically:
Add support for structured inline clarification panels within Assistant and Agent conversations. Specifically:
- Mid-conversation rendering — When the AI determines it needs disambiguation, it outputs a structured schema (radio buttons / checkboxes / dropdowns) that the frontend renders as a clickable panel inline in the chat.
- Selection feeds back as context — User clicks, selections auto-inject as context for the next AI step. No typing required.
- Skip/default support — Optional questions can be skipped, letting the AI proceed with sensible defaults.
- Progress indicators — For multi-question clarifications, show a simple step indicator (Question 1 of 3, etc.).
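To make the idea concrete, here's a rough sketch of what the structured schema and the "selections feed back as context" step could look like. All names here (`ClarificationPanel`, `selectionsToContext`, the field names) are hypothetical illustrations, not Glean's actual API — just one plausible shape for the contract between the model output and the frontend panel:

```typescript
// Hypothetical schema — field names are illustrative, not Glean's real API.
type ClarificationOption = {
  id: string;
  label: string;
  description?: string;
};

type ClarificationQuestion = {
  id: string;
  prompt: string;
  kind: "radio" | "checkbox" | "dropdown"; // how the frontend renders it
  options: ClarificationOption[];
  optional?: boolean;        // skippable → AI proceeds with a default
  defaultOptionId?: string;  // sensible default when skipped
};

type ClarificationPanel = {
  questions: ClarificationQuestion[];
};

// Turn the user's clicks into a compact context string that gets
// auto-injected into the next AI step — no typing required.
function selectionsToContext(
  panel: ClarificationPanel,
  selections: Record<string, string | undefined>
): string {
  return panel.questions
    .map((q) => {
      // Fall back to the default option when the question was skipped.
      const chosenId = selections[q.id] ?? q.defaultOptionId;
      if (!chosenId) return `${q.prompt}: (skipped)`;
      const opt = q.options.find((o) => o.id === chosenId);
      return `${q.prompt}: ${opt ? opt.label : chosenId}`;
    })
    .join("\n");
}
```

The model emits the `ClarificationPanel` JSON, the frontend renders it inline (with a "Question 1 of 3" indicator driven by the `questions` array length), and the click results flow back through something like `selectionsToContext` as grounding for the next step.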
Prototype
I actually went ahead and built a working HTML prototype of what this could look like in Glean's design language (screenshot below).
Built it right inside Glean Assistant using the HTML artifact capability, ironically enough. 😄
(screenshot attached)
Why This Matters
- Reduces wasted iterations — Front-loads ambiguity resolution instead of correcting after the fact
- Better for complex Agent workflows — Agents doing multi-step tasks (data extraction, document generation, cross-referencing) desperately need structured checkpoints
- Lower barrier for non-technical users — Clicking is easier than knowing how to phrase a clarification. This makes the AI more accessible to everyone, not just prompt engineers
- Competitive parity — Some AI-powered IDEs and coding assistants already offer structured clarification prompts like this
Who Benefits
Honestly? Everyone. But especially:
- Agent builders running multi-step automations
- Teams using Assistant for document generation and data workflows
- Anyone who's ever said "that's not what I meant" after an AI response
Would love to hear if others have hit this same friction. And if someone from the Glean product team is reading — the building blocks are already in your platform. This is an assembly job, not a ground-up build. 🙂