Hi Glean community,
I’ve been using Glean Canvas with the agent to draft and refine various documents. While the experience is promising, I’ve encountered significant friction when the Canvas and the agent fall out of sync.
Here are the two core issues I experienced:
1. Fragile Global Search, Replace, and Refactor
- The Issue: The agent struggles with broad, multi-instance changes (e.g., "Rename 'Security Architecture' to 'IT & Security' everywhere" or "Generalize this policy").
- The Cause: It appears the underlying tool relies on exact, single-instance string matching (sensitive to whitespace/formatting) rather than semantic or global find-and-replace.
- The Impact: Instead of a single "refactor," the agent attempts a chain of micro-edits. If one fails, the document is left in an inconsistent state, requiring me to manually point out every missed instance.
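To make the failure mode concrete, here is a minimal Python sketch of the difference I suspect is at play. I don't know the tool's actual internals, so the `str.replace`-style behavior is purely an assumption for illustration:

```python
import re

doc = (
    "See the Security  Architecture team.\n"   # note the double space
    "Security Architecture owns this policy."
)

# Exact, single-instance matching (what the agent appears to do):
# only the first exact occurrence is touched, and the double-space
# variant on line 1 never matches at all.
naive = doc.replace("Security Architecture", "IT & Security", 1)

# A whitespace-tolerant global replace fixes every instance in one pass.
robust = re.sub(r"Security\s+Architecture", "IT & Security", doc)
```

With the naive approach, one formatting quirk is enough to leave the document half-renamed; the single-pass global replace cannot end up in that inconsistent state.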
2. Silent Failures Leading to "Hallucinations"
- The Issue: When an edit fails (due to a text mismatch), the agent does not report the failure to me. Instead, it frequently claims, "That clause is removed now," or "No references remain."
- The Cause: There appear to be no guardrails preventing the agent from reporting success when the underlying artifact_edit tool throws an error.
- The Impact: I cannot trust the agent's confirmation. I have to manually verify every change, which defeats the purpose of using an AI assistant for policy reviews.
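The guardrail I'd expect is small. As a sketch (all names here are hypothetical, not Glean's actual API): derive the status message from what actually happened to the document, so a no-op edit can never be reported as a success:

```python
def apply_edit(document: str, old: str, new: str) -> tuple[str, str]:
    """Apply a replacement and return (new_document, status_message).

    The status message is computed from the outcome itself, so the
    agent cannot claim success for an edit that silently no-opped.
    """
    if old not in document:
        # Surface the failure instead of swallowing it.
        return document, "Change failed; original text not found."
    updated = document.replace(old, new)
    return updated, f"Replaced {document.count(old)} occurrence(s)."
```

If the tool returned structured results like this, the agent would have no plausible path to asserting "That clause is removed now" after a failed match.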
Suggestions / Requests
Based on this, the following improvements would make the workflow much more viable:
- Reliable "Replace All": A primitive that allows for global text replacement without relying on fragile, line-by-line micro-edits.
- Error Transparency: If the tool fails to find/match text, the agent should explicitly report: "Change failed; original text not found." It should never assert success unless the tool confirms a change.
- Preview Capabilities: A way to fuzzy-find and preview matches before committing to a bulk change.
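For the preview idea, even the standard library gets surprisingly far. A rough sketch of what I mean (again just illustrative, not a proposal for the exact implementation), using Python's difflib to surface approximate matches for review before a bulk change is committed:

```python
import difflib

def preview_matches(document: str, query: str, n: int = 10,
                    cutoff: float = 0.6) -> list[str]:
    """List lines that fuzzily match the query, so a reviewer can
    confirm the targets before committing a bulk replace."""
    return difflib.get_close_matches(query, document.splitlines(),
                                     n=n, cutoff=cutoff)
```

A flow like "here are the 4 places I'd change; proceed?" would turn risky bulk edits into something reviewable.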