Currently, when Glean Assistant is processing a response, users only see a generic loading animation. This can lead to confusion, making it appear as though the software is unresponsive or hung. In contrast, other assistants such as ChatGPT, Gemini, Claude, and Grok provide transparency by showing the steps they're taking while generating a response.
I've received multiple reports from our users expressing frustration with the current experience, and I believe implementing a feature to display the assistant's thinking process would address these concerns effectively.
Not only would this improve user engagement, it would also aid in troubleshooting by allowing users to identify the step at which the assistant encountered an issue.