Overview
When Assistant is set to Auto, users cannot see which model actually generated a given response. That makes it harder to understand differences in response quality, tone, and speed, and harder to give useful feedback.
Proposed Behavior
A simple option would be to show a small label after each response, for example:
- Model used: GPT-5.4
- Model used: Claude Sonnet 4.5
- Model used: Auto → GPT-5.4
A slightly richer version could also show this in a tooltip, response details panel, or chat metadata so the UI stays clean.
Bonus:
- Show the model for each response in the thread, not just the whole chat
- Make it possible to copy that metadata when sharing feedback with admins or support
- Optionally expose whether the response used Fast, Thinking, or another reasoning mode alongside the selected model
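To make the proposal concrete, here is a minimal sketch of how a per-response label could be derived from chat metadata. All field and function names here are hypothetical; the actual metadata shape would depend on the product's internals.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponseMetadata:
    # Hypothetical fields, not an actual API.
    selected_model: str                    # what the user picked, e.g. "Auto"
    resolved_model: str                    # the model that served the response
    reasoning_mode: Optional[str] = None   # e.g. "Fast", "Thinking"

def model_label(meta: ResponseMetadata) -> str:
    """Build the small per-response label proposed above."""
    if meta.selected_model != meta.resolved_model:
        # Auto routing: show both the selection and the resolved model.
        label = f"Model used: {meta.selected_model} → {meta.resolved_model}"
    else:
        label = f"Model used: {meta.resolved_model}"
    if meta.reasoning_mode:
        label += f" ({meta.reasoning_mode})"
    return label
```

For example, `model_label(ResponseMetadata("Auto", "GPT-5.4"))` would yield `"Model used: Auto → GPT-5.4"`, and the same string could be attached to each response in the thread and copied when filing feedback.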