[GH-ISSUE #8125] [FR] Improve UI support for reasoning models (local AI) #3575

Open
opened 2026-03-23 21:31:26 +00:00 by mirror · 0 comments
Owner

Originally created by @yar85 on GitHub (Jul 17, 2025).
Original GitHub issue: https://github.com/AppFlowy-IO/AppFlowy/issues/8125

Description

Some models, such as Qwen3, respond with reasoning text between `<think>` tags. Currently this part of the response is displayed as-is, making the chat difficult to read.
It would be great to keep thinking mode enabled (it improves the answers) while hiding the thoughts in the AI responses.

Impact

This improvement will benefit all users running local AI models with reasoning capabilities.

Additional Context

With Ollama, this can be achieved in one of two ways:

  • stripping `<think>` tags completely; how to do this depends on how the app interacts with Ollama: run the model with the `--hidethinking` parameter (CLI), ignore the `thinking` field in the response object (JSON API), or replace the tags with an empty string using a regex;
  • wrapping the reasoning text in an expandable section above the chat response, similar to ollama-webui and other LLM client apps.

The second approach is preferable, of course, but if it proves inconvenient in some parts of the UI, the first is also acceptable.
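Both approaches start from the same parsing step. Below is a minimal sketch (not AppFlowy's actual implementation, which is Flutter/Rust) that splits a response into its reasoning and answer parts; the reasoning can then either be dropped (first approach) or rendered in a collapsible section (second approach). It assumes the response arrives as a complete string; streamed output would need stateful handling of a `<think>` block split across chunks.

```python
import re

# Matches one <think>...</think> block; DOTALL lets "." span newlines,
# since reasoning text is usually multi-line. Non-greedy so only the
# first block is captured if several appear.
THINK_RE = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Hide the reasoning outright, or show it in an expandable
    section above the answer; the answer is the normal chat message.
    """
    match = THINK_RE.search(response)
    if match is None:
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = THINK_RE.sub("", response).strip()
    return reasoning, answer
```

For example, `split_reasoning("<think>Check the docs.</think>It is 42.")` returns `("Check the docs.", "It is 42.")`, and a response without tags passes through untouched.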
