We have been playing with GLM 4.7 on Cerebras, which I hope is the near future for every model; it generates thousands of lines before you've recovered from a sneeze. Whether you can see what it's doing is irrelevant: there's no way you can read it live (at thousands of tokens/s), and you're not going to read it afterwards. Trying to catch it before it does something weird is just silly; you won't be able to react. It works great for us combined with Claude Code: Claude does the senior work like planning and takes its time; GLM does the implementation in a few seconds.
That holds up for code generation (where tokens fly by), but not for tool use. The agent often stalls between tool calls, and those are exactly the moments I need to see what it's planning, not just stare at a blank screen.