Hacker News

jdw64 · today at 4:58 PM

https://www.alibabacloud.com/help/en/model-studio/context-ca...

I've also been testing models like Opus, Codex, and Qwen, and Qwen is strong in many coding tasks. However, my main concern is how it behaves in long-running sessions.

While Qwen advertises large context windows, in practice the effectiveness of long-context usage seems to depend heavily on its context caching behavior. According to the official documentation, Qwen provides both implicit and explicit context caching, but these come with constraints such as short TTL (around a few minutes), prefix-based matching, and minimum token thresholds.

Because of these constraints, especially in workflows like coding agents where context grows over time, cache reuse may not scale as well as expected. As a result, even though the per-token price looks low, the effective cost of long sessions can end up higher due to reduced cache hit rates and repeated prefill computation.
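To make the concern concrete, here is a toy simulation of prefix-based caching (the TTL, minimum-token threshold, and cache mechanics are made-up illustrative values, not the actual Qwen API): append-only context growth reuses the cached prefix well, but any edit earlier in the prompt (e.g. compaction or a corrected instruction) breaks the prefix match and forces full recomputation.

```python
import time

# Hypothetical parameters, loosely modeled on the documented constraints.
MIN_TOKENS = 256      # assumed minimum cacheable prefix length
TTL_SECONDS = 300     # assumed ~5-minute cache TTL

class PrefixCache:
    """Toy prefix cache: a request reuses the longest previously seen
    prompt that is an exact prefix, long enough, and not expired."""

    def __init__(self):
        self.entries = {}  # tuple of tokens -> last-seen timestamp

    def request(self, tokens):
        now = time.time()
        hit = 0
        # search for the longest still-alive cached prefix
        for plen in range(len(tokens), MIN_TOKENS - 1, -1):
            ts = self.entries.get(tuple(tokens[:plen]))
            if ts is not None and now - ts <= TTL_SECONDS:
                hit = plen
                break
        # store the full prompt so later turns can reuse it
        self.entries[tuple(tokens)] = now
        return hit, len(tokens) - hit  # (cached, recomputed) token counts

cache = PrefixCache()
context = list(range(400))               # initial prompt, 400 "tokens"
for turn in range(3):
    hit, miss = cache.request(context)
    print(f"turn {turn}: {hit} cached, {miss} recomputed")
    context = context + list(range(50))  # agent appends 50 tokens per turn

# Editing an early token invalidates every cached prefix.
context[0] = -1
hit, miss = cache.request(context)
print(f"after edit: {hit} cached, {miss} recomputed")
```

In this sketch, turns after the first recompute only the appended tokens, while the single early edit recomputes everything, which is the failure mode that makes long agent sessions more expensive than the per-token price suggests.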

That said, in certain areas such as security-related tasks, I’ve personally had cases where Qwen performed better than Opus.

In my personal experience, Qwen tends to perform much better than Opus on shorter units like individual methods or functions. But looking at the overall coding experience, I found Qwen works best as a function-level generator rather than as an autonomous, end-to-end coding assistant like Claude.


Replies

ezekiel68 · today at 7:00 PM

TBF, it's certainly best practice, advised by the model providers themselves, to cut sessions short and start new ones.

Anthropic's "Best Practices" doc[0] for Claude Code states, "A clean session with a better prompt almost always outperforms a long session with accumulated corrections."

[0] https://code.claude.com/docs/en/best-practices

hedora · today at 7:46 PM

Unless stuff changed since I last checked, context caching just reduces cost and latency. It does not change which tokens are emitted.