Does anyone here understand "interleaved scratchpads" mentioned at the very bottom of the footnotes:
> All evals were run with a 64K thinking budget, interleaved scratchpads, 200K context window, default effort (high), and default sampling settings (temperature, top_p).
I understand scratchpads in general (e.g. [0] Show Your Work: Scratchpads for Intermediate Computation with Language Models), but I'm not sure about the "interleaved" part; a quick Kagi search didn't turn up anything relevant other than Claude itself :)
Based on their past usage of "interleaved tool calling", it likely means tools can be called while the model is thinking, i.e. thinking blocks interleaved with tool calls rather than a single thinking block up front.
https://aws.amazon.com/blogs/opensource/using-strands-agents...
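If that reading is right, the API-level equivalent is Anthropic's interleaved-thinking beta combined with extended thinking and tool use. A rough sketch of what such a request looks like (the beta header name and model id below are my assumptions from the docs, not something stated in the eval footnote; check the current documentation before relying on them):

```python
# Sketch of a Messages API request with extended thinking + tools.
# With the interleaved-thinking beta enabled, the model may emit thinking
# blocks *between* tool calls, not just one scratchpad before the answer.
request = {
    "model": "claude-sonnet-4-20250514",  # assumed model id, for illustration
    "max_tokens": 16000,
    # Extended thinking with a token budget for the scratchpad.
    "thinking": {"type": "enabled", "budget_tokens": 8000},
    "tools": [{
        "name": "get_weather",  # hypothetical example tool
        "description": "Get current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
# Beta header that (as I understand it) switches on interleaving.
headers = {"anthropic-beta": "interleaved-thinking-2025-05-14"}
```

Without the beta header, thinking happens once before the first tool call; with it, the model can reason about each tool result before deciding on the next call.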