Here is the link to the blog post that actually describes what this is: https://github.com/google-research/timesfm?tab=readme-ov-fil...
I think you meant to link this page: https://research.google/blog/a-decoder-only-foundation-model...
That takes me to the same content as the submission: a GitHub repo (Chrome on iOS)
I wish they gave some numbers for the total GPU hours needed to train this model. It seems comparatively tiny next to LLMs, so I'm interested to know how close this is to something trainable by your average hobbyist/university/small lab.
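For a rough sense of scale, the standard compute ≈ 6 × params × tokens approximation gives a back-of-envelope estimate. Everything plugged in below is an assumption (round numbers in the ballpark often quoted for this kind of model, an A100's peak bf16 throughput, and a guessed utilization), not published figures:

```python
# Back-of-envelope training-compute estimate using the common
# FLOPs ~= 6 * params * tokens rule of thumb.
# PARAMS and TOKENS are ASSUMED round numbers, not official TimesFM figures.

PARAMS = 200e6        # assumed: ~200M-parameter model
TOKENS = 100e9        # assumed: ~100B pretraining timepoints
PEAK_FLOPS = 312e12   # A100 bf16 peak throughput, FLOP/s
MFU = 0.4             # assumed model-FLOPs utilization

train_flops = 6 * PARAMS * TOKENS
gpu_seconds = train_flops / (PEAK_FLOPS * MFU)
gpu_hours = gpu_seconds / 3600

print(f"~{train_flops:.1e} FLOPs, ~{gpu_hours:.0f} A100-hours")
```

Under those assumptions this lands in the hundreds of A100-hours, i.e. orders of magnitude below LLM pretraining budgets and plausibly within reach of a small lab; the real number depends entirely on the actual model size, corpus, and training recipe.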