Hacker News

Workaccount2, last Friday at 8:38 PM

Interestingly, while this model is based on a Google DeepMind AI weather model, it builds on the 2023 GraphCast model rather than the WeatherNext 2 model that has grabbed headlines as of late. I'd imagine it takes a while to integrate and test everything, which would explain the gap.


Replies

Majromax, yesterday at 12:39 AM

Google Research and Google DeepMind also build their models for Google's own TPU hardware. That's only natural for them, but weather centres can't buy TPUs, and they can't, or don't want to, be locked into Google's cloud offerings.
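For what it's worth, the model code itself is backend-agnostic in JAX; the lock-in is about performance tuning and procurement rather than the source. A toy check, nothing GraphCast-specific:

    import jax

    # JAX targets whatever accelerator is present: TPU on Google's
    # cloud, GPU or CPU elsewhere. The same code runs on all of them,
    # but kernels tuned for TPUs often underperform on GPUs.
    print(jax.default_backend())  # 'tpu', 'gpu', or 'cpu'
    print(jax.devices())          # the concrete devices JAX will use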

For GenCast ('WeatherNext Gen', I believe), the repository provides instructions and caveats (https://github.com/google-deepmind/graphcast/blob/main/docs/...) for inference on GPU, which is generally slower and more memory-intensive. I imagine FGN/WeatherNext 2 would hold similar surprises.
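To make the memory pressure concrete, here's a self-contained toy, not the repo's actual code: jitting one forecast step and looping in Python (roughly the idea behind the repo's chunked rollout) bounds peak device memory at about one step's working set, at the cost of dispatch overhead, whereas jitting the whole unroll trades memory for speed.

    import jax
    import jax.numpy as jnp

    @jax.jit
    def step(state):
        # Stand-in for one learned autoregressive forecast step.
        return 0.99 * state + 0.01 * jnp.roll(state, 1, axis=0)

    def chunked_rollout(state, num_steps):
        frames = []
        for _ in range(num_steps):
            state = step(state)
            # Offload each step to host RAM so device memory doesn't
            # grow with the length of the rollout.
            frames.append(jax.device_get(state))
        return jnp.stack(frames)

    print(chunked_rollout(jnp.ones((8, 8)), num_steps=4).shape)  # (4, 8, 8)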

Training is harder still. DeepMind has only open-sourced the inference code for its first two models, and writing a working, reasonably performant training loop is not trivial. NOAA hasn't retrained the weights from scratch, but the fine-tuning it did on GFS inputs still requires the full training apparatus.
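To be concrete about what "training apparatus" means beyond the released forward pass: a loss, gradients, an optimizer and its state, plus a data pipeline to feed it all. A generic JAX/optax fine-tuning step, emphatically not DeepMind's code; model_apply and the parameters here are made-up stand-ins:

    import jax
    import jax.numpy as jnp
    import optax

    # Stand-ins: a released checkpoint would supply real params, and
    # the model's forward function would replace this linear map.
    params = {"w": jnp.eye(4)}

    def model_apply(params, x):
        return x @ params["w"]

    def loss_fn(params, x, y):
        # Inference code only needs model_apply; fine-tuning also
        # needs a differentiable loss...
        return jnp.mean((model_apply(params, x) - y) ** 2)

    opt = optax.adamw(learning_rate=1e-5)
    opt_state = opt.init(params)

    @jax.jit
    def train_step(params, opt_state, x, y):
        # ...plus gradients, an optimizer, and its state, none of
        # which ship with an inference-only release.
        grads = jax.grad(loss_fn)(params, x, y)
        updates, opt_state = opt.update(grads, opt_state, params)
        return optax.apply_updates(params, updates), opt_state

    x, y = jnp.ones((2, 4)), jnp.zeros((2, 4))
    params, opt_state = train_step(params, opt_state, x, y)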

sigmar, last Friday at 9:10 PM

I've been assuming that, unlike GraphCast, they have no intention of making WeatherNext 2 open source.
