Hacker News

arcanus | today at 9:50 AM

Hopper had 60 TFLOPS of FP64, Blackwell has 45, and Rubin has 33.

It is pretty clear that Nvidia is sunsetting FP64 support, and it is selling a story that no serious computational scientist I know believes: that you can use low-precision operations to emulate higher precision.

See for example, https://www.theregister.com/2026/01/18/nvidia_fp64_emulation...

It seems the emulation approach is slower, introduces more error, and applies only to matrix operations, not FP64 vector ones.
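For context on what "emulating higher precision" means here: the underlying trick is real at the scalar level and long predates Nvidia. Error-free transformations let you carry the rounding error of each low-precision operation explicitly in a second low-precision value. A rough NumPy sketch of a compensated float32 dot product in the style of Ogita, Rump, and Oishi's Dot2 (this is a classic illustration of the technique, not Nvidia's actual scheme):

```python
import numpy as np

f32 = np.float32

def two_sum(a, b):
    """Knuth's error-free addition: returns (s, e) with s + e == a + b exactly."""
    s = f32(a + b)
    bp = f32(s - a)
    e = f32(f32(a - f32(s - bp)) + f32(b - bp))
    return s, e

def two_prod(a, b):
    """Error-free product: returns (p, e) with p + e == a * b exactly."""
    p = f32(a * b)
    # The exact product of two float32 values fits in a float64, so the
    # float32 rounding error can be recovered with one float64 subtraction
    # (on hardware this would be a single FMA instead).
    e = f32(np.float64(a) * np.float64(b) - np.float64(p))
    return p, e

def dot2(x, y):
    """Compensated float32 dot product.

    Every multiply and add runs in float32; the rounding errors are
    accumulated in a float32 correction term, recovering roughly twice
    the working precision.
    """
    s = f32(0.0)
    c = f32(0.0)
    for a, b in zip(x, y):
        p, pe = two_prod(f32(a), f32(b))
        s, se = two_sum(s, p)
        c = f32(c + f32(se + pe))
    return np.float64(s) + np.float64(c)
```

Note the cost: each multiply-add becomes several float32 operations, which is one obvious source of the slowdown the article describes, and schemes like this are naturally a fit for matrix-style accumulation rather than general FP64 vector code.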