Hacker News

vineethy · yesterday at 7:28 PM

I think it's important to note that there's nothing forbidding LPU-style determinism from being used in training. Groq just didn't make that choice.

Also, Tenstorrent could be a viable challenger in this space. It seems to me that their NoC and their chips could be mostly deterministic as long as you don't start adding in branches.


Replies

ossa-ma · yesterday at 7:47 PM

You're right, but my understanding is that Groq's LPU architecture makes it inference-only in practice.

Groq's chips only have 230 MB of SRAM each vs. 80 GB of HBM on an H100, and training is memory-hungry: you need to hold model weights + gradients + optimizer states + intermediate activations all at once.
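
A rough back-of-envelope sketch of why that gap matters, assuming mixed-precision training with Adam (the per-parameter byte counts are my assumption for illustration, not anything Groq has published):

    # Back-of-envelope: how many parameters fit on-chip, training vs. inference.
    # Assumes bf16 weights/gradients plus fp32 Adam state (master weights, m, v);
    # ignores activations, which only make the training side worse.

    BYTES_INFERENCE = 2                  # bf16 weights only
    BYTES_TRAINING = 2 + 2 + 4 + 4 + 4   # weights + grads + fp32 master/m/v = 16 B/param

    GROQ_SRAM = 230e6  # ~230 MB SRAM per LPU
    H100_HBM = 80e9    # 80 GB HBM per H100

    def max_params(memory_bytes: float, bytes_per_param: int) -> float:
        """Parameters that fit in a given memory budget."""
        return memory_bytes / bytes_per_param

    print(f"Groq LPU, inference: ~{max_params(GROQ_SRAM, BYTES_INFERENCE) / 1e6:.0f}M params")
    print(f"Groq LPU, training:  ~{max_params(GROQ_SRAM, BYTES_TRAINING) / 1e6:.0f}M params")
    print(f"H100, training:      ~{max_params(H100_HBM, BYTES_TRAINING) / 1e9:.1f}B params")

Even before activations, one LPU's SRAM caps you at roughly 14M trainable parameters under those assumptions, so any serious training run would have to shard state across an enormous number of chips.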

bionhoward · yesterday at 7:53 PM

Would SRAM make weight updates prohibitive vs DRAM?