Why isn't LLM training itself open-sourced? With all the compute in the world, something like Folding@home here would be killer
Data bandwidth limits distributed training under current architectures. Really interesting implications if we can make progress on that
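A back-of-envelope sketch of why bandwidth is the bottleneck: in plain data-parallel training, every worker must exchange a full copy of the gradients each step. All numbers below (model size, worker count, link speeds) are illustrative assumptions, not measurements.

```python
# Rough gradient-sync cost for data-parallel training with ring all-reduce.
# Assumed: 7B-parameter model, fp16 gradients, 64 data-parallel workers.

PARAMS = 7e9          # model parameters (assumed)
BYTES_PER_GRAD = 2    # fp16 gradient
WORKERS = 64          # data-parallel replicas (assumed)

# A ring all-reduce moves ~2 * (N-1)/N of the gradient buffer per worker per step.
grad_bytes = PARAMS * BYTES_PER_GRAD
per_worker_traffic = 2 * (WORKERS - 1) / WORKERS * grad_bytes

datacenter_gbps = 400  # InfiniBand/NVLink-class interconnect (assumed)
home_gbps = 0.1        # ~100 Mbit/s consumer uplink (assumed)

def sync_seconds(gbps: float) -> float:
    """Time to move one step's gradient traffic over a link of the given speed."""
    return per_worker_traffic * 8 / (gbps * 1e9)

print(f"per-step traffic per worker: {per_worker_traffic / 1e9:.1f} GB")
print(f"datacenter sync time: {sync_seconds(datacenter_gbps):.2f} s")
print(f"home-uplink sync time: {sync_seconds(home_gbps) / 60:.0f} min")
```

Under these assumptions a datacenter interconnect syncs in well under a second, while a home connection needs on the order of half an hour per optimizer step, which is why Folding@home-style volunteer compute doesn't straightforwardly apply.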