It depends on the purpose of the model. AFAIK LLMs aren't particularly capable of researching answers, relying more on having 'truth' baked into their weights, so if it takes 12 months to train up a crowd-trained LLM it'll be 12 months behind the times.
How serious a risk is poisoned weights?
Can we leverage the cryptobros into using LLM training as a proof of work?