Even if the LLM theoretically supported this, it's a big leap of faith to assume that all models on all their CPUs are always perfectly synced up, that there are never any silently slipstreamed fixes because someone figured out how to get the model to emit bad words or blueprints for a neutron bomb, etc.