Another solution - in addition or instead - is requiring LLM output to be labeled.
The biggest danger of LLMs is impersonating humans. They have clearly been carefully constructed to be socially appealing. Think about the motivation behind that: social appeal is almost completely unnecessary for LLM function, and its main application is to deceive and manipulate. Legal regulation of LLMs should ban impersonation of humans, including anthropomorphism (and so should HN's rules). Call an LLM 'software' and label its output as 'output'.
Imagine how many problems that rule would solve. Yes, it's not universally enforceable, but attach a big enough penalty and identifiable people and corporations will comply, and most everyone else will decide it's not worth the risk.