I believe that the right regulation makes a difference, but I honestly don't know what that looks like for AI. LLMs are so easy to build and use, and that trend is accelerating. The idea of regulating AI is quickly becoming like the idea of regulating hammers: they are ubiquitous general-purpose tools, and writing legislation specifically about hammers would be deeply problematic for, hopefully, obvious reasons. Honest question: what is practical AND effective here? Specifically, what problems can clearly be solved, and by what kinds of regulations?
The sanest version of regulation IMO is the (already passed) EU AI Act. It's less about controlling AI itself and more about controlling inputs/outputs: tell users when they're interacting with an AI, mark AI-generated content with a disclaimer, meet strict requirements before deploying AI in high-risk scenarios, etc. Along the lines of "we don't regulate hammers, but we regulate hitting people with a hammer".
https://artificialintelligenceact.eu/
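To make the transparency side concrete: here's a minimal sketch of the "tell users it's an AI / mark AI-generated content" idea for a simple text chatbot. The Act mandates the disclosure, not any particular implementation, so the function names and label format below are hypothetical.

```python
# Minimal sketch of AI-output disclosure (not legal advice; the EU AI Act
# does not prescribe a technical format). All names here are hypothetical.

def generate_reply(prompt: str) -> str:
    # Stand-in for a real model call (e.g., an LLM API).
    return f"(model output for: {prompt})"

def disclose_ai_content(text: str) -> str:
    """Prepend a human-readable AI disclosure to generated text."""
    return f"[AI-generated] {text}"

if __name__ == "__main__":
    # The user sees the disclosure alongside the content.
    print(disclose_ai_content(generate_reply("What does the EU AI Act require?")))
```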