Like you say, some people are interested in keeping the discussion for humans only. Although we can't really know whether any opinion expressed here is coming from a human or not, including this one.
As for "AI" companies, their only interest is increasing their valuation. Historically, most companies prioritize short-term profits, and during a bull market the incentives are skewed even further in that direction. So poisoning the well of training data is treated as a future problem for someone else to figure out, or not. In the meantime, carpe pecuniam.
> If LLMs torpedo human engineering, who is going to create the new infrastructure (tools, frameworks, programming languages, etc) that LLMs are making such good use of today?
LLMs, of course. :) I don't think the people building these tools have given these topics any serious thought. Whatever concerns they claim to have, regarding safety or otherwise, are merely performative.