> how are you gonna trust something that can casually make such obvious mistakes?
In many cases, a human can review the generated content and still save a huge amount of time. LLMs are incredibly good at generating contracts, random business emails, and pointless homework for students.
And humans are incredibly bad at skimming through long text to check for errors, so this is not a happy pairing.
As for the homework, there is obviously a huge category of it that is pointless. But it shouldn't be that way: the fundamental idea behind homework is sound, because the only way to properly learn something is by doing exercises and thinking it through yourself.