Using an LLM to handle a task for you seems a lot like letting a car move you. Cars will make you “fat and lazy” if you never move your body otherwise, but it’s fairly clear to see that this is avoidable.
The research always seems to get (intentionally?) misconstrued in headlines as LLMs being “bad for you,” as opposed to the more mundane reality: they steal opportunities to exercise and practice mental skills, if you let them.
Just so long as we don't get something that is to LLMs as car-centric urban design is to cars.
Someone suggests putting everything the average person needs within 15 minutes of their home, and soon after we get a conspiracy theory about 15-minute cities being Soviet-style control gates you'll need permission to leave.
LLMs are already capable of inventing their own conspiracy theories, and are already effective persuaders, so if we do get stuck, we're not getting un-stuck.
> Cars will make you “fat and lazy” if you never move your body otherwise, but it’s fairly clear to see that this is avoidable.
why would you choose to compare ai to cars? you seem to be defending ai, but to compare it to cars... cars have been a horrible development.
I like how people come up with some analogy (and all analogies are wrong by definition), then attack said analogy, and on that basis pass judgment on the original statement. But what if we use a different analogy: using an LLM is basically skipping the whole learning process, never learning how to read, how to write, or how to think. Then what?