I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas."
It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone who's trying to create interesting or novel ideas to use LLMs to generate them.
Mainstream ideas are often good. That's why they're mainstream. Being different for being different isn't a virtue.
That being said, I don't think LLMs are idea generators either. They're common-sense spitters, which many people desperately need.
I'm torn.
I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.
Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?
I guess I must feel it's slightly useful overall as I still do it.
I think it's just a confusing use of the term "generating." Think of the LLM as a thesaurus: you actually generate the real idea -- and formulate the problem -- while it's good at enumerating potential solutions that might inspire you.
"by design, the recommendations will be average"
This couldn't be more wrong. The simplest refutation is just to point out that there are temperature and top-k settings, which by design, generate tokens (and by extension, ideas) that are less probable given the inputs.
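To make the refutation concrete, here's a minimal sketch of how temperature and top-k reshape the sampling distribution over a toy logit vector. `sample_token` is a hypothetical helper for illustration, not any real library's API; real decoders apply the same idea per token over a full vocabulary.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None):
    """Sample an index from a list of logits.

    Higher temperature flattens the distribution, making less probable
    tokens more likely; top_k restricts sampling to the k highest logits.
    """
    # Scale logits by temperature before the softmax.
    scaled = [l / temperature for l in logits]
    if top_k is not None:
        # Zero out everything outside the k highest logits.
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [l if l >= cutoff else float("-inf") for l in scaled]
    # Numerically stable softmax.
    m = max(scaled)
    weights = [math.exp(l - m) for l in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(probs)), weights=probs)[0]

logits = [2.0, 1.0, 0.1, -1.0]
# With top_k=1 this is greedy decoding: always the most probable token.
# With a high temperature and no top_k, low-probability tokens get
# sampled far more often than their raw logits would suggest.
```

The point being: "average by design" only describes greedy decoding. The sampling knobs exist precisely to pull out lower-probability continuations.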
All LLM output is always dry as fuck quite frankly. At all levels from ideas and concepts through to the actual copy. And that’s dotted with pure excrement.
I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.
If I offend anyone I will not be apologising for it.
Yes, I didn't get this portion at all. I feel as though letting an LLM brainstorm ideas for you does more to externally frame your thoughts than letting it write/proofread for you. If you pick one idea out of the 10 presented by the LLM, you are still confining yourself to the intersection of what the LLM thinks is important and what you think is important, because you can never "generate" a thought that the LLM hasn't presented.
LLMs can sometimes come up with novel or non-obvious insights...or just regurgitate Google-like results.
Asking the LLM better questions will return results that are better than average, bland, and mainstream.
I have found one of the better use cases of LLMs to be as a rubber duck.
Explaining a design, a problem, etc., and trying to find solutions is extremely useful.
I can bring the novelty; what I often want from the LLM is a better understanding of the edge cases I may run into, and possible solutions.