I don't really buy this post. LLMs are still pretty weak at long contexts and asking them to find some patterns in data usually leads to very superficial results.
No one said you cannot run LLMs over the same task more than once. For my local tooling, I usually use the pattern "Do X with the previously accumulated results, add new results if any come up, otherwise reply with just Y", and then put that into a loop until the LLM signals it's done. Software-wise, you could make it keep going a couple of extra passes beyond that too, for extra assurance.
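Rough sketch of that loop, not my actual tooling: it assumes an OpenAI-style chat API, and the model name, `call_llm`, `extract_patterns`, the `NO_NEW_RESULTS` marker, and the prompt wording are all made up for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
DONE_MARKER = "NO_NEW_RESULTS"  # the "reply with just Y" sentinel

def call_llm(prompt: str) -> str:
    # Hypothetical single-turn call; swap in whatever model/client you use.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def extract_patterns(document: str, max_passes: int = 10) -> list[str]:
    accumulated: list[str] = []
    for _ in range(max_passes):  # hard cap = the "extra assurance" beyond the done signal
        prompt = (
            "Find patterns in the document below.\n"
            f"Patterns already found:\n{chr(10).join(accumulated) or '(none)'}\n\n"
            f"Document:\n{document}\n\n"
            "List only NEW patterns, one per line. "
            f"If there is nothing new to add, reply with exactly {DONE_MARKER}."
        )
        reply = call_llm(prompt)
        if reply == DONE_MARKER:
            break  # the LLM signalled it's done
        accumulated.extend(line for line in reply.splitlines() if line.strip())
    return accumulated
```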
For chat platforms in general you're right though: uploading or copy-pasting long documents and asking the LLM to find not one but multiple needles in a haystack tends to give you really poor results. You need a workflow/process like the one above to get accuracy on that sort of task.