Hacker News

Mordisquito · today at 9:40 AM · 1 reply

You can prompt LLMs to scan thousands of documents to generate text validating your hunches. In some cases those validated hunches may even be correct.


Replies

Eisenstein · today at 1:01 PM

It's easy to get an LLM to make any argument you like based on whatever data is available. If that data is bad, the arguments will be trivially bad too.