Ironically, the low-tech infringing proposal would lead to more reliable results grounded in the raw contents of the data, using less compute/power and without the confidently incorrect sycophancy we see from the LLMs.
Nah. It would just lead to more of classical search. Which is okay, as it always has been.
LLMs are not retrieval engines, and thinking of them as such misses most of their value. LLMs are understanding engines. Much like for humans, evaluating and incorporating knowledge is necessary to build understanding - but perfect recall is not.
Another, arguably equivalent way of framing it: the job of an LLM isn't to provide you with the facts; its main job is to understand what you mean. The "WIM" in "DWIM". Making it do that does require stupid amounts of data and tons of compute in training. Currently, there's no better way, and the only alternative systems with similar capabilities are... humans.
IOW, it's not even an apples-to-oranges comparison; it's apples to gourmet chef.
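To make that division of labor concrete, here's a minimal sketch under assumptions of my own: the LLM only interprets what you meant, and the grounding comes from a deliberately low-tech classical search over the raw data. Nothing here is a real API - llm_rewrite_query is a hypothetical stand-in for whatever model call you'd actually make (faked with naive keyword extraction so the sketch runs offline), and the corpus and scoring are toy examples.

```python
# Sketch: "LLM for understanding, classical search for facts".
# The retrieval side is plain token-overlap scoring over an in-memory corpus,
# so every answer is grounded in the raw document text, not generated.

from collections import Counter

DOCS = {
    "doc1": "Postgres full-text search supports ranking with ts_rank.",
    "doc2": "BM25 is the default ranking function in Lucene and Elasticsearch.",
    "doc3": "Large language models are trained on web-scale text corpora.",
}

def llm_rewrite_query(user_request: str) -> list[str]:
    """Hypothetical: ask an LLM to turn a vague request into search keywords.
    Faked here with naive stopword stripping so the example runs without a model."""
    stopwords = {"how", "do", "i", "the", "a", "an", "what", "is", "to", "in"}
    return [w.lower().strip("?.,") for w in user_request.split()
            if w.lower() not in stopwords]

def classical_search(keywords: list[str], top_k: int = 2) -> list[str]:
    """Score documents by raw keyword overlap - retrieval stays in the data's hands."""
    scores = Counter()
    for doc_id, text in DOCS.items():
        tokens = set(text.lower().strip(".").split())
        scores[doc_id] = sum(1 for kw in keywords if kw in tokens)
    return [doc_id for doc_id, score in scores.most_common(top_k) if score > 0]

if __name__ == "__main__":
    query = "How do I do ranking in Elasticsearch?"
    keywords = llm_rewrite_query(query)   # understanding: what did you mean?
    hits = classical_search(keywords)     # retrieval: what does the data actually say?
    for doc_id in hits:
        print(doc_id, "->", DOCS[doc_id])
```

The point of the split is that the part prone to confident confabulation never writes the answer; it only reformulates the question, and everything shown to the user is quoted from the corpus.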