Everyone was dumping on Google when OpenAI first launched ChatGPT, saying Google had played it too safe and fallen behind on cool new tech. Now everyone's upset that LLMs hallucinate and says companies shouldn't launch things until they're proven safe.
This is such a weird argument. People were dumping on Google because it had already been working on LLMs and machine learning products for years before OpenAI released ChatGPT, and given how much capital and talent Google has, it was fair to criticize them for falling behind on a technology they helped pioneer. That was before we knew the extent of the hallucination problem, when people still assumed it would soon be solved. Now we can clearly see that hallucinations are a real problem, especially when you're summarizing information on top of a search query, so yes, companies should at least be careful about what they release so issues like this one don't happen.