> LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high
I could go either way on the future of this, but if you accept the argument that we're still in the early days, this may not hold. They're notoriously bad at this so far.
We could still be in the PC DOS 3.x era of this timeline. Wait until we hit the Windows 3.1 or 95 equivalent. Personally, I have seen shocking improvements in the past three months with the latest models.
> We could still be in the PC DOS 3.x era of this timeline. Wait until we hit the Windows 3.1 or 95 equivalent. Personally, I have seen shocking improvements in the past three months with the latest models.
While we're speculating, here's my take: we're in the Windows 7 phase of AI.
IOW, everything from this point on might be better tech, but it's going to be worse in practice.
I would like to see the day when the context window is measured in gigabytes or tens of billions of tokens; not RAG or whatever, but actual context.
First impressions are everything. It's going to be hard to claw back goodwill without a complete branding change. But... where do you go from 'AI'???
Personally, I strongly doubt it. Since the nature of LLMs gives them no real grasp of semantic content or context, I believe they are inherently unsuited as a tool for this task. As far as I can tell, it's a limitation of the technology itself, not of the amount of computing power behind it.
Either way, being able to generate or compress huge amounts of text very quickly, with no understanding of its contents, is simply not the bottleneck in information transfer between human beings.