Is this really the frontier of LLM research? I guess we really aren't getting AGI any time soon, then. It makes me a little less worried about the future, honestly.
Edit: I never actually expected AGI from LLMs. That was snark. I just think it's notable that the fundamental gains in LLM performance seem to have dried up.
Is a random paper from Fujitsu Research claiming to be the frontier of anything?
LLMs are not on the road to AGI, but there are plenty of dangers associated with them nonetheless.
I'm not following this either. You'd think this would have been frontier work back in 2023.
That, and LLMs seem to be plateauing. Earlier this year, it seemed like the big companies were releasing noticeable improvements every other week. People would joke that a few weeks is “an eternity” in AI…so what time span are we looking at now?
just because it’s on arXiv doesn’t mean much
arXiv is essentially a blog in an academic format, popular among Asian and South Asian academic communities
currently you can launder reputation with it, just like “white papers” in the crypto world let people raise capital for a while
this ability will diminish as more people catch on
I'm starting to think that there will not be an 'AGI' moment; we will simply build smarter machines over time until we realize we already have 'AGI'. It would be like video calls: in the '90s everybody wanted them, now everybody hates them, lmao.
First, I don't think we will ever get to AGI. Not because we won't see huge advances still, but because AGI is an ambiguous, moving target that we won't reach consensus on.
But why does this paper affect your thinking on that? It's about budgeting and recognizing that different LLMs have different cost structures. It's not really an attempt to improve LLM performance in absolute terms.