Hacker News

T5Gemma 2: The next generation of encoder-decoder models

111 points | by milomg yesterday at 7:48 PM | 20 comments

Comments

minimaxir yesterday at 8:58 PM

> Note: we are not releasing any post-trained / IT checkpoints.

I get not trying to cannibalize Gemma, but that's weird. A 540M multimodal model that performs well on queries would be useful, and "just post-train it yourself" is not always an option.
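
For context, a minimal sketch of what "just post-train it yourself" might look like with Hugging Face transformers, assuming the pretrained checkpoint loads via AutoModelForSeq2SeqLM (the checkpoint id below is a placeholder, not a confirmed release name); in practice you would wrap this in Seq2SeqTrainer or a similar training harness:

```python
# Minimal post-training (SFT-style) sketch, assuming a seq2seq T5Gemma 2
# checkpoint that loads with AutoModelForSeq2SeqLM. The model id is a
# placeholder for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "google/t5gemma-2-540m"  # hypothetical id; substitute the real one

tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)
optim = torch.optim.AdamW(model.parameters(), lr=1e-5)

# A single instruction/response pair; a real run needs a full SFT dataset
# and many steps, but the basic loop is this simple.
prompt = "Summarize: The meeting covered Q3 revenue and hiring plans."
target = "Q3 revenue and hiring were discussed."

inputs = tok(prompt, return_tensors="pt")
labels = tok(target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # decoder is teacher-forced on labels
loss.backward()
optim.step()
optim.zero_grad()
```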

killerstorm today at 12:45 AM

They are comparing 1B Gemma to 1+1B T5Gemma 2. Obviously a model with twice as many parameters can do better. That says absolutely nothing about the benefits of the architecture.
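
If you want to sanity-check that framing, counting parameters directly is cheap; the sketch below assumes both checkpoints load through transformers, and the T5Gemma 2 id is hypothetical:

```python
# Count the parameters actually being compared on each side.
# Model ids are illustrative placeholders; substitute the real ones.
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

def n_params_billion(model) -> float:
    return sum(p.numel() for p in model.parameters()) / 1e9

decoder_only = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("google/t5gemma-2-1b-1b")  # hypothetical id

print(f"decoder-only:    {n_params_billion(decoder_only):.2f}B params")
print(f"encoder-decoder: {n_params_billion(encoder_decoder):.2f}B params")
```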

o1inventor today at 1:34 AM

> 128k context.

don't care. prove effective context length or gtfo.
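
For what it's worth, a crude needle-in-a-haystack probe is easy to run yourself; this is only a sketch (placeholder checkpoint id, naive filler text, and a pretrained rather than instruction-tuned model may not follow the question format), not a RULER-style evaluation of effective context length:

```python
# Crude needle-in-a-haystack probe: bury one fact deep inside filler text
# and check whether the model can still retrieve it. Placeholder checkpoint id.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "google/t5gemma-2-2b-2b"  # hypothetical id; substitute the real one
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

needle = "The secret number is 48151."
filler = "The sky was grey and the trains ran on time. " * 4000
haystack = filler[: len(filler) // 2] + needle + filler[len(filler) // 2 :]

prompt = haystack + "\n\nQuestion: What is the secret number? Answer:"
inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=131072)
print(f"prompt length: {inputs.input_ids.shape[1]} tokens")

out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))  # passes only if it contains 48151
```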

davedx yesterday at 9:06 PM

What is an encoder-decoder model? Is it some kind of LLM, or a subcomponent of an LLM?
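
Roughly: it's a complete model rather than a subcomponent. An encoder reads the whole input bidirectionally, and a separate decoder generates the output token by token while attending to the encoder's states; decoder-only models like Gemma skip the encoder. A minimal sketch of the API difference (checkpoint ids are illustrative stand-ins):

```python
# Contrast the two architectures via the transformers API.
# Checkpoint ids are illustrative; swap in whatever you have access to.
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,    # decoder-only (e.g. Gemma)
    AutoModelForSeq2SeqLM,   # encoder-decoder (e.g. T5, T5Gemma)
)

text = "Translate to French: Where is the station?"

# Decoder-only: a single stack; the prompt is simply continued token by token.
tok_d = AutoTokenizer.from_pretrained("google/gemma-3-1b-pt")
dec_only = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
out = dec_only.generate(**tok_d(text, return_tensors="pt"), max_new_tokens=20)
print(tok_d.decode(out[0], skip_special_tokens=True))

# Encoder-decoder: the encoder reads the whole input bidirectionally,
# then a separate decoder generates the output while cross-attending to it.
tok_s = AutoTokenizer.from_pretrained("google/flan-t5-small")
enc_dec = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
out = enc_dec.generate(**tok_s(text, return_tensors="pt"), max_new_tokens=20)
print(tok_s.decode(out[0], skip_special_tokens=True))
```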

DoctorOetker yesterday at 11:38 PM

What is the "X" in the pentagonal performance comparison, is it multilingual performance or something else?

potatoman22 yesterday at 10:54 PM

What's the use case of models like T5 compared to decoder-only models like Gemma? More traditional ML/NLP tasks?
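
One concrete example of the "traditional NLP" angle: reusing just the encoder as a feature extractor for classification, clustering, or retrieval, which decoder-only models don't offer in the same way. A minimal sketch, using flan-t5-small purely as a stand-in for any T5-family checkpoint:

```python
# Use only the encoder half of an encoder-decoder checkpoint to embed text.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
encoder = T5EncoderModel.from_pretrained("google/flan-t5-small")

batch = tok(["great movie", "terrible service"], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, d_model)

# Mean-pool over non-padding tokens to get one vector per sentence,
# which can feed a linear classifier, nearest-neighbour search, etc.
mask = batch.attention_mask.unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # torch.Size([2, 512]) for flan-t5-small
```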
