Hacker News

agumonkey · yesterday at 10:06 AM

There are videos about diffusion LLMs too; apparently they get rid of the linear, token-by-token generation. But I'm no ML engineer.


Replies

nephanth · yesterday at 12:52 PM

As someone who has worked on transformer-based diffusion models before (not for language, though), I can say one thing: they're hard.

Denoising diffusion models benefited a lot from the U-Net, which is a pretty simple network (compared to a transformer) and very well suited to the denoising task. Plus, diffusion on images is great to research because it's very easy to visualize, and therefore to wrap your head around.
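
For anyone curious, here is a minimal toy sketch of the kind of forward-noising / reverse-denoising step involved, with a zero-returning placeholder where the trained U-Net would normally sit; the schedule, shapes, and function names are illustrative assumptions, not code from any particular paper or library.

```python
# Toy DDPM-style sketch: one forward-noising step and one reverse-denoising step.
# fake_unet stands in for the trained U-Net; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t):
    """Forward process: add Gaussian noise to a clean image x0 at step t."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
    return xt, noise

def fake_unet(xt, t):
    """Placeholder for the trained U-Net that predicts the added noise.
    A real model conditions on t; here we just return zeros."""
    return np.zeros_like(xt)

def p_sample_step(xt, t):
    """One reverse step: estimate x_{t-1} from x_t using the predicted noise."""
    eps_hat = fake_unet(xt, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        mean += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

x0 = rng.standard_normal((32, 32))        # pretend 32x32 grayscale image
xt, _ = q_sample(x0, t=500)               # easy to visualize at any step t
x_prev = p_sample_step(xt, t=500)
```

Part of why images are nice here: you can render x0, xt, and x_prev at any step and immediately see whether the denoiser is doing anything sensible.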

Doing diffusion on text is a great idea, but my intuition is that it will prove more challenging, and it will probably take a while before we get something that works well.
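
To give a feel for one direction people take here (masked / absorbing-state discrete diffusion), below is a rough toy sketch where a fully masked sequence gets unmasked in parallel over a few steps instead of being generated left to right. The denoiser is a random stand-in and all names are my own assumptions, not any specific model's API.

```python
# Toy masked-diffusion sketch for text: start from an all-<mask> sequence and
# iteratively commit tokens in parallel. fake_denoiser stands in for a
# transformer that would predict a token for every masked position.
import random

random.seed(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "a"]
MASK = "<mask>"
SEQ_LEN = 6
STEPS = 3

def fake_denoiser(tokens):
    """Stand-in model: propose a token for each masked slot, keep the rest."""
    return [tok if tok != MASK else random.choice(VOCAB) for tok in tokens]

tokens = [MASK] * SEQ_LEN                 # "pure noise": everything masked
for step in range(STEPS):
    proposal = fake_denoiser(tokens)
    # Unmask a fraction of positions each step (here: simple contiguous chunks),
    # so later steps condition on what was already committed.
    keep = set(range(step * SEQ_LEN // STEPS, (step + 1) * SEQ_LEN // STEPS))
    tokens = [proposal[i] if i in keep or tokens[i] != MASK else MASK
              for i in range(SEQ_LEN)]
    print(step, tokens)
```

Even in this toy form you can see the awkward parts: text is discrete, so there is no smooth Gaussian corruption to reverse, and it is much harder to eyeball a half-denoised sequence than a half-denoised image.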
