
shivampkumar · today at 4:45 AM

The model needed about 15GB at peak during generation: the 4B model loads multiple sub-models (1.3B each for the shape and texture flows). 8GB won't be enough, but either 24GB or 32GB should be fine.


Replies

post-it · today at 5:07 AM

Thanks! Could it conceivably load the sub-models in series rather than in parallel? 8GB still won't be enough, but I wonder if those with 16GB could eke something out.
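A back-of-the-envelope sketch of the idea, in pure Python with entirely hypothetical numbers (the real split between always-resident weights and per-sub-model footprint isn't stated above): loading sub-models in parallel peaks at the *sum* of their footprints, while loading them in series peaks at only the *largest* one, on top of whatever must stay resident.

```python
def peak_gb(resident_gb: float, sub_models_gb: list[float], sequential: bool) -> float:
    """Rough peak-memory estimate in GB.

    Parallel loading holds every sub-model at once; sequential loading
    holds only one (the largest, in the worst case) at any given time.
    """
    if sequential:
        return resident_gb + max(sub_models_gb)
    return resident_gb + sum(sub_models_gb)

# Hypothetical split: ~5 GB always resident, plus two flow sub-models
# at ~5 GB each including activations (chosen so the parallel case
# roughly matches the ~15 GB peak mentioned above).
subs = [5.0, 5.0]
print(peak_gb(5.0, subs, sequential=False))  # parallel peak
print(peak_gb(5.0, subs, sequential=True))   # sequential peak
```

Under this (made-up) split, series loading would drop the peak from about 15 GB to about 10 GB, which is why a 16GB card might squeak by even though 8GB still can't.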