
vichle today at 8:30 AM

What type of hardware do I need to run a small model like this? I don't do Apple.


Replies

bodegajed today at 8:44 AM

1.5B models can run with CPU-only inference at around 12 tokens per second, if I remember correctly.
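
For a concrete picture, here's a minimal CPU-only sketch using llama-cpp-python with a GGUF quantization of a 1.5B model. The file name and parameter values are placeholders, not a specific release or recommended settings:

    # CPU-only inference sketch with llama-cpp-python, assuming a GGUF
    # quantization of the 1.5B model has already been downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="model-1.5b-q4_k_m.gguf",  # hypothetical local file
        n_ctx=2048,      # modest context to keep RAM use low
        n_threads=4,     # match your physical core count
        n_gpu_layers=0,  # force CPU-only inference
    )

    out = llm("Explain what a token is in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])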

jychang today at 8:33 AM

1.54GB model? You can run this on a Raspberry Pi.
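
As a rough sanity check that the quantized weights fit in a Pi's RAM: the 1.5x overhead factor below (context, buffers, runtime) is a guess, not a measured number, and the file name is a placeholder:

    # Back-of-the-envelope RAM check for running a quantized model on a
    # small board like a Raspberry Pi.
    import os
    import psutil

    model_path = "model-1.5b-q4_k_m.gguf"  # hypothetical local file
    weights_gb = os.path.getsize(model_path) / 1e9
    free_gb = psutil.virtual_memory().available / 1e9

    needed_gb = weights_gb * 1.5  # assumed overhead factor
    print(f"weights: {weights_gb:.2f} GB, free RAM: {free_gb:.2f} GB")
    print("should fit" if free_gb > needed_gb else "likely too tight")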
