Hacker News

2ndorderthought · today at 12:06 PM

I tried the Gemma models, I think the 2b and 4b. The 2b was not useful for me at all; a little too weak for my use cases.

The 4b was okay. It didn't get all of my small math questions right, and it didn't know about some of the libraries I use, but it was able to do some basic autocomplete-type stuff. For microscopic models I like Llama 3.2 3b more right now: it's a little faster and seems a little stronger for what I do. But everyone is different, and I don't think I'll use it anymore; this past month has been crazy for local model releases.


Replies

throwaw12 · today at 12:35 PM

Can you share your use cases for 2b and 4b models?

Curious how people are leveraging these models.
