Hacker News

Nvidia Nemotron 3 Family of Models

240 points by ewt-nv last Monday at 2:39 PM | 50 comments

Comments

thoughtpeddler today at 8:47 PM

Is it fair to view this release as Nvidia strategically flexing that they can compete with their own customers in the model layer -- that they can be as vertically integrated as, say, GDM?

wcallahan last Monday at 11:48 PM

I don’t do ‘evals’, but I do process billions of tokens every month, and I’ve found these small Nvidia models to be the best by far for their size currently.

As someone else mentioned, the GPT-OSS models are also quite good (though I haven't figured out how to make them great yet; I think they might age well like the Llama 3 models did and get better with time!).

But for a defined task, I’ve found task compliance, understanding, and tool call success rates to be some of the highest on these Nvidia models.

For example, I have a continuous job that evaluates whether the data for a startup company on aVenture.vc could have conflated two similar but unrelated companies for news articles, research details, investment rounds, etc., which is a token-hungry ETL task! I recently retested this workflow on the top 15 or so models today with <125b parameters, and the Nvidia models were among the best performing for this type of work, particularly around non-hallucination when given adequate grounding.

Also, re: cost - I run local inference on several machines that run continuously, in addition to routing through OpenRouter and the frontier providers, and was pleasantly surprised to find that if I'm otherwise a paying customer of OpenRouter, the free Nvidia variant there is quite generous with limits, too.
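
(In case it's useful: a minimal sketch of how I hit the free variant through OpenRouter's OpenAI-compatible endpoint. The model slug is from the OpenRouter link elsewhere in this thread; the env var name is just my own convention.)

    # Minimal sketch: calling the free Nemotron nano variant via
    # OpenRouter's OpenAI-compatible API. Slug taken from the
    # OpenRouter page linked downthread.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="nvidia/nemotron-3-nano-30b-a3b:free",
        messages=[{"role": "user", "content":
                   "Do these two company records describe the same startup? ..."}],
    )
    print(resp.choices[0].message.content)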

red2awn last Monday at 10:23 PM

Very interesting release:

* Hybrid MoE: 2-3x faster than pure MoE transformers

* 1M context length

* Trained on NVFP4

* Open Source! Pretraining, mid-training, SFT and RL dataset released (SFT HF link is 404...)

* Open model training recipe (coming soon)

Really appreciate Nvidia being the most open lab, but they really should make sure all the links/data are available on day 0.

Also interesting that the model is trained in NVFP4 but the inference weights are FP8.

sosodev yesterday at 10:30 PM

I love how detailed and transparent the data set statistics are on the huggingface pages. https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B...
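
(If you want to pull that model card and its stats programmatically, a quick sketch; the repo id is assumed from the truncated link above.)

    # Sketch: fetch the model card, whose body embeds the dataset
    # statistics tables. Repo id assumed from the (truncated) link above.
    from huggingface_hub import ModelCard

    card = ModelCard.load("nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B")
    print(card.data)         # structured metadata: license, datasets, ...
    print(card.text[:2000])  # markdown body, including the stats tables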

I've noticed that open models have made huge efficiency gains in the past several months. Some amount of that is explainable as architectural improvements but it seems quite obvious that a huge portion of the gains come from the heavy use of synthetic training data.

In this case roughly 33% of the training tokens are synthetically generated by a mix of other open-weight models. I wonder if this trend is sustainable or if it might lead to model collapse, as some have predicted. I suspect that the proliferation of synthetic data throughout open-weight models has led to a lot of the ChatGPT writing-style replication (many bullet points, em dashes, it's not X but actually Y, etc.).

dJLcnYfsE3 today at 10:06 AM

I would say it is weird that Nvidia competes with its own customers, but looking back at the "Founders Edition" cards, maybe it isn't that weird at all. The better question probably is: with every big corporation having its own LLM, what exactly is OpenAI's moat that would explain their valuation?

ofermend today at 1:20 AM

We just evaluated Nemotron-3 for Vectara's hallucination leaderboard.

It scores a 9.6% hallucination rate, similar to qwen3-next-80b-a3b-thinking (9.3%), but of course it is much smaller.

https://github.com/vectara/hallucination-leaderboard
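
(If anyone wants to sanity-check locally: the open HHEM judge model is on HF. A rough sketch, with predict() and the 0.5 cutoff taken from its model card; treat the details as assumptions.)

    # Rough sketch: score (source, summary) pairs with Vectara's open
    # HHEM judge, then compute a hallucination rate. predict() and the
    # 0.5 threshold follow the HHEM model card; adapt as needed.
    from transformers import AutoModelForSequenceClassification

    pairs = [
        ("The company raised $10M in 2021.",
         "The startup closed a $10M round in 2021."),
        ("The company raised $10M in 2021.",
         "The startup raised $50M last year."),
    ]
    hhem = AutoModelForSequenceClassification.from_pretrained(
        "vectara/hallucination_evaluation_model", trust_remote_code=True
    )
    scores = hhem.predict(pairs)  # factual-consistency scores in [0, 1]
    rate = sum(float(s) < 0.5 for s in scores) / len(scores)
    print(f"hallucination rate: {rate:.1%}")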

pants2 last Monday at 5:56 PM

If it's intelligence + speed you want, nothing comes close to GPT-OSS-120B on Cerebras or Groq.

However, this looks like it has great potential for cost-effectiveness. As of today it's free to use over the API on OpenRouter, so it's a bit unclear what it'll cost when it's not free, but free is free!

https://openrouter.ai/nvidia/nemotron-3-nano-30b-a3b:free

max002 yesterday at 8:55 AM

I'm upvoting; I'm happy to finally see an open-source model with commercial use allowed from Nvidia, as most of the models I've been checking from you guys couldn't be used in commercial settings. Bravo Nvidia!

omneity today at 5:35 PM

Nemotron now works on LM Studio if you update the runtime (from the settings > Runtime screen).

The default chat template is incorrect, though, and will fail, but I published a corrected one you can replace it with: https://gist.github.com/omarkamali/a594b6cb07347f501babed489...
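
(If you want to verify a template outside LM Studio, transformers lets you swap it in directly; a sketch, with the file name as a placeholder and the repo id assumed.)

    # Sketch: override a tokenizer's chat template and inspect the
    # rendered prompt. "fixed_template.jinja" is a placeholder for the
    # corrected template from the gist; repo id assumed.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B")
    tok.chat_template = open("fixed_template.jinja").read()

    messages = [{"role": "user", "content": "Hello!"}]
    prompt = tok.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(prompt)  # check that special tokens land where the model expects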

radarsat1 today at 9:52 AM

I find it really interesting that it uses a Mamba hybrid with Transformers. Is it the only significant model right now using (at least partially) SSM layers? This must contribute to lower VRAM requirements, right? Does it impact how KV caching works?
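
Back-of-envelope, I'd expect the VRAM win to be real: a KV cache only exists for the attention layers, while each Mamba layer carries a constant-size recurrent state regardless of context length. A rough sketch with made-up layer counts and dimensions (not Nemotron's actual config):

    # Illustrative only: compare cache memory for a pure-attention stack
    # vs. a hybrid where most layers are SSM. All sizes are made up.
    def attn_kv_bytes(layers, seq_len, kv_heads=8, head_dim=128, bytes_per=2):
        return layers * 2 * seq_len * kv_heads * head_dim * bytes_per  # 2x: K and V

    def ssm_state_bytes(layers, d_model=4096, state_dim=128, bytes_per=2):
        return layers * d_model * state_dim * bytes_per  # constant in seq_len

    for seq in (8_000, 128_000, 1_000_000):
        full = attn_kv_bytes(48, seq)
        hybrid = attn_kv_bytes(6, seq) + ssm_state_bytes(42)
        print(f"{seq:>9} tokens: all-attention {full / 2**30:6.1f} GiB, "
              f"hybrid {hybrid / 2**30:6.1f} GiB")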

kristopolous yesterday at 10:40 PM

I was just using the embeddings model last night. Boy is it slow. Nice results but this 5090 isn't cutting it.

I'm guessing there's some sophistication in the instrumentation I'm just not up to date with.

DoctorOetker today at 1:18 AM

Can it understand input in, and generate output for, different languages' tokens? Does it know narrow IPA transcription of sentences in arbitrary languages?

kristianp yesterday at 9:19 PM

The article seems to focus on the nano model. Where are the details of the larger ones?

jtbayly yesterday at 9:33 PM

Any chance of running this nano model on my Mac?

jonrosner today at 10:06 AM

After testing it for a little while, I am pretty disappointed. While I do get 90 tokens per second out of it on my M4 Pro, which is more than enough for a real-world use case, the quality is just not there. I gave it a codebase to analyze and some questions to answer, and it started hallucinating right away. No replacement for a "real" coding agent; maybe for other agentic work like sorting emails, though.

sosodev yesterday at 10:41 PM

The claim that a small, fast, and decently accurate model makes a good foundation for agentic workloads seems reasonable.

However, is cost the biggest limiting factor for agent adoption at this point? I would suspect that the much harder part is just creating an agent that yields meaningful results.

Tepix today at 10:55 AM

Is it just me, or is Nvidia trolling hard by calling a model with 30b parameters "nano"? With a bit of context, it doesn't even fit on an RTX 5090.

Other LLMs with the "nano" moniker are around 1b parameters or less.
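
Back-of-envelope on the fit (rough numbers; weights only, ignoring activations and cache):

    # Weight memory for ~30B params at common precisions vs. the
    # 5090's 32 GB. KV cache and activations come on top, which is
    # presumably why it doesn't fit with context at FP8.
    PARAMS = 30e9
    for name, bytes_per in [("BF16", 2), ("FP8", 1), ("NVFP4", 0.5)]:
        print(f"{name:>5}: ~{PARAMS * bytes_per / 1e9:5.1f} GB weights (5090: 32 GB)")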

Y_Y last Monday at 5:39 PM

Wow, Nvidia keeps on pushing the frontier of misleading benchmarks.
