Hacker News

Kimi K1.5: Scaling Reinforcement Learning with LLMs

198 points | by noch | 01/21/2025 | 29 comments

Comments

NitpickLawyer | 01/21/2025

Really unfortunate timing, with DeepSeek-R1 and the distills coming out at basically the same time. It's hard for people to pay attention to this, and open source beats an API, even if the results are a bit lower.

zurfer | 01/21/2025

Is it fair to say that 2 of the 3 leading models are from Chinese labs? It's really incredible how fast China has caught up.

asah | 01/21/2025

The set of math/logic problems behind AIME 2024 appears to be... https://artofproblemsolving.com/wiki/index.php/2024_AIME_I_P...

Impressive stuff! But it's unclear to me whether it's literally just these 15 problems or whether there's a larger problem set...

joaohkfaria | 01/21/2025

But wait, which LLMs were used to train Kimi? That wasn't clear in the report.

cuuupid | 01/21/2025

I really, really dislike when companies use GitHub to promote their product by posting a "research paper" and a code sample.

It's not even an SDK, library, etc.; it's just advertising.

I've noticed a number of China-based labs do this; they will often post a really cool demo, some images, and then either an API or just nothing except advertising for their company (e.g. model may not even exist). Often they will also promise in some GitHub issue that they will release the weights, and never do.

I'd love to see some sort of study here. I wonder what % of "omg really cool AI model!!!" hype papers [1] never provide an API, [2] cannot be reproduced at all, and/or [3] promise but never provide weights. If this were any other field, academics would be up in arms about likely fraud, false advertising, etc.
