Hacker News

Improving Composer through real-time RL

77 points | by ingve | last Thursday at 4:48 PM | 19 comments

Comments

hmartin | today at 3:01 AM

Step 1: take an open source model with zero acknowledgement.

Step 2: build on someone else's infrastructure innovations with zero acknowledgement.

Step 3: Write a blog post with "unprecedented" and "100x" and "trillions" in the first paragraph.

Seriously, this seems like cool work and I enjoyed the post. But my basic trust in them has completely tanked.

CitrusFruits | today at 12:05 AM

I've been wondering how they've been able to be so generous with Composer usage with it still making business sense. Seems like this is the answer: presumably they think they'll have a competitive advantage in not just the UX space but the model space as well soon. It's a great strategy, but I do wonder if the moat will be big enough with how fast things are moving and how competitive the model landscape is.

pillsburycat | today at 3:39 AM

Important disclaimer for anyone using Cursor: make sure to disable "data sharing" in your account settings, as it is enabled by default and old accounts are automatically opted into it.

crazylogger | today at 2:30 AM

This feels so wrong. The LLM should play the role of a very general (but empty & un-opinionated) brain - you don't want to perform a coding-specific lobotomy on someone every day. The proper target of their RL should have been their harness. That's what determines the agent's trajectory as much as the base model does.

I also wonder: since they're doing constant RL on model weights with today's Cursor design, does that mean they can never change their system prompt & other parts of the harness?

1) Comparisons between past trajectory data would be meaningless if the trajectories were gathered under different instructions.

2) Performance will be terrible the next time they change their tool design, since the model is now "opinionated" based on how a previous version of Cursor was designed.

Anthropic is more sensible with their “constitution” approach to safety. The behaviors (and ultimately the values) you want your model to follow should be a document, not a lobotomy.

kgeist | today at 1:09 AM

>We used a Kimi base, with midtraining and RL on top. Going forward, we'll include the base used in our blog posts, that was a miss. Also, the license is through Fireworks. [0]

And still no mention of Kimi in a new blog post :)

Also, apparently the inference provider they use, Fireworks AI, already has a built-in API for RL-tuning Kimi [1], so I wonder which parts are Cursor's own effort and where Fireworks AI actually deserves credit, especially since they repeatedly brag about being able to create a new checkpoint every 5 hours, which would be largely thanks to Fireworks AI's API/training infrastructure.

I mean, I'm genuinely curious how much effort it would actually take me to go from "here, lots of user data" to "the model gains +1% on benchmarks" to produce my own finetune, assuming I already use a good existing foundational model, my inference provider already handles all the tuning infrastructure/logic, and I already have a lot of usage logs.

[0] https://news.ycombinator.com/item?id=47459529

[1] https://fireworks.ai/blog/kimi-k2p5

vicchenai | today at 4:08 AM

the rl loop here is clever but i wonder how the reward signal degrades over time. if you're optimizing for user acceptance of suggestions, you're inevitably training on a mix of "this was actually correct" and "i accepted because editing the suggestion was more work than accepting it." that second case creates a subtle bias toward suggestions that are close-enough-to-not-bother-fixing rather than actually correct.

also curious whether they see different convergence patterns across languages. my gut says something like python, where there's more stylistic variation, would be harder to get a clean reward signal from than something like rust, where there are fewer idiomatic ways to do things.
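The accept-but-edit bias above can be made concrete. Below is a toy sketch, not Cursor's actual reward: the function `acceptance_reward` and its inputs are hypothetical, and it just discounts an accepted suggestion by how much the user edited it afterwards, so "accepted and kept verbatim" scores higher than "accepted because fixing it by hand was less work".

```python
import difflib

def acceptance_reward(suggestion: str, final_code: str, accepted: bool) -> float:
    """Toy reward: discount accepted suggestions the user then edited.

    A plain accept/reject signal treats "accepted and kept verbatim" the
    same as "accepted, then fixed by hand". Scaling the reward by
    post-acceptance similarity is one crude way to separate the two.
    """
    if not accepted:
        return 0.0
    # Similarity in [0, 1]: 1.0 means the user kept the suggestion untouched.
    return difflib.SequenceMatcher(None, suggestion, final_code).ratio()

# A verbatim accept earns full reward; a heavily edited accept earns much less.
print(acceptance_reward("return a + b", "return a + b", True))   # 1.0
print(acceptance_reward("return a + b", "yield a - b", True) < 1.0)
```

The edit-distance proxy is crude (whitespace-only edits get penalized too), but it illustrates why a raw acceptance rate is a noisy optimization target.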

janalsncm | today at 1:47 AM

Back in my day we called this real time training from implicit user feedback.

The engineering challenge here is an order of magnitude bigger though. An LLM is orders of magnitude bigger than a recommender system model. Kudos.

htrp | today at 3:06 AM

If the model "improves" every 5 hours, how do you have any guarantee of model consistency across long coding sessions?

fzysingularity | today at 1:27 AM

Real-time or continuous learning is great on paper, but getting it to work without extremely expensive regression testing and without catastrophic forgetting is a real challenge.

Credit to the team for taking this on, but I’d be skeptical of announcements like this without at least 3–6 months of proven production deployments. Definitely curious how this plays out.
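One cheap guardrail against the forgetting problem is a promotion gate: only ship a freshly trained checkpoint if it doesn't regress past a tolerance on any held-out eval suite. A minimal sketch under assumed names (`should_promote` and the score dicts are hypothetical, not anything Cursor has described):

```python
def should_promote(new_scores: dict[str, float],
                   baseline_scores: dict[str, float],
                   max_regression: float = 0.01) -> bool:
    """Gate a new checkpoint behind a per-task regression check.

    Continuous RL can silently trade capability on one task for gains on
    another (catastrophic forgetting). Refusing to ship any checkpoint
    that drops more than `max_regression` on any held-out suite is the
    simplest guardrail.
    """
    for task, baseline in baseline_scores.items():
        if new_scores.get(task, 0.0) < baseline - max_regression:
            return False  # regressed on this task: keep the old checkpoint
    return True

# Gain on one suite, a dip within tolerance on another: still promotable.
print(should_promote({"python_eval": 0.72, "rust_eval": 0.695},
                     {"python_eval": 0.70, "rust_eval": 0.70}))  # True
```

With a new checkpoint every 5 hours, even this simple gate implies running the full eval suite roughly five times a day, which is where the "extremely expensive regression testing" cost comes from.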

amazingamazing | today at 1:57 AM

seems expensive. distillation is inherently impossible to defend against. sit back and let your competitors do the hard work. they'll whine and say it's illegal, but they shouldn't complain; they'll reap what they sowed.

polishdude20 | today at 12:02 AM

I'd love to see some data on how much it has improved via this process over the last week.
