Hacker News

f1shy · today at 12:52 PM

I agree with you, but from the article: "The amount of training data doesn’t matter as much as we thought. Functional paradigms transfer well."

Anyway, I tend to think you are right and the article is wrong in that sentence. (Or did I misinterpret something?)

I think both the quantity and quality of the training data have a big influence on the results.


Replies

kd0amg · today at 1:41 PM

I took that to mean ≈ "Amount of training data isn't the big factor dwarfing all else." Depends on who "we" refers to, I guess. Back when LLM-generated code was new, I definitely saw predictions that LLMs would struggle with niche or rarely used languages. These days, the consensus among colleagues within earshot is that LLMs handle Rust much better than Python or C++ (corpus size and AutoCodeBench scores notwithstanding).