Hacker News

floppiplopp · 01/21/2025 · 1 reply

I'm at this very moment testing deepseek-r1, a so-called "reasoning" llm, on the excellent "rustlings" tutorial. It is well documented and its solutions are readily available online. It's my lazy go-to test for coding tasks, to assess if and when I have to start looking for a new job and take up software engineering as a hobby. The other reason I test with rustlings is to assess the model's value as a learning tool for students and future colleagues. Maybe these things have some use as a teacher? Also, the rust compiler is really good at offering advice, so there's an excellent baseline to compare the llm output against.
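
To give a flavor of that baseline (this isn't an actual rustlings exercise, just an illustrative sketch): pass a String by value, use it again afterwards, and rustc flags error[E0382] "borrow of moved value" and tells you to borrow instead. The snippet below already has that suggestion applied:

    // Illustrative only, not an actual rustlings exercise. Passing `s` by value
    // and then using it again would trigger error[E0382] ("borrow of moved
    // value: `s`"); rustc's diagnostic suggests borrowing instead, which is
    // what this version does.
    fn calculate_length(s: &str) -> usize {
        s.len()
    }

    fn main() {
        let s = String::from("hello");
        let len = calculate_length(&s); // borrow, as rustc advises, so `s` stays usable
        println!("{} has length {}", s, len);
    }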

And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of text, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs, and especially beginners, will be confused and led down the wrong path, potentially outsourcing rational thought to something that just sounds good but actually isn't.

Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately to the performance of future devs. But for what is probably the last generation of old-school software engineers, trained on coffee and tears of frustration, who had to work out code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by llms.


Replies

diggan · 01/21/2025

> And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of text, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs, and especially beginners, will be confused and led down the wrong path, potentially outsourcing rational thought to something that just sounds good but actually isn't.

Are you considering the full "reasoning" it does when you say this? AFAIK, they're meant to "ramble" like that, exploring all sorts of avenues and paths before reaching a final answer that is itself still somewhat ramble-like. The purpose seems to be to layer something on top that can finalize the answer, rather than taking whatever you get from the model and using it as-is.
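
For example, assuming the chain-of-thought is wrapped in <think>...</think> tags (which is, as far as I can tell, how r1 delimits it — treat that as an assumption), the layer on top could be as simple as keeping only what comes after the reasoning block. A rough sketch in rust, since the thread is about rustlings anyway:

    // Rough sketch, assuming the model wraps its chain-of-thought in
    // <think>...</think> tags; everything after the closing tag is treated as
    // the final answer, and output without such a block is passed through.
    fn final_answer(raw: &str) -> &str {
        match raw.rfind("</think>") {
            Some(idx) => raw[(idx + "</think>".len())..].trim_start(),
            None => raw,
        }
    }

    fn main() {
        let raw = "<think>exploring avenues and paths...</think>\nUse `&s` instead of `s` here.";
        println!("{}", final_answer(raw));
    }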

> Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately to the performance of future devs. But for what is probably the last generation of old-school software engineers, trained on coffee and tears of frustration, who had to work out code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by llms.

I started coding just before Stack Overflow got popular, and I remember the craze when it did. Blog posts about how Stack Overflow would create lazy devs were all over the place, with people saying it was the end of the real developer. Not arguing against you or anything, I just find it interesting how sentiments like these keep repeating over time, with only minor details changing.