
codebolt today at 7:35 AM

> People aren't prompting LLMs to write good, maintainable code though.

Then they're not using the tools correctly. LLMs are capable of producing good clean code, but they need to be carefully instructed as to how.

I recently used Gemini to build my first Android app, and I have zero experience with Kotlin or most of the libraries (though I've done many years of enterprise Java in my career). When I started, I first had a long discussion with the AI about how we should set up dependency injection, Material3 UI components, model-view architecture, Firebase, logging, etc., and made a big Markdown file with a detailed architecture description. Then I let agent mode implement the plan over several steps, with a lot of tweaking along the way. I've been quite happy with the result: the app works like a charm, and the code is neatly structured and easy to jump into whenever I need to make changes. Finishing a project like this in a couple dozen hours (especially as a complete newbie to the stack) simply would not have been possible 2-3 years ago.
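To give a flavour of what that architecture discussion pinned down, here's the kind of skeleton it converges on: a Hilt-injected ViewModel exposing state as a StateFlow, observed by a Material3 screen. The names below (Note, NotesRepository, NotesViewModel) are simplified illustrations, not code from my actual app:

```kotlin
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.hilt.navigation.compose.hiltViewModel
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import dagger.hilt.android.lifecycle.HiltViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.launch
import javax.inject.Inject

// Illustrative domain types; a real app would bind NotesRepository
// to a Firebase-backed implementation in a Hilt module (omitted here,
// along with the @HiltAndroidApp / @AndroidEntryPoint boilerplate).
data class Note(val id: String, val text: String)

interface NotesRepository {
    suspend fun loadNotes(): List<Note>
}

// The ViewModel never constructs its own dependencies; Hilt injects
// them, which makes swapping Firebase for a fake in tests trivial.
@HiltViewModel
class NotesViewModel @Inject constructor(
    private val repository: NotesRepository,
) : ViewModel() {
    private val _notes = MutableStateFlow<List<Note>>(emptyList())
    val notes: StateFlow<List<Note>> = _notes.asStateFlow()

    fun refresh() {
        viewModelScope.launch { _notes.value = repository.loadNotes() }
    }
}

// Material3 UI layer: observes the ViewModel's state and renders it.
@Composable
fun NotesScreen(viewModel: NotesViewModel = hiltViewModel()) {
    val notes by viewModel.notes.collectAsState()
    LazyColumn {
        items(notes) { note -> Text(text = note.text) }
    }
}
```

The value of agreeing on a skeleton like this up front is that the agent then has one pattern to repeat for every feature, instead of improvising a new structure each time.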


Replies

ben_w today at 11:12 AM

> Then they're not using the tools correctly. LLMs are capable of producing good clean code, but they need to be carefully instructed as to how.

I'd argue that when the code is part of a press release or corporate blog post (is there even a difference?) from the company the LLM in question comes from, e.g. Claude's C compiler, one cannot reasonably assert they were "not using the tools correctly". Even if there is some better way to use the tools, if the LLM's own team doesn't know it, the reasonable assumption is that nobody else can be expected to know it either.

I find it interesting and useful to know that the boundary of the possible is a ~100kloc project, and that even then this scale of output comes with plenty of flaws.

Know what the AI can't do, rather than what it can. Even beyond LLMs, people don't generally (there are exceptions) get paid for manually performing tasks that have already been fully automated; people get paid for what automation can't do.

Moving target, of course. This time last year, my attempt to get an AI to write a compiler for a joke language didn't even produce a compiler whose own source code compiled; now it not only compiles, it runs. But it's still a joke language; no sane person would ever use it for a serious project.

onion2k today at 7:38 AM

> When I started I first had a long discussion with the AI... and made a big Markdown file with a detailed architecture description.

Yep, that's how you get better output from AI. A lot of devs haven't learned that yet. They still see it as 'better autocomplete'.

sirwitti today at 7:47 AM

Not trying to be rude, but in a technology you're not familiar with, you might not be able to tell whether code is good, and even less whether it's maintainable.

Maintainability is what lets you find and fix that subtle, hard-to-reproduce bug that could kill your business three years from now.
