Hacker News

Ask HN: How can I get better at using AI for programming?

292 points by lemonlime227 yesterday at 3:37 PM | 316 comments

I've been working on a personal project recently, rewriting an old jQuery + Django project into SvelteKit. The main work is translating the UI templates into idiomatic SvelteKit while maintaining the original styling. This includes things like using semantic HTML instead of div-spamming (not wrapping divs in divs in divs) and replacing Bootstrap with minimal Tailwind. It also includes some logic refactors that keep the original functionality but rewrite away years of code debt: things like replacing templates that use boolean flags for multiple views with composable Svelte components.

I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.
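
For reference, each `+page.server.ts` ends up with roughly this shape (a minimal sketch; the route and endpoint names are made up for illustration):

  // src/routes/projects/+page.server.ts (illustrative route)
  import type { PageServerLoad } from './$types';

  export const load: PageServerLoad = async ({ fetch }) => {
    // Hypothetical endpoint standing in for whatever the old Django view queried
    const res = await fetch('/api/projects');
    return { projects: await res.json() };
  };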

This kind of work seems like a great use case for AI-assisted programming, but I've failed to use it effectively. At most, I can only get Claude Code to recreate some slightly less spaghetti code in Svelte. Simple prompting just isn't able to get the AI's code quality within 90% of what I'd write by hand. Ideally, AI could get its code to something I could review manually in 15-20 minutes, which would massively speed up this project (right now it takes me 1-2 hours to properly translate a route).

Do you guys have tips or suggestions on how to improve my efficiency and code quality with AI?


Comments

michelsedgh yesterday at 8:02 PM

I think you shouldn't overthink it; the more you use it, the better you will understand how it can help you. The biggest gains will come from the models making leaps and from you keeping up to date on which one is best for your use case.

Galorious yesterday at 8:12 PM

Did you use the /init command in Claude Code at the start?

That builds the main CLAUDE.md file. If you don't have that file, CC starts each new session completely oblivious to your project, like a blank slate.
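
You can then extend it by hand with project-specific rules. A sketch of what that might contain for a rewrite like yours (the rules below are illustrative, taken from your own description rather than anything /init would generate):

  # CLAUDE.md (illustrative additions)
  This project ports Django templates to idiomatic SvelteKit, route by route.

  Rules:
  - Use semantic HTML; avoid nested wrapper divs.
  - Use minimal Tailwind utilities; no Bootstrap classes may remain.
  - Replace boolean view flags with separate composable Svelte components.
  - Every new component gets a matching Storybook story.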

noiv yesterday at 6:29 PM

I learned the hard way that when Claude has two conflicting instructions in CLAUDE.md, it tends to ignore both. So precise language is key; don't use terms like 'object', which mean different things in different fields.

helterskelter yesterday at 5:36 PM

I like to follow up with "Does this make sense?" or similar. This gets it to restate the problem in its own words, which not only shows you its understanding of the problem, it also seems to help reinforce the prompt.

owlninja yesterday at 5:50 PM

Would love to hear any feedback on using Google's Antigravity from a clean slate. Holiday shutdown is about to start at my job and I want to tinker with something I haven't even started yet.

cardanome yesterday at 8:20 PM

Honestly if your boss does not force you to use AI, don't.

Don't feel like you might get "left behind". LLM-assisted development is still changing rapidly. What was best practice 6 months ago is irrelevant today. By being an early adopter you will just learn workarounds that might soon be obsolete.

On the other hand, coding "by hand" will keep your skills sharp. You will protect yourself against the negative mental effects of using LLMs: skill decline, general decline of mental capacity, the danger of developing psychosis because of the sycophantic nature of LLMs, and so on.

LLM-based coding tools are only getting easier to use, and if you actually know how to code and know software architecture, you will be able to easily integrate LLM-based workflows and deliver far superior results compared to someone who spent their years vibe coding, even if you picked up Claude Code or whatever just a month ago. No need for FOMO.

BobbyTables2 yesterday at 11:26 PM

Don’t!

salutonmundo today at 3:35 AM

walk into the woods and never touch a computer again

KronisLV today at 12:13 AM

Automated code checks. Either custom linter rules (like ESLint) or prebuild scripts to enforce whatever architectural or style rules you want: basically all the stuff you'd normally flag in code review that could be codified into an automatic check but hasn't been, because developers didn't find it worth their time or lacked the time or skill to do it. Use the AI to write as many of these as needed, invoked like:

  node prebuild/prebuild.cjs
which will then run all the other checks you've defined like:

  prebuild/ensure-router-routes-reference-views-not-regular-components.cjs
  prebuild/ensure-custom-components-used-instead-of-plain-html.cjs
  prebuild/ensure-branded-colors-used-instead-of-tailwind-ones.cjs
  prebuild/ensure-eslint-disable-rules-have-explanations.cjs
  prebuild/ensure-no-unused-translation-strings-present.cjs
  prebuild/ensure-pinia-stores-use-setup-store-format.cjs
  prebuild/ensure-resources-only-called-in-pinia-stores.cjs
  prebuild/ensure-api-client-only-imported-in-resource-files.cjs
  prebuild/ensure-component-import-name-matches-filename.cjs
  prebuild/disallow-deep-component-nesting.cjs
  prebuild/disallow-long-source-files.cjs
  prebuild/disallow-todo-comments-without-jira-issue.cjs
  ...
and so on. You might accumulate tens of these over years of working on a project, and you can write them for most things you'd conceivably want in "good code". The examples above are closer to a Vue codebase, but the same principles apply to most other types of projects out there - many of them would already be served by something like ESLint (you probably want the recommended preset of whatever ruleset exists for your stack), and some you'll definitely want to write yourself. This is useful regardless of whether you use AI at all, so that by the time code reaches the person doing the review, all of those checks hopefully already pass.
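
As a concrete illustration, one of these checks can be a few dozen lines of Node (a sketch only; the 300-line limit and the src/ layout are assumptions, not rules from this comment):

  // prebuild/disallow-long-source-files.cjs (sketch; the limit is arbitrary)
  const fs = require('fs');
  const path = require('path');

  const MAX_LINES = 300;
  const failures = [];

  function walk(dir) {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      const full = path.join(dir, entry.name);
      if (entry.isDirectory() && entry.name !== 'node_modules') {
        walk(full);
      } else if (/\.(svelte|ts|js)$/.test(entry.name)) {
        const lineCount = fs.readFileSync(full, 'utf8').split('\n').length;
        if (lineCount > MAX_LINES) {
          failures.push(full + ': ' + lineCount + ' lines (max ' + MAX_LINES + ')');
        }
      }
    }
  }

  walk('src');
  if (failures.length > 0) {
    console.error('Overly long source files:\n' + failures.join('\n'));
    process.exit(1); // non-zero exit fails the prebuild step
  }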

If "good code" is far too nebulous of a term to codify like that, then you have a way different and frankly more complex problem on your hands. If there is stuff that the AI constantly gets wrong, you can use CLAUDE.md as suggested elsewhere or even better - add prebuild script rules specifically for it.

Also, a tech stack with typing helps a bunch, by making wrong code harder to even compile/deploy. With TypeScript you get `npm run type-check` (tsc), and that's frankly lovely to have before you even start thinking about test coverage. Of course you should still have tests that check the functionality of what you've made, as usual.
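
Wiring all of this into the build can look like the following in package.json (a sketch: it assumes the prebuild.cjs entry point above and a Vite-based build, and relies on npm running a `pre<name>` script automatically before `<name>`):

  "scripts": {
    "prebuild": "node prebuild/prebuild.cjs",
    "build": "vite build",
    "type-check": "tsc --noEmit"
  }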

robertpiosik yesterday at 10:09 PM

Try the free and open-source VS Code plugin "Code Web Chat".

orwin yesterday at 6:44 PM

I want to say a lot of mean things, because an extremely shitty, useless, clearly Claude-generated test suite passed the team PR review this week. The tests were so useless that the code they covered (I can't say whether the code itself was AI-written) had a race condition that, if triggered and exploited correctly, could probably rewrite the last entry of any of the firewalls we manage (DENY ALL is the one I'm afraid of).

But I can't even shit on Claude, because I used it to rewrite part of the tests and to analyse the solution that fixes the race condition (and how to test it).

It's a good tool, but in the last few weeks I've been more and more mad about it.

Anyway. I use it to generate a shell: no logic inside, just data models and function prototypes. That helps with my inability to start something new. Then I use it to write easy functions, helpers I know I'll need. Then I try to tie everything together. I never hesitate to stop Claude and write specific stuff myself, add a new prototype/function, or delete code. I restart the context often (Opus is less bad about it, but still). Then I ask it about easy refactorings or libraries that would simplify the code. Ask for multiple solutions each time.

daxfohl yesterday at 8:41 PM

For your task, instead of a direct translation, try adding a "distillation" step in between. Have it take the raw format and distill the important parts into YAML or whatever, then take the distillation and translate that into the new format. That way you can muck with the YAML by hand before translating it onward, which should make it easier to keep the intent without the spaghetti getting in the way. Then you can hand-wire any "complexities" into the resulting new code yourself, avoiding the slop it would more likely create.
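
The distilled form might look something like this (purely illustrative; the route and field names are invented):

  route: /projects/[id]
  sections:
    - name: header
      component: ProjectHeader
      data: [project.name, project.owner]
    - name: task-list
      component: TaskTable
      data: [tasks]
      states: [empty, loading, populated]  # formerly boolean template flags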

It may even be worth having it write a parser/evaluator that does these steps in a deterministic fashion. It probably won't work, but it may be worth a shot. As long as it does each translation as a separate step, maybe at least one of them will end up working well enough, and that'll be a huge time saver for that particular task.

Alan01252 yesterday at 5:01 PM

I've been heavily vibe coding a couple of personal projects: a free kids' typing game, and bringing a multiplayer game I played a lot as a kid back to life. Both went pretty well.

Things I personally find work well.

1. Chat through the feature you want to build with the AI first. In Codex using VS Code I always switch to chat mode, talk through what I'm trying to achieve, and then, once the AI and I are in "agreement", switch to agent mode. Google's Antigravity sort of does this by default, and I think it's probably the correct paradigm.

2. Get the basics right first. It's easy for the AI to produce a load of slop, but using my experience of development I feel I am (sort of) able to guide the AI in advance in a similar way to how I would coach junior developers.

3. Get the AI to write tests first. BDD seems to work really well for AI. The multiplayer game I was building regressed frequently with unit tests alone, but when I threw Cucumber into the mix things suddenly got a lot more stable (see the sketch after this list).

4. Practice. The more I use AI, the more I believe prompting is a skill in itself. It takes time to learn how to get the best out of an agent.
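
For what it's worth, the Cucumber-style tests meant in point 3 look roughly like this (a sketch; the feature and steps are invented, not from the actual game):

  Feature: Player movement
    Scenario: Two players see each other move
      Given two players are connected to the same lobby
      When player 1 moves to tile (3, 4)
      Then player 2 sees player 1 at tile (3, 4)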

What I love about AI is the time it gives me to create these things. I'd never have been able to do this before, and I find it very rewarding seeing my "work" being used by my kids and fellow nostalgia-driven gamers.

bpavuk yesterday at 11:59 PM

first off, drop the idea of coding "agents" entirely. the semi-async death valley is not worth it: you will never get into a flow state with an "agent" that takes anything up to an hour to spin, and we have not yet learned how to make true async agents that can run that long while maintaining coherence. OpenAI is the closest in that regard, but they are still at the 20-minute mark, so I am not dropping the quotes for now.

another argument against letting an LLM do the bulk of the job is that it outputs code that's already legacy, and you want to avoid tech debt. for example, Gemini still thinks that Kotlin 2.2 is not out, and hence misses out on context parameters and the latest Swift interoperability goodies. you, a human being, are the only one who will ever have the privilege of learning "at test time", without a separate training process.

replace coding "agents" with search tools. they are still non-deterministic, but hey, both Perplexity and Google AI Mode are good at quick lookups of SvelteKit idioms and whatnot. plus, good old Lighthouse can point out a11y issues - most of them stem from non-semantic HTML. but if you really want to do it without leaving the terminal, I can recommend Gemini CLI with some search-specific prompting. to my knowledge it's the only CLI "agent" that has access to web search. it's slower than Perplexity or even ChatGPT Search, but you can attach anything as context.

this is the true skill of "how to use AI": only use it where it's worth it. and let's be real, if Google Search were not filled with SEO crap, we would not need LLMs.

siscia yesterday at 8:49 PM

I will be crucified for this, but I think you are doing it wrong.

I would split it into two steps.

First, just move it to Svelte: maintain the same functionality, and ideally wrap it in some tests. As mentioned, you want something that can be used as a pass/no-pass filter - as in: yes, the code did not change the functionality.
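
One way to get that pass/no-pass filter is a visual regression test (a sketch assuming Playwright; the route and baseline name are illustrative, with the baseline screenshot captured from the old Django page):

  // tests/projects.spec.ts (illustrative)
  import { test, expect } from '@playwright/test';

  test('projects route matches the legacy rendering', async ({ page }) => {
    await page.goto('/projects');
    // Compares against a stored baseline image; fails on visual drift
    await expect(page).toHaveScreenshot('projects-legacy.png', {
      maxDiffPixelRatio: 0.01,
    });
  });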

Then, apply another pass from bad-quality Svelte to good-quality Svelte. The trick here is that "good quality" is quite subjective and varies between codebases. I've found the models not quite able to grasp what "good quality" means in a given codebase.

For the second pass, ideally you would feed it examples of good modules from your codebase to follow, plus a description of what you think is important.

ipunchghosts yesterday at 6:12 PM

Ask people to do things for you. Then you will learn how to work with someone (or something) who has faults but can be useful overall, if you know how to frame the interaction.

thinkingtoilet yesterday at 5:59 PM

There are very real limitations on AI coders in their current state. They simply do not produce great code most of the time. I have to review every line that it generates.

3vidence today at 12:18 AM

This isn't exactly an answer to your question, but I've seen some efficiency gains from using AI agents to pre-review my PRs and to create tests.

You still get to maintain the core code and keep understandability, but it helps with the tasks that take time and aren't super interesting.

j45 yesterday at 8:32 PM

Follow and learn from people on YouTube who formerly had the same skill level you have now.

cat_plus_plus yesterday at 6:27 PM

AI is great at pattern matching. Set up project instructions that give several examples of old code, new code, and detailed explanations of the choices made. Also add a negative prompt: a list of things you do not want the AI to do, based on past frustrations.
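
For the project in question, one such example pair might look like this (invented to show the shape, not taken from the actual codebase):

  Old (Django template):
    {% if show_archived %}
      <div class="row"><div class="col"><div class="card">...</div></div></div>
    {% endif %}

  New (Svelte):
    {#if view === 'archived'}
      <ArchivedProjects {projects} />
    {/if}

  Why: one composable component per view instead of a boolean flag, and no
  wrapper divs where semantic structure will do.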

dominotw yesterday at 6:18 PM

dont forget to include "pls don't make mistakes"

seg_lol yesterday at 6:08 PM

Voice prompts: restate what you want and how you want it from multiple vantage points. Each one is a light cone in a high-dimensional space; your answer lies in their intersection.

Use mind-altering drugs. Give yourself arbitrary artificial constraints.

Try using it in as many different ridiculous ways as you can. I get the feeling you are only trying one method.

> I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.

Relinquish control.

Also, if you have very particular ways of doing things, give it samples of before and after (your fixed output) and explain why. You can use multishot prompting to train it toward the output you want. Have it machine-check the generated output.

> Simple prompting just isn't able to get AI's code quality within 90%

Would simple instructions to a person work? Especially a person trained on everything in the universe? LLMs are clay; you have to mold them into something useful before you can use them.

morkalork yesterday at 5:26 PM

In addition to what the sibling commenters are saying: set up guardrails for what you expect in your project's documentation. What the agent is allowed to do when writing unit tests vs., say, functional tests; what packages it should never use; coding and style templates; etc.

dboreham yesterday at 5:21 PM

1. Introduce it to the code base (tell it: we're going to work on this project; the project does X and is written in language Y). Ask it to look at the project to familiarize itself.

2. Tell it you want to refactor the code to achieve goal Z. Tell it to take a look and tell you how it will approach this. Consider showing it one example refactor you've already done (before and after).

3. Ask it to refactor one thing (only) and let you look at what it did.

4. Course correct if it didn't do the right thing.

5. Repeat.

halfcat yesterday at 6:15 PM

> prompting just isn't able to get AI's code quality within 90% of what I'd write by hand

Tale as old as time. The expert gets promoted to manager, and the replacement worker can’t deliver even 90% of what the manager used to. Often more like 30% at first, because even if they’re good, they lack years of context.

AI doesn’t change that. You still have to figure out how to get 5 workers who can do 30-70% of what you can do, to get more than 100% of your output.

There are two paths:

1. Externalized speed: be a great manager, accept a surface level understanding, delegate aggressively, optimize for output

2. Internalized speed: be a great individual contributor, build a deep, precise mental model, build correct guardrails and convention (because you understand the problem) and protect those boundaries ruthlessly, optimize for future change, move fast because there are fewer surprises

Only 1 is well suited to agent-like AI building. If 2 is you, you're probably better off chatting to understand, then (mostly) building it yourself.

At least early on. Later, if you nail 2 and have a strong convention for AI to follow, I suspect you may be able to go faster. But it’s like building the railroad tracks before other people can use them to transport more efficiently.

Django itself is a great example of building a good convention. It's just Python, but it's a set of rules everyone can follow. Even then, path 2 looks more like you building out the skeleton and scaffolding. You define how you structure Django apps in the project and how you handle cross-app concerns: are you going to allow cross-app foreign keys in your models? Are you going to use newer features like generated fields (which tend to cause more obscure error messages, in my experience)?

Here’s how I think of it. If I’m building a Django project, the settings.py file is going to be a clean masterpiece. There are specific reasons I’m going to put things in the same app, or separate apps. As soon as someone submits a PR that craps all over the convention I’ve laid out, I’m rejecting aggressively. If we’ve built the railroad tracks, and the next person decides the next set of tracks can use balsa wood for the railroad ties, you can’t accept that.

But generally people let their agent make whatever change it makes and then wonder why trains are flying off the tracks.

swatcoder yesterday at 8:22 PM

> This kind of work seems like a great use case for AI assisted programming

Always check your assumptions!

You might be thinking of it as a good task because it seems like a translation of words from one language to another, and that's one of the classes of language transformations that LLMs can do better than any prior automated tool.

And when we're talking about an LLM translating the gist of some English prose to French, for a human to critically interpret in an informal setting (i.e. not something like diplomacy or law or poetry), it can work pretty well. LLMs introduce errors when doing this kind of thing, but the broader context in which the target prose is used is very forgiving of those errors: the human reader can generally discount what doesn't make sense, redundancy across statements reduces ambiguity or gives insight into intent, the reader may be able to interactively probe for clarifications or validations, the stakes are intentionally low, etc.

And for some kinds of code-to-code transforms, code-focused LLMs can make this work okay too. But there, you need a broader context that's either very forgiving (like the prose translation) or automatically verifiable, so that the LLM can work its way to the right transform through iteration.

But the transform you're trying to do doesn't easily satisfy either of those conditions. You have very strict structural, layout, and design expectations that you want replicated in the new work, and even small "mistranslations" will be visually or sometimes even functionally intolerable. And without something like a graphic or DOM snapshot to verify the output against, you can't take the iterative approach very effectively.

TL;DR: what you're trying to do is not inherently a great use case. It's actually a poor one that can maybe be made workable through expert handling of the tool. That's why you've found it difficult and unnatural.

If your ultimate goal is to improve your expertise with LLMs so that you can apply them to challenging use cases like this, then it's a good learning opportunity, and a lot of the advice in the other comments is great. The key factor is to have some kind of test goal that the tool can use to verify its work until it strikes gold.

On the other hand, if your ultimate goal is just to get your rewrite done efficiently and it's not an enormous volume of code, you probably want to do it yourself or find one of our many now-underemployed humans to help you. Without expertise you don't yet have, and some non-trivial preparatory labor (building verification targets), the tool is not well suited to the work.

bgwalter yesterday at 9:31 PM

Hey, I am bgwalter from the anti-AI industrial complex, which is a $10 trillion industry with a strong lobby in DC.

I would advise you to use Natural Intelligence, which will be in higher demand after the bubble has burst completely (first steps were achieved by Oracle this week).

JackSlateur yesterday at 8:08 PM

You can in a single simple step: don't.

The more you use AI, the more your abilities decrease, and the less you are able to use AI.

This is the law of cheese: the more cheese, the more holes; the more holes, the less cheese; thus, the more cheese, the less cheese.
