Hacker News

Ask HN: How can I get better at using AI for programming?

211 points | by lemonlime227 | yesterday at 3:37 PM | 246 comments

I've been working on a personal project recently, rewriting an old jQuery + Django project into SvelteKit. The main work is translating the UI templates into idiomatic SvelteKit while maintaining the original styling. This includes things like using semantic HTML instead of div-spamming, not wrapping divs in divs in divs, and replacing bootstrap with minimal tailwind. It also includes some logic refactors that keep the original functionality but rewrite it to avoid years of accumulated code debt: things like replacing templates that use boolean flags to switch between multiple views with composable Svelte components.

I've had a fairly steady process for doing this: look at each route defined in Django, build out my `+page.server.ts`, and then split each major section of the page into a Svelte component with a matching Storybook story. It takes a lot of time to do this, since I have to ensure I'm not just copying the template but rather recreating it in a more idiomatic style.
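
For reference, the target shape for each route is roughly this kind of load function (a simplified sketch with made-up names):

  // Simplified sketch of one route's +page.server.ts (names are made up)
  import type { PageServerLoad } from './$types';

  export const load: PageServerLoad = async ({ fetch, params }) => {
    // Fetch the same data the old Django view rendered into its template
    const res = await fetch(`/api/projects/${params.id}`);
    const project = await res.json();
    return { project };
  };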

This kind of work seems like a great use case for AI-assisted programming, but I've failed to use it effectively. At best, I can only get Claude Code to recreate slightly less spaghettified code in Svelte. Simple prompting just isn't able to get the AI's code quality within 90% of what I'd write by hand. Ideally, AI could get its code to something I could review manually in 15-20 minutes, which would massively cut down the time spent on this project (right now it takes me 1-2 hours to properly translate a route).

Do you guys have tips or suggestions on how to improve my efficiency and code quality with AI?


Comments

bcherny | yesterday at 6:15 PM

Hey, Boris from the Claude Code team here. A few tips:

1. If there is anything Claude tends to repeatedly get wrong, not understand, or spend lots of tokens on, put it in your CLAUDE.md. Claude automatically reads this file and it’s a great way to avoid repeating yourself. I add to my team’s CLAUDE.md multiple times a week.

2. Use Plan mode (press shift-tab 2x). Go back and forth with Claude until you like the plan before you let Claude execute. This easily 2-3x’s results for harder tasks.

3. Give the model a way to check its work. For svelte, consider using the Puppeteer MCP server and tell Claude to check its work in the browser. This is another 2-3x.

4. Use Opus 4.5. It’s a step change from Sonnet 4.5 and earlier models.

Hope that helps!

show 18 replies
bogtog | yesterday at 5:13 PM

Using voice transcription is nice for fully expressing what you want, so the model doesn't need to make guesses. I'm often voicing 500-word prompts. If you talk in a winding way that would look awkward in text, that's fine; the model will almost certainly be able to tell what you mean. Using voice-to-text is my biggest suggestion for people who want to use AI for programming.

(I'm not a particularly slow typer. I can go 70-90 WPM on a typing test. However, this speed drops quickly when I need to also think about what I'm saying. Typing that fast is also kinda tiring, whereas talking/thinking at 100-120 WPM feels comfortable. In general, I think just this lowered friction makes me much more willing to fully describe what I want)

You can also ask it, "do you have any questions?" I find that saying "if you have any questions, ask me, otherwise go ahead and build this" rarely produces questions for me. However, if I say "Make a plan and ask me any questions you may have" then it usually has a few questions

I've also found a lot of success when I tell Claude Code to emulate some specific piece of code I've previously written, either within the same project or something I've pasted in.

show 8 replies
theahura | yesterday at 10:55 PM

Soft plug: take a look at https://github.com/tilework-tech/nori-profiles

I've spent the last ~4 months figuring out how to make coding agents better, and it's really paid off. The configs at the link above make claude code significantly better, passively. It's a one-shot install, and it may just be able to one-shot your problem, because it does the hard work of 'knowing how to use the agents' for you. Would love to know if you try it out and have any feedback.

(In case anyone is curious, I wrote about these configs and how they work here: https://12gramsofcarbon.com/p/averaging-10-prs-a-day-with-cl...

and I used those configs to get to the top of HN with SpaceJam here: https://news.ycombinator.com/item?id=46193412)

show 1 reply
Frannky | yesterday at 5:10 PM

I see LLMs as searchers with the ability to change the data a little and stay in a valid space. If you think of them as searchers, it becomes automatic to make the search easy (small context, small precise questions), and you won't keep trying again and again if the code isn't working (no data in the training set). Also, you will realize that if a language is not well represented in the training data, they may not work well.

The more specific and concise you are, the easier it will be for the searcher. Also, the less modification, the better, because the more you try to move away from the data in the training set, the higher the probability of errors.

I would do it like this:

1. Open the project in Zed

2. Add the Gemini CLI, Qwen Code, or Claude to the agent system (use Gemini or Qwen if you want to do it for free, or Claude if you want to pay for it)

3. Ask it to correct a file (if the files are huge, it might be better to split them first)

4. Test if it works

5. If not, try feeding the file and the request to Grok or Gemini 3 Chat

6. If nothing works, do it manually

If instead you want to start something new, one-shot prompting can work pretty well, even for large tasks, if the data is in the training set. Ultimately, I see LLMs as a way to legally copy the code of other coders more than anything else

show 1 reply
serial_dev | yesterday at 5:39 PM

Here’s how I would do this task with cursor, especially if there are more routes.

I would open a chat and refactor the template together with cursor: I would tell it what I want and if I don’t like something, I would help it to understand what I like and why. Do this for one route and when you are ready, ask cursor to write a rules file based on the current chat that includes the examples that you wanted to change and some rationale as to why you wanted it that way.

Then in the next route, you can basically just say refactor and that’s it. Whenever you find something that you don’t like, tell it and remind cursor to also update the rules file.

show 1 reply
justatdotin | yesterday at 8:08 PM

what really got me moving was dusting off some old text about cognitive styles and teamwork. Learning to treat agents like a new team member with extreme tendencies. Learning to observe both my practices and the agents' in order to understand one another's strengths and weaknesses, indicating how we might work better together.

I think this perspective also goes a long way to understanding the very different results different devs get from these tools.

my main approach to quality is to focus agent power on all the code whose beauty I do not care about: problems with verifiable solutions, experiments, disposable computation. e.g. my current projects are build/deploy tools, and I need sample projects to build/deploy. I never even reviewed the sample projects' code, so long as they hit the points we are testing.

svelte does not really resonate with me, so I don't know it well, but I suspect there should be good opportunities for TDD in this rewrite. not the project unit tests, just disposable test scripts that guide and constrain new dev work.

you are right to notice that it is not working for you, and at this stage sometimes the correct way to get in sync with the agents is to start again, without previous missteps to poison the workspace. There's good advice in this thread, you might like to experiment with good advice on a clean slate.

show 1 reply
dboon | yesterday at 9:01 PM

AI programming, for me, is just a few simple rules:

1. True vibe coding (one-shot, non-trivial, push to master) does not work. Do not try it.

2. Break your task into verifiable chunks. Work with Claude to this end.

3. Put the entire plan into a Markdown file; it should be as concise as possible. You need a summary of the task; individual problems to solve; references to files and symbols in the source code; a work list, separated by verification points. Seriously, less is more.

4. Then, just loop: Start a new session. Ask it to implement the next phase. Read the code, ask for tweaks. Commit when you're happy.

Seriously, that's it. Anything more than that is roleplaying. Anything less is not engineering. Keep a list in the Markdown file of amendments; if it keeps messing the same thing up, add one line to the list.

To hammer home the most important pieces:

- Less is more. LLMs are at their best with a fresh context window. Keep one file. Something between 500 and 750 words (checking a recent one, I have 555 words / 4276 characters). If that's not sufficient, the task is too big.

- Verifiable chunks. It must be verifiable. There is no other way. It could be unit tests; print statements; a tmux session. But it must be verifiable.
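
For example, a verification point can be as small as a throwaway test the agent has to keep green. A quick sketch using vitest (formatRoute here is just a hypothetical helper):

  // Throwaway check for one chunk (vitest; formatRoute is hypothetical)
  import { describe, expect, it } from 'vitest';
  import { formatRoute } from '../src/lib/formatRoute';

  describe('formatRoute', () => {
    it('keeps the legacy URL scheme', () => {
      expect(formatRoute('projects', 42)).toBe('/projects/42');
    });
  });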

show 3 replies
nextaccountic | yesterday at 10:48 PM

About Svelte, on the svelte subreddit it was reported that GPT 5.2 is better at Svelte, perhaps because it has a more recent knowledge cutoff

But anyway you should set up the Svelte MCP

show 1 reply
rdrd | yesterday at 4:00 PM

First you have to be very specific with what you mean by idiomatic code - what’s idiomatic for you is not idiomatic for an LLM. Personally I would approach it like this:

1) Thoroughly define step-by-step what you deem to be the code convention/style you want to adhere to, and the steps for how you (it) should approach the task. Do not reference entire files like “produce it like this file”; it’s too broad. The document should include simple, small examples of “Good” and “Bad” idiomatic code as you deem it. The smaller the initial step-by-step guide and code conventions the better: context is king with LLMs, and you need to give it just enough context to work with but not so much that it causes confusion.

2) Feed it to Opus 4.5 in planning mode and ask it to follow up with any questions or gaps and have it produce a final implementation plan.md. Review this, tweak it, remove any fluff and get it down to bare bones.

3) Run the plan.md through a fresh Agentic session and see what the output is like. Where it’s not quite correct add those clarifications and guardrails into the original plan.md and go again with step 3.

What I absolutely would NOT do is ask for fixes or changes if it does not one-shot it after the first go. I would revise plan.md to get it into a state where it gets you 99% of the way there in the first go and just do final cleanup by hand. You will bang your head against the wall attempting to guide it like you would a junior developer (at least for something like this).

show 1 reply
bikeshaving | yesterday at 9:32 PM

You know when Claude Code for Terminal starts scroll-looping and doom-scrolling through the entire conversation in an uninterruptible fashion? Just try reading as much of it as you can. It strengthens your ability to read code in an instant and keeps you alert. And if people watch you pretend to understand your screen, it makes you look like a mentat.

It’s actually a feature, not a bug.

show 1 reply
simonw | yesterday at 10:58 PM

For this particular project I suggest manually porting one section of the app, committing that change, and then telling Claude "look at commit HASH first, now port feature X in the same style".

jdelsman | yesterday at 9:06 PM

My favorite set of tools to use with Claude Code right now: https://github.com/obra/superpowers

1. Start with the ‘brainstorm’ session where you explain your feature or the task that you're trying to complete.

2. Allow it to write up a design doc, then an implementation plan - both saved to disk - by asking you multiple clarifying questions. Feel free to use voice transcription for this because it is probably as good as typing, if not better.

3. Open up a new Claude window and then use a git worktree with the Execute Plan command. This will essentially build out in multiple steps, committing after about three tasks. What I like to do is to have it review its work after three tasks as well so that you get easier code review and have a little bit more confidence that it's doing what you want it to do.

Overall, this hasn't really failed me yet and I've been using it now for two weeks and I've used about, I don't know, somewhere in the range of 10 million tokens this week alone.

whatever1 | yesterday at 9:45 PM

For me, what vastly improved the usefulness when working with big JSON responses was to install jq on my system and tell the LLM to use jq to explore the JSON, instead of trying to ingest it all at once. For other things I explicitly ask it to write a script to achieve something instead of doing it directly.
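
For example, something along these lines instead of pasting the whole payload into the prompt (response.json and its fields are hypothetical):

  // Sketch: explore a big JSON payload with jq instead of ingesting it whole
  // (assumes jq is installed; response.json and its fields are hypothetical)
  import { execSync } from 'node:child_process';

  // Look at the top-level shape first
  const keys = execSync(`jq 'keys' response.json`, { encoding: 'utf8' });

  // Then pull out only the fields that matter
  const slim = execSync(`jq '[.items[] | {id, name}]' response.json`, { encoding: 'utf8' });

  console.log(keys, slim);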

show 1 reply
vaibhavgeek | yesterday at 7:06 PM

This may sound strange but here is how I define my flow.

1. Switch off your computer.

2. Go to a nice Park.

3. Open notebook and pen, and write prompts that are 6-8 lines long on what task you want to achieve, use phone to google specific libraries.

4. Come back to your PC, type those prompts in with Plan mode and ask for exact code changes claude is going to make.

5. Review and push PR.

6. Wait for your job to be automated.

firefax | yesterday at 5:50 PM

How did you learn how to use AI for coding? I'm open to the idea that a lot of "software carpentry" tasks (moving/renaming files, basic data analysis, etc.) can be done with AI to free up time for higher level analysis, but I have no idea where to begin -- my focus many years ago was privacy, so I lean towards doing everything locally or hosted on a server I control, and I lack a lot of the knowledge of "the cloud" my HN brethren have.

show 3 replies
bulletsvshumans | yesterday at 8:13 PM

Try specification-driven development with something like speckit [0]. It helps tremendously by facilitating a process around gathering requirements, doing research, planning, breaking work into tasks, and finally implementing. Much better than having a coding agent go straight to coding.

[0] - https://github.com/github/spec-kit

cardanome | yesterday at 8:20 PM

Honestly if your boss does not force you to use AI, don't.

Don't feel like you might get "left behind". LLM assisted development is still changing rapidly. What was best practice 6 months ago is irrelevant today. By being an early adopter you will just learn useless workarounds that might soon not be necessary to know.

On the other hand, coding "by hand" will keep your skills sharp. You will protect yourself against the negative mental effects of using LLMs: skill decline, general decline of mental capacity, the danger of developing psychosis because of the sycophantic nature of LLMs, and so on.

LLM-based coding tools are only getting easier to use, and if you actually know how to code and know software architecture, you will be able to easily integrate LLM-based workflows and deliver far superior results compared to someone who spent their years vibe coding, even if you picked up Claude Code or whatever just a month ago. No need for FOMO.

show 1 reply
sdn90 | yesterday at 8:24 PM

Go into planning mode and plan the overall refactor. Try to break the tasks down into things that you think will fit into a single context window.

For mid sized tasks and up, architecture absolutely has to be done up front in planning mode. You can ask it questions like "what are some alternatives?", "which approach is better?".

If it's producing spaghetti code, can you explain exactly what it's doing wrong? If you have an idea of what the ideal solution should look like, it's not too difficult to guide the LLM to it.

In your prompt files, include bad and good examples. I have prompt files for API/interface design, comment writing, testing, etc. Some topics I split into multiple files like criteria for testing, testing conventions.
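
To illustrate, a prompt-file entry might pair a bad and a good snippet roughly like this (hypothetical, in the spirit of the boolean-flag views the OP mentions):

  // Bad: boolean flags decide which view renders, and the combinations multiply
  type BadViewProps = { isEditing: boolean; isReadOnly: boolean; showSummary: boolean };

  // Good: a discriminated union makes each view explicit and composable
  type ViewState =
    | { mode: 'edit'; draftId: string }
    | { mode: 'readonly' }
    | { mode: 'summary'; totals: number[] };

  function describeView(view: ViewState): string {
    switch (view.mode) {
      case 'edit':
        return `Editing draft ${view.draftId}`;
      case 'readonly':
        return 'Read-only view';
      case 'summary':
        return `Summary of ${view.totals.length} rows`;
    }
  }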

I've found the prompts where they go "you are a X engineer specializing in Y" don't really do much. You have to break things down into concrete instructions.

__mharrison__ | yesterday at 8:12 PM

I have a whole workflow for coding with agents.

Get very good at context management (updating AGENTS.md, starting new session, etc).

Embrace TDD. It might have been annoying when Extreme Programming came out 25 years ago, but now that agents can type a lot faster than us, it's an awesome tool for putting guardrails around the agent.

(I teach workshops on best practices for agentic coding)

nmaley | yesterday at 8:40 PM

I use Claude. It's really good, but you should try to use it as Boris suggests. The other thing I do is give it very careful and precisely worded specs for what you want it to do. I have the habit, born from long experience, of never assuming that junior programmers will know what you want the program to do unless you make it explicit. Claude is the same. LLM code generators are terrific, but they can't second guess unclear communication.

Using carefully written specs, I've found Claude will produce flawless code for quite complex problems. It's magic.

asgraham | yesterday at 9:30 PM

Lots of good suggestions. However for Svelte in particular I’ve had a lot of trouble. You can get good results as long as you don’t care about runes and Svelte 5. It’s too new, and there’s too much good Svelte code out there used in training that doesn’t use Svelte 5. If you want AI generated Svelte code, restricting yourself to <5 is going to improve your results.

(YMMV: this was my experience as of three or four months ago)

daxfohl | yesterday at 6:37 PM

Go slowly. Shoot for a 10% efficiency improvement, not 10x. Go through things as thoroughly as if writing by hand, and don't sacrifice quality for speed. Be aware of when it's confidently taking you down a convoluted path and confidently making up reasons to do so. Always have your skeptic hat on. If something seems off, it probably is. When in doubt, exit the session and start over.

I still find the chat interface generally more useful than a coding assistant. It allows you to think and discuss at a higher level about architecture and ideas before jumping into implementation. The feedback loop is way faster because it is higher level and it doesn't have to run through your source tree to answer a question. You can have a high-ROI discussion of ideas, architecture, algorithms, and code before committing to anything. I still do most of my work copying and pasting from the chat interface.

Agents are nice when you have a very specific idea in mind, but I'm not yet hugely fond of them otherwise. IME the feedback loop is too long, they often do things badly, and they are overly confident in their output, encouraging cursory reviews and commits of hacked-together work. Sometimes I'll give it an ambitious task just on the off chance that it'll succeed, but with the understanding that if it doesn't get it right the first time, I'll either throw it away completely, or just keep whatever pieces it got right and pitch the rest; it almost never gets it right the second time if it's already started on an ugly approach.

But the main thing is to start small. Beyond one-shotting prototypes, don't expect it to change everything overnight. Focus on the little improvements, don't skip design, and don't sacrifice quality! Over time, these things will add up, and the tools will get better too. A 10% improvement every month gets to be a 10x improvement in (math...). And you'll be a lot better positioned than those who tried to jump onto the 10x train too fast because you'll not have skipped any steps.

show 1 reply
rokoss21 | yesterday at 9:12 PM

The key insight most people miss: AI isn't a code generator, it's a thinking partner. Start by defining the problem precisely in plain English before asking it to code. Use it for refactoring and explaining existing code rather than generating from scratch. That's where you get the 10x gains.

Also, treat bad AI suggestions as learning opportunities - understand why the code is wrong and what it misunderstood about your requirements.

mirsadm | yesterday at 5:10 PM

I break everything down into very small tasks. Always ask it to plan how it will do it. Make sure to review the plan and spot mistakes. Then only ask it to do one step at a time so you can control the whole process. This workflow works well enough as long as you're not trying to do anything too interesting. Anything which is even a little bit unique it fails to do very well.

show 1 reply
realberkeaslan | yesterday at 3:54 PM

Consider giving Cursor a try. I personally like the entire UI/UX, their agent has good context, and the entire experience overall is just great. The team has done a phenomenal job. Your workflow could look something like this:

1. Prompt the agent

2. The agent gets to work

3. Review the changes

4. Repeat

This can speed up your process significantly, and the UI clearly shows the changes + some other cool features

EDIT: from reading your post again, I think you could benefit primarily from a clear UI with the adjusted code, which Cursor does very well.

show 3 replies
caseyw | yesterday at 6:31 PM

The approach I’ve been taking lately with general AI development:

1. Define the work.

2. When working in a legacy code base provide good examples of where we want to go with the migration and the expectation of the outcome.

3. Tell it about what support tools you have, lint, build, tests, etc.

4. Select a very specific scenario to modify first and have it write tests for the scenario.

5. Manually read and tweak the tests, ensure they’re testing what you want, and they cover all you require. The tests help guardrail the actual code changes.

6. Depending upon how full the context is, I may create a new chat and then pull in the test, the defined work, and any related files and ask it to implement based upon the data provided.

This general approach has worked well for most situations so far. I’m positive it could be improved so any suggestions are welcome.

johnsmith1840 | yesterday at 8:29 PM

A largely undiscussed part of using AI for code is that it's actually neither easy nor intuitive to learn how to get maximum effectiveness out of your AI's output.

I think there's a lot of value in using AIs that are dumb, to learn what they fail at. The methods I learned using GPT-3.5 for daily work still translate over to the most modern AI work. It's easier to understand what makes AI fail on a function or two than to understand that across entire projects.

My main tips:

1. More input == lower quality

Simply put, the more tightly you can focus your input data on the result you want, the higher quality you will get.

For example on very difficult problems I will not only remove all comments but I will also remove all unrelated code and manually insert it for maximum focus.

Another way to describe this is compute over problem space. You are capped in compute so you must control your problem space.

2. AI output is a reflection of input tokens and therefore yourself.

If you don't know what you're doing in a project or are mentally "lazy", AI will fail with death by a thousand cuts. The absolute best use of AI is knowing EXACTLY what you want and describing it in as few words as possible. I notice directly that if I feel lazy or tired on a given day and rely heavily on the model, I will often have to revert entire days of work due to terrible design.

3. Every bad step, whether from the AI's results or your own design, compounds problems as you continue.

It's very difficult to know the limits of current AI methods. You should not be afraid of reverting and removing large amounts of work. If you find it failing heavily and repeatedly, that is a good sign your design is bad or you are asking too much of it. Continuing on that path reduces quality. You could end up in circular debugging loops where every fix or update adds even more problems. It's far better practice to drop the entire set of updates and restart with smaller, step-by-step actions.

4. Trust AI output like you would a Stack Overflow response or a Medium article.

Maybe its output would work in some way, but it has a good chance of not working for you. Repeatedly asking the same question differently or from different angles is very helpful, the same way debugging via Stack Overflow meant trying multiple suggestions to discover the real problem.

show 1 reply
twodave | yesterday at 9:13 PM

I’ve been doing a rewrite of some file import type stuff, using a new common data model for storage, and I’ve taken to basically pasting in the old code, commented out and telling it to fill the new object using the commented out content as a guide. This probably got me 80% of the way? Not perfect, but I don’t think anything really is.

PostOnce | yesterday at 11:50 PM

If anyone knew the answer to this question, Anthropic would be profitable.

Currently they project they might break even in 2028.

That means that right now, every time you ask an AI a question, someone loses money.

That of course means no-one knows if you can get better at AI programming, and the answer may be "you can't."

Only time will tell.

show 1 reply
rr808 | yesterday at 9:20 PM

It's super frustrating that there is no official guide. I hear lots of suggestions all the time and who knows if they help or not. The best one recently is to tell the LLM to "act like a senior dev"; surely that is expected by default? Crazy times.

show 1 reply
Fire-Dragon-DoL | yesterday at 6:56 PM

I find all AI code to be lower quality than code from humans who care about quality. This might be OK; I think the assumption with AI is that we don't need to make code look beautiful, because the AI will be the one looking at it.

show 2 replies
benzguo | yesterday at 8:26 PM

Planning! I actually prefer DIY planning prompt + docs, not planning mode. Wrote this article about it today actually: https://0thernet.substack.com/p/velocity-coding

coryvirok | yesterday at 5:44 PM

The hack for SvelteKit specifically is to first have Claude translate the existing code into a Next.js route with React components. Run it, debug and tweak it. Then have Claude translate the Next.js and React components into SvelteKit/Svelte. Try to keep it in a single file for as long as possible and only split it out once it's working.

I've had very good results with Claude Code using this workflow.

michelsedgh | yesterday at 8:02 PM

I think you shouldn't overthink it: the more you use it, the better you will understand how it can help you. Most of the gains will come from the models taking leaps forward and from staying up to date with the best model for your use case.

hurturue | yesterday at 5:54 PM

I did a similar thing.

put an example in the prompt: this was the original Django file and this is the rewritten SvelteKit version.

then ask it to convert another file using the example as a template.

you will need to add additional rules for stuff not covered by the example; after 2-3 conversions you'll have the most important rules.

or maybe fix a bad attempt by the agent and add it as a second example

nisalperi | yesterday at 6:16 PM

I wrote about my experience from the last year. Hope you find this helpful

https://open.substack.com/pub/sleuthdiaries/p/guide-to-effec...

8cvor6j844qw_d6 | yesterday at 5:09 PM

I find Claude Code works best when given a highly specific and scoped task. Even then, sometimes you'll need to course-correct it once you notice it's going off track.

Basically a good multiplier, and an assistant for mundane tasks, but not a replacement. It still requires the user to have a good understanding of the codebase.

Having it write change summaries for commit logs is amazing, however, if you're required to do that.

Galorious | yesterday at 8:12 PM

Did you use the /init command in Claude Code at the start?

That builds the main claude.md file. If you don't have that file, CC starts each new session completely oblivious to your project, like a blank slate.

noiv | yesterday at 6:29 PM

I learned the hard way that when Claude has two conflicting pieces of information in Claude.md, it tends to ignore both. So precise language is key; don't use terms like 'object', which may have different meanings in different fields.

bpavuk | yesterday at 11:59 PM

first off, drop the idea of coding "agents" entirely. semi-async death valley is not worth it, you will never get into the flow state with an "agent" that takes less than an hour to spin, and we did not learn how to make true async agents that run for this long while maintaining coherence yet. OpenAI is the closest in that regard, but they are still at a 20-minute mark, so I am not dropping the quotes for now.

another argument against letting LLM do the bulk of the job is that they output code that's already legacy, and you want to avoid tech debt. for example, Gemini still thinks that Kotlin 2.2 is not out, hence misses out on context parameters and latest Swift interoperability goodies. you, a human being, are the only one who will ever have the privilege of learning "at test time", without separate training process.

replace coding "agents" with search tools. they are still non-deterministic, but hey, both Perplexity and Google AI Mode are good at quick lookup of SvelteKit idioms and whatnot. plus, good old Lighthouse can point out a11y issues - most of them stem from non-semantic HTML. but if you really want to do it without leaving a terminal, I can recommend Gemini CLI with some search-specific prompting. it's the only CLI "agent" that has access to the web search to my knowledge. it's slower than Perplexity or even ChatGPT Search, but you can attach anything as a context.

this is the true skill of "how to use AI" - only use it where it's worth it. and let's be real, if Google Search was not filled with SEO crap, we would not need LLMs.

KronisLV | today at 12:13 AM

Automated code checks. Either custom linter rules (like ESLint) or prebuild scripts to enforce whatever architectural or style rules you want: basically all of the stuff that you'd normally flag in code review that could be codified into an automatic check but hasn't been, because developers either didn't find it worth their time, or didn't have enough time or skill to do it. Use the AI to write as many of these as needed, just like:

  node prebuild/prebuild.cjs
which will then run all the other checks you've defined like:

  prebuild/ensure-router-routes-reference-views-not-regular-components.cjs
  prebuild/ensure-custom-components-used-instead-of-plain-html.cjs
  prebuild/ensure-branded-colors-used-instead-of-tailwind-ones.cjs
  prebuild/ensure-eslint-disable-rules-have-explanations.cjs
  prebuild/ensure-no-unused-translation-strings-present.cjs
  prebuild/ensure-pinia-stores-use-setup-store-format.cjs
  prebuild/ensure-resources-only-called-in-pinia-stores.cjs
  prebuild/ensure-api-client-only-imported-in-resource-files.cjs
  prebuild/ensure-component-import-name-matches-filename.cjs
  prebuild/disallow-deep-component-nesting.cjs
  prebuild/disallow-long-source-files.cjs
  prebuild/disallow-todo-comments-without-jira-issue.cjs
  ...
and so on. You might have tens of these over the years of working on a project, plus you can write them for most things that you'd conceivably want in "good code". Examples above are closer to a Vue codebase but the same principles apply to most other types of projects out there - many of those would already be served by something like ESLint (you probably want the recommended preset for whatever ruleset exists for the stack you work with), some you'll definitely want to write yourself. And that is useful regardless of whether you even use AI or not, so that by the time code is seen by the person doing the review, hopefully all of those checks already pass.
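
To make that concrete, a single check can be a tiny script that walks the source tree and exits non-zero on violations. A sketch of one such rule (written here in TypeScript for readability rather than as a .cjs file, with a hypothetical src/ layout; the TODO/issue-key rule is just an example):

  // Sketch of one prebuild check: every TODO comment must reference an issue key
  // (hypothetical src/ layout; the real checks above are plain .cjs scripts)
  import { readdirSync, readFileSync, statSync } from 'node:fs';
  import { join } from 'node:path';

  function walk(dir: string): string[] {
    return readdirSync(dir).flatMap((name) => {
      const full = join(dir, name);
      return statSync(full).isDirectory() ? walk(full) : [full];
    });
  }

  const offenders = walk('src')
    .filter((file) => file.endsWith('.ts') || file.endsWith('.svelte'))
    .flatMap((file) =>
      readFileSync(file, 'utf8')
        .split('\n')
        .map((text, i) => ({ file, line: i + 1, text }))
        .filter((l) => l.text.includes('TODO') && !/[A-Z]+-\d+/.test(l.text)),
    );

  for (const o of offenders) {
    console.error(`${o.file}:${o.line} TODO without an issue reference`);
  }
  if (offenders.length > 0) process.exit(1);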

If "good code" is far too nebulous of a term to codify like that, then you have a way different and frankly more complex problem on your hands. If there is stuff that the AI constantly gets wrong, you can use CLAUDE.md as suggested elsewhere or even better - add prebuild script rules specifically for it.

Also, a tech stack with typing helps a bunch - making wrong code harder to even compile/deploy. Like, with TypeScript you get npm run type-check (tsc) and that's frankly lovely to be able to do, before you even start thinking about test coverage. Ofc you still should have tests that check the functionality of what you've made too, as usual.

helterskelter | yesterday at 5:36 PM

I like to followup with "Does this make sense?" or similar. This gets it to restate the problem in its own words, which not only shows you what its understanding of the problem is, it also seems to help reinforce the prompt.

robertpiosik | yesterday at 10:09 PM

Try a free and open-source VS Code plugin "Code Web Chat".

owlninja | yesterday at 5:50 PM

Would love to hear any feedback on using Google's Antigravity from a clean slate. Holiday shutdown is about to start at my job and I want to tinker with something that I have not even started.

BobbyTables2 | yesterday at 11:26 PM

Don’t!

show 1 reply
3vidence | today at 12:18 AM

This isn't exactly an answer to your question but I've experienced some efficiency gains in using AI agents for pre-reviewing my PRs and getting it to create tests.

You still get to maintain the core code and maintain understandability, but it helps with the tasks that take time but aren't super interesting.

daxfohl | yesterday at 8:41 PM

For your task, instead of a direct translation, try adding a "distillation" step in between. Have it take the raw format and distill the important parts to yaml or whatever, then take the distillation and translate that into the new format. That way you can muck with the yaml by hand before translating it back, which should make it easier to keep the intent without the spaghetti getting in the way. Then you can hand-wire any "complexities" into the resulting new code by hand, avoiding the slop it would more likely create.
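
For example, the distilled intermediate for one route might capture no more than something like this (hypothetical, shown as a TypeScript object, though YAML works just as well):

  // Hypothetical "distilled" representation of one route, stripped of template spaghetti
  interface DistilledRoute {
    path: string;
    sections: { name: string; fields: string[]; conditionalOn?: string }[];
  }

  const projectDetail: DistilledRoute = {
    path: '/projects/:id',
    sections: [
      { name: 'header', fields: ['title', 'owner'] },
      { name: 'budget', fields: ['total', 'spent'], conditionalOn: 'user.is_staff' },
    ],
  };

  console.log(JSON.stringify(projectDetail, null, 2));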

It may even be worth having it write a parser/evaluator that does these steps in a deterministic fashion. Probably won't work, but maybe worth a shot. So long as it does each translation as a separate step, maybe at least one of them will end up working well enough, and that'll be a huge time saver for that particular task.

orwin | yesterday at 6:44 PM

I want to say a lot of mean things, because an extremely shitty, useless, clearly Claude-generated test suite passed the team PR review this week. The tests were useless, so useless that the code they were linked to (can't say if the code itself was AI-written, though) had a race condition that, if triggered and used correctly, could probably rewrite the last entry of any of the firewalls we manage (the DENY ALL entry is the one I'm afraid about).

But I can't even shit on Claude AI, because I used it to rewrite part of the tests, and analyse the solution to fix the race condition (and how to test it).

It's a good tool, but in the last few weeks I've been more and more mad about it.

Anyway. I use it to generate a shell. No logic inside, just data models and function prototypes. That helps with my inability to start something new. Then I use it to write easy functions, helpers I know I'll need. Then I try to tie everything together. I never hesitate to stop Claude and write specific stuff myself, add a new prototype/function, or delete code. I restart the context often (Opus is less bad about it, but still). Then I ask it about easy refactorings or libraries that would simplify the code. I ask for multiple solutions each time.

siscia | yesterday at 8:49 PM

I will be crucified for this, but I think you are doing it wrong.

I would split it in 2 steps.

First, just move it to Svelte, maintain the same functionality, and ideally wrap it in some tests. As mentioned, you want something that can be used as a pass/no-pass filter, as in: yes, the code did not change the functionality.

Then, apply another pass from Svelte bad quality to Svelte good quality. Here the trick is that "good quality" is quite different and subjective. I found the models not quite able to grasp what "good quality" means in a codebase.

For the second pass, ideally you would feed an example of good modules in your codebase to follow and a description of what you think it is important.
