Hacker News

Ask HN: Why is my Claude experience so bad? What am I doing wrong?

46 points | by moomoo11 | last Friday at 8:09 AM | 70 comments

I stopped my CC Max plan a few months ago, but I'm trying it again for fun after seeing their $30 billion series G or whatever.

It just doesn't work. I'm trying to build a simple tool that will let me visualize grid layouts.

It needs to toggle between landscape/portrait, and implement some design strategies so I can see different visualizations of the grid. I asked it to give me a slider to simulate the number of grids.

1st pass, it made something, but it was squished. And toggling between landscape and portrait made it so it squished itself the other way so I couldn't even see anything.

2nd pass, syntax error.

3rd try I ask it to redo everything from scratch. It now has a working slider, but the landscape/portrait is still broken.

4th try, it manages to fix the landscape/portrait issue, but now the issue is that the controls are behind the display so I have to reload the page.

5th try, it manages to fix this issue, but now it is squished again.

6th try, I ask it to try again from scratch. This time it gives me a syntax error.

This is so frustrating.


Comments

kpil yesterday at 10:17 PM

The truth is that there is a lot of hype.

You need to be reasonably experienced and guide it.

First, you need to know that Claude will create nonsensical code. On a macro level it's not exactly smart; it just has a lot of static contextual knowledge.

Debugging is not its strongest skill. Most models don't do well at it at all. Opus is able to one-shot "troubleshooting" prompts occasionally, but there's a high probability it veers off on a tangent if you just tell it to "fix things" based on errors or descriptions. You need to have an idea of what you want fixed.

Another problem is that it can create very convincing looking - but stupid - code. If you can't guide it, that's almost guaranteed. It can create code that's totally backwards and overly complicated.

If it IS going on a wrong tangent, it's often hopeless to get it back on track. The conversation and context might be polluted. Restart and reframe the prompt and the problems at hand and try again.

I'm not totally sure which language you are using, but syntax errors typically happen when it "forgets" to update some of the code, and very seldom from just a single file or edit.

I like to create a design.md and think a bit on my own, or maybe prompt to create it from a high-level problem statement to get going, and make sure it's in the context (and mentioned in the prompts).
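
For illustration, a design.md seed for a tool like the OP's might look like this (all names and requirements here are invented, not from the thread):

```markdown
# Grid Visualizer — design notes (hypothetical example)

## Goal
A single-page tool to preview grid layouts.

## Requirements
- Toggle between landscape and portrait without clipping the grid
- A slider controls the number of grid cells
- Controls stay visible above the display at all times

## Non-goals
- Persistence, export, or multi-user features

## Open questions
- Should cell aspect ratio be fixed, or follow the viewport?
```

Even a short file like this gives the model something stable to check its work against between turns.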

Leftium last Friday at 10:06 AM

It seems to me you expect Claude to be able to one-shot your tool from a single prompt. Potentially "vibe-coding" in the sense that you don't know how to develop this yourself (perhaps you are not a software developer?).

While this may be possible, it likely requires a very detailed prompt and/or spec document.

---

Here is an example of something I successfully built with Claude: https://rift-transcription.vercel.app

Apparently I have had over 150 chat sessions related to the research and development of this tool.

- First, we wrote a spec together: https://github.com/Leftium/rift-transcription/blob/main/spec...

- The spec broke down development into major phases. I reviewed detailed plans for each phase before Claude started. I often asked Claude to update these detailed plans before starting. And after implementation, I often had to have Claude fix bugs in the implementation.

- I tried to share the chat session where Claude got the first functional MVP working: https://opncd.ai/share/fXsPn1t1 (unfortunately the shared session is truncated)

---

"AI mistakes you're probably making": https://youtu.be/Jcuig8vhmx4

I think the most relevant point is: AI is best for accelerating development tasks you could do on your own; not new tasks you don't know how to do.

---

Finally: Cloudflare builds OAuth with Claude and publishes all the prompts: https://hw.leftium.com/#/item/44159166

cladopa yesterday at 10:15 PM

What you are trying to do is quite easy to do with Claude. I have done way more complex things than that in hours. But having programming, managing (with humans), and engineering experience is extremely useful.

It seems you try to tell the tool to do everything in one shot. That is a very wrong approach, not just with Claude but with everything (you ask a woman for a date, and if you do not get laid in five minutes, you failed?). When I program something manually and it compiles, I expect it to be wrong. You have to iron it out and debug it.

Instead of that:

1. Divide the work into independent units. I call these "steps".

2. Subdivide steps into "subsets". You work in an isolated manner on those subsets.

3. Use an immediate-mode GUI library like Dear ImGui to prototype your tool. Translating it into something else once it works is quite easy for LLMs.

4. Visualize everything. You do not need to see the code, but you need to visualize every single thing you ask it to do.

5. Tell Claude what you want and why you want it, and update the documentation constantly.

6. Use git in order to make rock-solid steps that Claude will not touch once they work, so you can revert changes or ask the AI to explore a branch, explaining how you did something and that you want something similar.

7. Do not modify code that already works rock solid. Copy it into another step, leaving the old step as reference, and modify it there.

8. Use logs. Lots of logs. For every step you create text logs, and you debug problems by giving Claude the logs to read.

9. Use screenshots. Claude can read screenshots. If you visualize everything, Claude can see the errors too.

10. Use asserts, lots of asserts, just like with manual programming.

It is not that different from managing a real team of people...

chrismarlow9 today at 1:38 AM

Not nearly enough context to be a front-page post, but here we are. Does everyone just give up on all fundamental expectations, including determinism, when they see AI in the title? Is AI the final boss, taking over the front page of Hacker News? /rant

To answer the question I would highlight the wrong regions in neon green manually via code. Now feed the code (zipped if necessary) to the AI along with a screenshot. Now give it relatable references for the code and say "xxxx css class/gtk code/whatever is highlighted in the screenshot in neon. I expect it to be larger but it's not, why?"

delaminator last Friday at 9:06 AM

You aren't telling us anything about how you're using it. So how can we tell you what you're doing wrong? You're just reporting what happened.

You haven't even said what programming language you're trying to use, or even what platform.

It sounds to me like you didn't do much planning, you just gave it a prompt to build away.

My preferred method of building things, and I've built a lot of things using Claude, is to have a discussion with it in the chatbot. The back and forth of exploring the idea gives you a more solid idea of what you're looking for. Once we've established the idea I get it to write a spec and a plan.

I have this as an instruction in my profile.

> When we're discussing a coding project, don't produce code unless asked to. We discuss projects here, Claude Code does the actual coding. When we're ready, put all the documents in a zip file for easy transfer (downloading files one at a time and uploading them is not fun on a phone). Include a CONTENTS.md describing the contents and where to start.

So I'll give you this one as an example. It's a Qwen driven System monitor.

https://github.com/lawless-m/Marvinous

here are the documents generated in chat before trying to build anything

https://github.com/lawless-m/Marvinous/tree/master/ai-monito...

At this point I can usually say "The instructions are in the zip, read the contents and make a start." and the first pass mostly works.

laurex today at 12:37 AM

While I have had some good experiences with CC, I do use at least double the tokens, and probably more like 5x, going through fixes and debugging from its initial efforts. I don't think this is always bad, because it helps me understand some of the more complicated interactions of existing and new code and improves documentation, but it's irritating when it runs out of usage allotments right after it has broken something. There are some small things it never has managed to fix that I have to figure out myself, but again, I learn from that. Mapping out a data structure in advance and creating a plan before immediately coding can also help, but at least in our project, sometimes it just takes an incorrect approach, so I don't let it go off and do things willy-nilly. I can't at all imagine having an agent free to maintain the code at this point, despite the past 2 weeks' hype cycles.

jgilias yesterday at 10:19 PM

What you’re doing is the so-called “slot machine AI”, where you put some tokens in, pray, and hope to get what you want out. It doesn’t work that way (not well, at least).

The LLM under the hood is essentially a very fancy autocomplete. This always needs to be kept in mind when working with these tools. So you have to focus a lot on what the source text is that’s going to be used to produce the completion. The better the source text, the better the completion. In other words, you need to make sure you progressively fill the context window with stuff that matters for the task that you’re doing.

In particular, first explore the problem space with the tool (iterate), then use the exploration results to plan what needs doing (iterate); when the plan looks good and makes sense, only then do you ask it to actually implement.

Claude’s built in planning mode kind of does this, but in my opinion it sucks. It doesn’t make iterating on the exploration and the plan easy or natural. So I suggest just setting up some custom prompts (skills) for this with instructions that make sense for the particular domain/use case, and use those in the normal mode.
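
For illustration, custom exploration/planning prompts of that kind might look like this (the wording is entirely invented, not from any shipped skill):

```markdown
## explore (custom skill — hypothetical)
Do not write code. Read the relevant files and summarize: what exists,
what the constraints are, and 2–3 candidate approaches with trade-offs.
Stop and wait for my pick.

## plan (custom skill — hypothetical)
Given the chosen approach, produce a numbered implementation plan with
a verification step for each item. Do not implement yet.
```

Splitting explore/plan/implement into separate prompts keeps each turn's output small enough to actually review and iterate on.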

ckdot yesterday at 10:28 PM

1. Make sure you are using the Opus model. Type /model and make sure Opus is selected. While many say Sonnet is good too, I'm not too convinced. Opus is the first model that actually convinced me to use AI as my daily driver, and I've been a developer for about 20 years.

2. Make the tasks as small and specific as possible. Don't prompt "create a todo app with user login" but "create a Vue app where users can register, don't do more than that", then "build a user login", then "create a page to create todo items", then "create a page to list todo items", then "on the list page, add delete functionality", and so on. You get the idea.

3. Beware the context size. Claude Code will warn you if you exceed it, but even before that: the fuller the context window, the more likely the AI will miss things. If you start a new prompt that doesn't require the whole context of the previous one, type /clear.

4. Build an agents.md or CLAUDE.md. /init will do that for you, but it will just create a CLAUDE.md with information it thinks is important, and it can easily miss things. You know best. It often also includes the file and directory structure, while it could easily find that out again (tree command) without that info in the agents/claude file. Still, I recommend: let Claude create that file, then adjust it to your needs. Only add important stuff here. The more you add, the more you spam the context. Again, try to keep the context small.

5. If Claude needs a long time to finish a task, or did it wrong on the first attempt, tell it to update the CLAUDE.md with information so it doesn't make the same mistakes next time.

6. Make sure you understand the code it created. Add conventions to agents.md that will make the code more readable (use early returns, don't exceed a nesting level of 3, create new methods with meaningful names instead of inline comments, etc.).
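
For illustration, a pared-down CLAUDE.md of the kind described here might look like this (every entry is hypothetical; fill in your own project's commands and lessons):

```markdown
# CLAUDE.md (hypothetical sketch)

## Commands
- Build: npm run build
- Test: npm test

## Conventions
- Use early returns; max nesting level of 3
- Extract methods with meaningful names instead of inline comments

## Lessons learned
- The portrait toggle must recompute cell size, not just swap CSS classes
```

Keeping it this short is deliberate: every line here is spent from the context budget on every turn.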

Good luck!

simonw yesterday at 10:04 PM

Show us your prompts.

Two questions:

1. How are you using Claude? Are you using https://claude.ai and copying and pasting things back and forth, or are you running one of the variants of Claude Code? If so, which one?

2. If you're running Claude Code have you put anything in place to ensure it can test the code it's writing, including accessing screenshots of what's going on?

Smaug123 yesterday at 10:01 PM

Can you give some history of what you did? We can't answer "what am I doing wrong?" if you don't tell us… what you did.

verdverm last Friday at 8:29 AM

It takes many months to figure this out, much longer than learning a new programming language.

Read through Anthropic's knowledge share, check out their system prompts extracted on GitHub, and write more words in AGENTS/CLAUDE.md; you need to give the models some warmup to do better at tasks.

What model are you using? Size matters, and Gemini is far better at UI design work. At the same time, pairing gemini-3-flash with claude-code-derived prompts makes it nearly as good as Pro.

Words matter; the way you phrase something can have a disproportionate effect. They are fragile at times, yet surprisingly resilient at others. They will deeply frustrate you and amaze you on a daily basis. The key is to get better at recognizing this earlier and adjusting.

You can find many more anecdotes and recommendations by looking through HN stories and social media (Bluesky has a growing AI crowd coming over from X, with a good community bump recently; there are anti-AI labeler/block lists to keep the flak down).

domjmich today at 1:30 AM

It was a bit of a learning curve for me as well. I've found having programming experience comes in handy. I know a lot of non-technical folks (people who don't know how to program) who keep bumping their heads on these tools; crunching through credits when a simple update to the code/push to repo is all that's needed.

oceanplexian yesterday at 11:58 PM

This is going to sound crazy but I felt it was super degraded this morning.

CC was slow, and the results I was getting were subpar when I had it debug some easy systems tasks. Later in the afternoon it recovered and was able to complete all my tasks. There’s another aspect to these coding agents: the providers can randomly quantize (lobotomize) models based on their capacity, so the model you’re getting may not be the one someone else is getting, or the same model you used yesterday.

awesomeusername yesterday at 11:34 PM

I listened to a conversation between two superstar developers in their 50s, who have been coding for longer than most readers here have been alive, about their experience with Claude Code.

I wanted to tear my ears out.

What is crystal clear to me now is that using LLMs to develop is a learned and practiced skill. If you expect to just drop in and be productive on day one, forget it. The smartest guy I know _who has a PhD in AI_ is hopeless at using it.

Practice practice practice. It's a tool, it takes practice. Learn on hobby projects before using it at work.

oatmealsnap yesterday at 10:30 PM

There are skills available that might help you out. The “superpowers” set from Anthropic is really impressive.

The idea is, you want to build up the right context before starting development. I will either describe exactly what I want to build, or I ask the agent for guidance on different approaches. Sometimes I’ll even do this in a separate Claude (not Claude Code) conversation, which I feel works a bit faster. Once we have an approach, I will ask it to create an implementation plan in a markdown file; then I clear context and tell it to implement the plan.

Check out the “brainstorming” skill and the “git worktrees” skill. They will usually trigger the planning -> implementation workflow when the work is complex enough.

yodon yesterday at 11:52 PM

When claude or codex does something other than what you want, instead of getting mad at it, ask it what it saw in your prompt that led it to do what it did, and how should you have prompted it to achieve what you wanted. This process tends to work very well and gives you the tools you need to learn how to prompt it to achieve the results you want.

chasd00 yesterday at 10:37 PM

Get the superpowers plugin and then ask Claude to design and document the system. It will go into brainstorming mode and ask you a lot of questions. The end result will be a markdown file. Then get another agent (maybe ChatGPT) to critique and improve the design (upload the markdown file in the web version). Then give it back to Claude and have it critique and improve. Last step, make Claude analyze the design and then document a step by step implementation guide. After that turn Claude code loose on implementation. Those techniques have been working for me when doing a project from scratch.

zmj yesterday at 9:53 PM

Try this:

* have Claude produce wireframes of the screens you want. Iterate on those and save them as images.

* then develop. Make sure Claude has the ability to run the app, interact with controls, and take screenshots.

* loop autonomously until the app looks like the wireframes.

Feedback loops are required. Only very simple problems get one-shot.

wewewedxfgdf yesterday at 9:44 PM

Claude is a programming assistant not a programmer.

You still need knowledge of what you are building so you can drive it, guide it, fix things.

This is the core of the question about LLM assisted programming - what happens when non programmers use it?

heyts yesterday at 10:06 PM

I’m probably going to be downvoted for this, but this thread doesn’t really reflect well on the promises of generative AI, and particularly the constantly reiterated assurance that we’re on the verge of a new Industrial Revolution.

tombot yesterday at 10:14 PM

Have you read the best practices? https://code.claude.com/docs/en/best-practices Are you using plan mode?

ecesena last Saturday at 8:51 PM

Try a prompt that helps Claude iterate until it can verify the result.

For example, if you tell it to compile and run tests, you should never be in a situation with syntax errors.

But if you don’t give it a prompt that allows it to validate the result, then it’s going to give you whatever.
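
As a minimal sketch of that idea (file and command names here are invented, not from the thread), a prompt can point Claude at a small check script it must run after every edit:

```shell
# check.sh — hypothetical verification script for Claude to run after each
# edit. It writes a sample file, then proves it at least parses and behaves;
# swap these commands for your real build and test steps.
set -e
cat > app.py <<'EOF'
def grid_cells(n):
    return [(r, c) for r in range(n) for c in range(n)]
EOF
python3 -m py_compile app.py    # fails fast on syntax errors
python3 -c "import app; assert len(app.grid_cells(3)) == 9"  # a behavioral check
echo "checks passed"
```

With an instruction like "run ./check.sh and fix failures before reporting done", the model sees the error output and iterates instead of handing back broken code.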

ford yesterday at 9:40 PM

I've found it really valuable to pair with people, sit at a computer together while they're driving and using AI. It's really interesting to see how other people prompt & use AI to explore the problem.

8note yesterday at 10:11 PM

claude-code added the /insights command which might tell you what you are doing wrong, using your history.

From the basics: did you actually tell it that you want those things? It's not a mind reader. Did you use plan mode? Did you ask it to describe what it's going to make?

tennisflyi yesterday at 10:43 PM

Sounds right. Any one shot anything is cap

esaym yesterday at 9:49 PM

Add https://github.com/obra/superpowers

and then try again.

tecoholic yesterday at 11:14 PM

Typical flow for a greenfield project for me is:

First prompt, ask it to come up with a plan, break it down into steps, and save it to a file.

Edit file as needed.

Launch CC again, use the plan file to implement stage by stage, verify and correct. No technical debugging needed. Just saying “X is supposed to be like this, but it’s actually like that” goes a long way.

13415 yesterday at 10:25 PM

That matches my Claude experience.

LeoPanthera yesterday at 10:14 PM

Are you using plan mode?

tsss yesterday at 10:36 PM

The fact that you got a syntax error at all is pretty telling. Are you not using agent mode? Or maybe that's just the experience with inferior non-statically typed languages where such errors only appear when the application is run. In any case, the key is to have a feedback mechanism. Claude should read the syntax errors, adjust and iterate until the error is fixed. Similarly, you should ask Claude to write a test for your landscape/portrait mode bug and have it make changes until the test passes.

kingkawn yesterday at 10:29 PM

I’ve found that a problem with LLMs in general is that they try to mirror the user. If the user is a world-class software dev, you will get some good stuff out of it. If the user is not experienced at programming, you will get something that resembles that out of it.

ltbarcly3 yesterday at 10:03 PM

I think at the current state of the art, LLM tools can help you build things very quickly, but they can't help you build something you yourself are incapable of building, at least not in a sustained way. They need hand holding and correction constantly.

kissgyorgy yesterday at 9:56 PM

You need to be very specific about what to build and how to build it, what tools to use, what architecture it should do, what libraries, frameworks it should include. You need to be a programmer to be able to do this properly and it still takes a lot of practice to get it right.

exe34 yesterday at 9:47 PM

could you share an md of your prompts? I find with those tools I still have to break the problem down into verifiable pieces, and only move on to the next step once the previous steps are as expected.

A syntax error is nothing; I just paste the error into the TUI and it usually fixes it.

52-6F-62 yesterday at 10:23 PM

Ah.

There used to be more or less one answer to the question of "how do I implement this UI feature in this language"

Now there are countless. Welcome to the brave new world of non-deterministic programming where the inputs can produce anything and nothing is for certain.

Everyone promises it can do something different if you "just use it this way".

rwaksmunski yesterday at 10:05 PM

AI seems to work a lot better once you acquire some AI equity, you go from not working at all to AI writing all the code. /s

semiinfinitely yesterday at 10:00 PM

skill issue

baq yesterday at 9:45 PM

it's a tool, not an oracle. you build with it, you aren't its customer, you're its wielder.

for now, anyway.

aristofun last Friday at 10:17 PM

If you expect it to _do_ things for you - you're setting yourself up for failure.

If you treat it as an astonishingly sophisticated and extremely powerful autocomplete (which it is) - you have plenty of opportunities to make your life better.
