Hacker News

johnsmith1840 · yesterday at 10:04 PM · 6 replies

I don't get the point. Point it at your relevant files, ask it to review, discuss the update, refine its understanding, and then tell it to go.

I have found that more context, comments, and info damage quality on hard problems.

For a long time now I've actually kept two views of my code:

1. The raw code, with no empty space or comments.
2. The code with comments.

I never give the second to my LLM. The more context you give, the lower its upper end of quality becomes. This is just a habit I've picked up using LLMs every day, hours a day, since GPT-3.5, and it allows me to reach farther into extreme complexity.
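The parent doesn't say how the stripped view is produced, but a minimal sketch of one way to generate it for Python code is below. It uses the stdlib tokenizer rather than a regex, so a `#` inside a string literal is left alone; the function name `strip_view` is my own, not something from the comment:

```python
import io
import tokenize

def strip_view(source: str) -> str:
    """Return Python source with comments removed and blank lines dropped."""
    tokens = [
        tok
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT
    ]
    stripped = tokenize.untokenize(tokens)
    # untokenize pads the removed comments with spaces, so tidy line by line:
    # drop lines that are now blank and trim trailing whitespace on the rest.
    return "\n".join(
        line.rstrip() for line in stripped.splitlines() if line.strip()
    )
```

Something like this could run as a pre-prompt filter, so the commented version on disk stays untouched and only the stripped view ever reaches the model.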

I suppose I don't know what most people are using LLMs for, but the higher the complexity of your work, the less noise you should inject into it. It's tempting to add massive amounts of context, but I've routinely found that fails at the higher levels of coding complexity and uniqueness. This was more apparent in earlier models; newer ones will handle tons of context, you just won't be able to get those upper ends of quality.

The compute-to-information ratio is all that matters. Compute is capped.


Replies

Aurornis · yesterday at 10:49 PM

> I have found that more context, comments, and info damage quality on hard problems.

There can be diminishing returns, but every time I’ve used Claude Code for a real project I’ve found myself repeating certain things over and over, and interrupting tool usage, until I put them in the Claude notes file.

You shouldn’t try to put everything in there all the time, but putting key info in there has been very high ROI for me.
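For a sense of what "key info" means here: the notes file Claude Code reads by default is CLAUDE.md, and a small, dense fragment along these lines (the contents are invented for illustration, not from the commenter) is usually enough:

```markdown
# Project notes for Claude

- Run tests with `make test`, never by invoking the test runner directly.
- Target Python 3.11; do not add new third-party dependencies.
- All database access goes through the repository layer; no raw SQL elsewhere.
```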

Disclaimer: I’m a casual user, not a hardcore vibe coder. Claude seems much more capable when you follow the happy path of common projects, but it gets constantly turned around when you try to use newer frameworks and tools.

schrodinger · today at 4:00 AM

Genuinely curious — how did you isolate the effect of comments/context on model performance from all the other variables that change between sessions (prompt phrasing, model variance, etc)? In other words, how did you validate the hypothesis that "turning off the comments" (assuming you mean stripping them temporarily...) resulted in an objectively superior experience?

What did your comparison process look like? It feels intuitively accurate and validates my anecdotal impression, but I'd love to hear the rigor behind your conclusions!

Mtinie · yesterday at 10:40 PM

> 1. The raw code with no empty space or comments. 2. Code with comments

I like the sound of this, but what technique do you use to maintain consistency across both views? Do you have a post-modification script that strips comments and extraneous empty space after the code has been modified?

nightski · yesterday at 10:18 PM

IMO, the information density within the documentation .md files should be very high. Certainly higher than trying to shove the entire codebase into context.

senshan · yesterday at 10:31 PM

> I never give the second to my LLM.

How do you practically achieve this? Honest question. Thanks

ra · yesterday at 10:18 PM

This is exactly right. Attention is all you need. It's all about attention. Attention is finite.

The more data you load into context, the more you dilute attention.
