Hacker News

Haskell for all: Beyond agentic coding

95 points by RebelPotato today at 1:55 AM | 27 comments

Comments

andai today at 6:54 AM

I wonder if the problem of idle time / waiting / breaking flow is a function of the slowness. That would be simple to test, because there are super fast 1000 tok/s providers now.

(Waiting for Cerebras coding plan to stop being sold out ;)

I've used them for smaller tasks (making small edits), and the "realtime" aspect of it does provide a qualitative difference. It stops being async and becomes interactive.

A sufficient shift in quantity produces a phase shift in quality.

--

That said, the main issue I find with agentic coding is my mental model getting desynchronized. No matter how fast the models get, it takes a fixed amount of time for me to catch up and understand what they've done.

The most enjoyable way I've found of staying synced is to stay in the driver's seat and command many small rapid edits manually. (I.e. I have my own homebrew "agent" that's just a loop of: I prompt it, it proposes edits, I accept or edit, repeat.)

So the "synchronization" of the mental state happens continuously, because there is no opportunity for desynchronization: you are the one driving. I call that approach semi-auto, or Power Coding (akin to Power Armor, which is wielded manually but greatly enhances speed and strength).
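For what it's worth, a minimal sketch of that kind of prompt/propose/review/apply loop might look like the following (Python; the model call is passed in as a callable because the commenter's actual tool isn't shown, so none of this is their real setup):

```python
import difflib
from typing import Callable

def power_coding_loop(path: str, propose_edit: Callable[[str, str], str]) -> None:
    """The 'semi-auto' loop: prompt, let the model propose an edit,
    review it as a diff, and only apply it if the human accepts."""
    while True:
        instruction = input("edit> ").strip()
        if instruction in {"", "q", "quit"}:
            break
        with open(path) as f:
            source = f.read()
        # One small edit per iteration; `propose_edit` is whatever LLM call you wire in.
        proposed = propose_edit(instruction, source)
        diff = difflib.unified_diff(
            source.splitlines(keepends=True),
            proposed.splitlines(keepends=True),
            fromfile=path, tofile=path + " (proposed)",
        )
        print("".join(diff) or "(no change proposed)")
        if input("apply? [y/N] ").strip().lower() == "y":
            with open(path, "w") as f:
                f.write(proposed)
```

Because every change passes through the diff-and-confirm step, there's never a backlog of unreviewed work to catch up on.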

Zakodiac today at 5:39 AM

I agree agents can break flow, but I think the bigger issue is that they hide too much, not that they're too intrusive.

Most agent tools right now don't give you good visibility into what sub-agents are doing or what decisions they're making. You zoom out, let it run, come back to a mess. Tools like OpenCode and Amazon's CLI Agent Orchestrator are trying to fix this - letting you watch what each agent is actually doing and step in to correct or redirect.

OpenCode actually removed the ability to message sub-agents directly. I get why - people would message one after it finished, the conversation would fork off, and the main orchestrator would lose track. But I don't love that fix, because being able to correct or pivot a sub-agent before it finishes was genuinely useful. They band-aided a real problem by removing a good feature.

Honestly, the model that works best for me is treating agents like junior devs working under a senior lead. The expert already knows the architecture and what they want. The agents help crank through the implementation, but you're reviewing everything and holding them to passing tests. That's where the productivity gain actually is. When non-developers try to use agents to produce entire systems with no oversight, that's where things fall apart.

So I wouldn't want agent tools to be "calm" and fade into the background. I want full transparency into what they're doing at all times because that's how you catch wrong turns early. The tooling is still early and rough but it keeps getting better at supporting experts rather than trying to replace them.

benob today at 8:51 AM

I really like the "file lens" example:

> “Focus on…” would allow the user to specify what they're interested in changing and present only files and lines of code related to their specified interest.

> “Edit as…” would allow the user to edit the file or selected code as if it were a different programming language or file format.
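To make the "Focus on…" idea concrete, here's a rough sketch of what such a lens could do, with plain keyword matching standing in for the model's relevance judgment (the function name and interface are my own invention, not the article's):

```python
import re

def focus_on(interest: str, files: dict[str, str], context: int = 2) -> dict[str, list[str]]:
    """Crude 'Focus on...' lens: keep only the lines related to `interest`,
    plus a little surrounding context, and hide everything else."""
    keywords = [w.lower() for w in re.findall(r"\w+", interest)]
    focused: dict[str, list[str]] = {}
    for path, text in files.items():
        lines = text.splitlines()
        keep: set[int] = set()
        for i, line in enumerate(lines):
            if any(k in line.lower() for k in keywords):
                keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
        if keep:
            focused[path] = [f"{i + 1}: {lines[i]}" for i in sorted(keep)]
    return focused
```

A real implementation would presumably let the model decide which lines relate to the stated interest, but the shape of the feature (a filtered, line-numbered view per file) would be similar.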

Insanity today at 3:34 AM

The post had nothing to do with Haskell, so the title is a bit misleading. But the rest of the article is good, and I actually think that agentic/AI coding will probably evolve in this way.

The current tools are the infancy of AI-assisted coding. It’s like the MS-DOS era. Over time, maybe backpropagating from “your comfort language” to the “target language” could become commonplace.

kstenerud today at 7:36 AM

What I've found is that most people who dislike the chat interface aren't using it in a way that leverages its strengths.

Up until recently, LLMs just plain sucked. You'd set them on a task and then spend hours hand-holding them to output something almost correct.

Nowadays you can have a conversation with the chatbot, hash out a design, rubber-duck and discuss what-ifs until you have a solid idea of the thing you're building, codified in a way an agent can understand. At that point you have a PLAN.

From there, it's a matter of setting the agent in motion and checking from time to time to make sure it's not getting stuck on something under-specified.

That said, I've found that this kind of workflow works a lot better with Claude than with Gemini.

tossandthrow today at 7:54 AM

I wholeheartedly prefer chat interfaces over inline AI suggestions.

I find the inline stuff so incredibly annoying because it moves around the text I am looking at.

wazHFsRy today at 6:02 AM

I have had the same feeling recently: we should focus more on using AI to enable us, to empower us to do the important things. Not take away but enhance: boring, clear boilerplate yes, design decisions no. And making reviewing easier is a perfect example of enhancing our workflow. Not reviewing for us, but supporting us.

I have recently been using this tiny skill [1] to generate an order in which to review a PR, and it has been very helpful to me.

[1] https://www.dev-log.me/pr_review_navigator_for_claude/

eigenblake today at 7:24 AM

I have been considering what it would be like to give each function name a specific color, give each variable a color for its type followed by a color derived from the hash of the symbol name, and give keywords each their own specific color. Then print a matrix of this, essentially transforming your code into a printable "low-LOD" or "mipmap" form. This could be implemented like the VS Code minimap, but I think the right move here is to implement it as a hook that can modify the output of your agent. That way you can look at the structure of the code without reading the names in particular.
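A rough sketch of the hash-to-color part of that idea (Python, with ANSI background colors as the rendering; the palette choices are arbitrary, and the per-type coloring is omitted because it would need real type information):

```python
import hashlib
import keyword
import re

def symbol_color(name: str) -> int:
    """Map a symbol name to a stable ANSI 256-color code via its hash."""
    return 16 + int(hashlib.sha256(name.encode()).hexdigest(), 16) % 216

def low_lod(source: str) -> str:
    """Render code as a grid of colored cells: keywords get one fixed color,
    identifiers a color hashed from their name, everything else grey.
    You see the shape of the code without being able to read the names."""
    out = []
    for line in source.splitlines():
        indent = len(line) - len(line.lstrip())
        out.append(" " * indent)  # preserve indentation so structure survives
        for tok in re.findall(r"\w+|\S", line):
            if keyword.iskeyword(tok):
                color = 27                    # fixed blue for keywords
            elif tok[0].isalpha() or tok[0] == "_":
                color = symbol_color(tok)     # hash-derived color per identifier
            else:
                color = 240                   # grey for operators, literals, punctuation
            out.append(f"\x1b[48;5;{color}m \x1b[0m")
        out.append("\n")
    return "".join(out)

if __name__ == "__main__":
    import sys
    print(low_lod(open(sys.argv[0]).read()))  # render this file's own shape
```

The same rendering could just as easily be produced by a post-processing hook on an agent's output rather than a terminal script; the point is only that identical names always land on identical colors.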

roughly today at 6:57 AM

The “Calm technology” thing always annoys me, because it skips every economic, social, and psychological reason for the current state of affairs and presents itself as some kind of wondrous discovery, as opposed to “the way things were before we invented the MBA.” A willing blindness to predators doesn’t provide a particularly useful toolkit.

OutOfHere today at 3:38 AM

Agentic coding doesn't make any sense for a job interview. To do it well requires a detailed specification prompt which can't reliably be written in an interview. It ideally also requires iterating upon the prompt to refine it before execution. You get out of it what you put into it.
