Hacker News

m4x · yesterday at 11:03 PM · 2 replies

What kind of work are you applying Opus and other LLMs to? I'm quite curious to understand how other people are using these tools.

At the moment neither Opus nor any open weights models seem to be capable of doing complex work, and for less complex work the additional cost of Opus hasn't been worthwhile. This is for reasonably math-heavy computer vision applications.

What LLMs have been useful for is identifying forgotten code that will be affected when planning a change, reviewing changes, and looking up docs/recipes for simple tasks. But Opus doesn't seem necessary for a lot of that.


Replies

chillfox · today at 12:27 AM

Not the one you were asking, but…

I have been using Opus (in Zed) to find the “in between” bugs: the kind that live in the space between microservices, or between backend and frontend.

It takes a bit of preparation to get good results, but it can usually find the source of bugs in 1-2 hours (200k-300k context) that would take me a week to track down.

I create a folder, then open git worktrees in subfolders for every repo I think might be involved. I also create an empty report.md file. Then I give it a prompt that starts with “I need you to debug an issue”, followed by instructions for how to run tests in each repo, followed by @mentions of any specific files or folders I think are relevant (with a quick description of what each one is), and then the bug description. After that I tell it to debug the issue, make no code changes, and write its findings to the report.md file.
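If it helps, the scaffolding is roughly this (I usually do it by hand; the repo names, paths, and branch below are just placeholders):

```python
# Rough sketch of the debug workspace: one folder, a git worktree per repo
# that might be involved, and an empty report.md for the findings.
# Repo names/paths and the branch are placeholders.
import subprocess
from pathlib import Path

workspace = Path("debug-issue-1234")        # hypothetical issue id
workspace.mkdir(exist_ok=True)
(workspace / "report.md").touch()           # the agent writes its findings here

repos = {
    "orders-service": "~/code/orders-service",
    "billing-service": "~/code/billing-service",
    "web-frontend": "~/code/web-frontend",
}

for name, path in repos.items():
    # `git worktree add` checks the repo out into a subfolder of the
    # workspace without disturbing the original working copy.
    subprocess.run(
        ["git", "-C", str(Path(path).expanduser()),
         "worktree", "add", str((workspace / name).resolve()), "main"],
        check=True,
    )
```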

This works incredibly well.

SeanAnderson · today at 12:47 AM

My current job has me overseeing a few teams of engineers working on ~10+ y/o legacy software systems that have not been especially well maintained. As an example, one team had a completely broken CI pipeline due to numerous flaky tests. They had configured the CI pipeline to rerun tests multiple times, and still the master branch had something like a 40% pass rate. Super ugly, but the suite took ~40 minutes to run and they were demoralized enough to not want to investigate it anymore.

I came in, set Claude up, gave it read access to CI artifacts, had it build out some tooling to monitor the rolling pass/fail rate over the last 30 days, and let it loose. It identifies the worst-offending flaky tests, forms hypotheses about whether each one is a testing issue or a production issue, then divides and conquers until it has minimal reproduction steps. If it can't get a deterministic reproduction, it makes a best guess at a fix and grinds away at test re-runs all night, then checks with statistical confidence whether the fix actually worked.
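The monitoring tooling is conceptually just a rolling pass rate per test over the recent CI history, something in this shape (the input format here is made up for illustration; our real artifacts look different):

```python
# Sketch of the "rolling pass rate" idea: given parsed CI results, rank tests
# by how often they fail over the last 30 days. The input format (a list of
# dicts) is made up for illustration.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def flakiest_tests(runs, days=30, top_n=20):
    """runs: iterable of {"test": str, "passed": bool, "finished_at": datetime}."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    totals = defaultdict(lambda: [0, 0])        # test name -> [passes, total runs]
    for r in runs:
        if r["finished_at"] < cutoff:
            continue
        totals[r["test"]][0] += int(r["passed"])
        totals[r["test"]][1] += 1
    rates = {test: p / n for test, (p, n) in totals.items() if n}
    # Worst offenders first: lowest pass rate over the window.
    return sorted(rates.items(), key=lambda kv: kv[1])[:top_n]
```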

It's not perfect. I have to throw away some of the bad solutions, but it has shaved 20 minutes off their pipeline and improved the pass rate by 35% in a handful of weeks. Very minimal oversight on my part - just letting it run while I'm asleep and reviewing PR proposals during the day between meetings.
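On the "statistical confidence" part: it boils down to a one-sided binomial check on the overnight re-runs, i.e. how surprising this many passes would be if the fix did nothing. A minimal sketch (numbers are illustrative):

```python
# Sketch of the overnight confidence check: under the old pass rate, what is
# the chance of seeing at least this many passes by luck? Pure stdlib;
# the numbers are illustrative.
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

old_pass_rate = 0.40          # historical pass rate of the flaky test
reruns, passes = 200, 195     # overnight re-runs after the candidate fix

p_value = prob_at_least(passes, reruns, old_pass_rate)
if p_value < 0.01:
    print(f"Looks fixed (p = {p_value:.2e} under the old pass rate)")
else:
    print("Not enough evidence; keep digging")
```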

We have an initiative to make an entire web application significantly more accessible in response to some government mandates. Tight deadline, tons of grunt work, repetitive patterns, some small nuances on edge cases. The team was able to create a set of skills for the conversion logic, gradually build up coverage of all the edge cases, and is now working orders of magnitude more quickly in modernizing the app.

A team had repeatedly punted on updating Jest to the latest version because the upgrade came with a breaking JSDOM change that made some properties impossible to spy on. It took like 20 minutes to have Claude one-shot the entire conversion after they'd ignored it for months because it felt too finicky prior to agents. In general, everything to do with testing infrastructure is easy to push forward with confidence.

We also have an active interview pipeline where we give a take-home technical assessment. After we got a few submissions and manually evaluated them, I fed in our analyses and grading rubric and had it generate assessments for incoming candidates following the rubric. After checking a few pretty carefully, it became clear that it was good enough to trust - the take-home wasn't groundbreaking and the problem space was well enough understood to identify obvious issues if there were any.

I was given a small team of semi-technical people who were being used to fetch numbers from DBs for product/marketing/sales and perform light data analysis on them. A lot of their day-to-day was just paper-pushing SQL queries into Excel spreadsheets and then transforming them into PowerPoints with key takeaways. They didn't have any experience writing code.

I had Claude build a gamified playground for them: a VSCode dev container, a SQLite DB full of synthetic data emulating what they'd encounter IRL, and a Jupyter notebook filled with questions they'd need to answer by writing code to interrogate the database and form insights. In a couple of weeks they were comfortable writing basic Python scripts with the help of Claude, and they're now off automating all their paper-pushing workflows with deterministic scripts. When they're done, we're going to move them to higher-value work: sleuthing against our data and surfacing proactive insights to propose to Product, rather than just reactively fetching data and building reports.
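The notebook questions were basically of this shape (table and column names here are made up; the real synthetic schema mirrors what they see in production):

```python
# The kind of exercise in the playground notebook: answer a product question
# by querying the synthetic SQLite DB. Table and column names are made up.
import sqlite3

conn = sqlite3.connect("playground.db")     # the synthetic database
conn.row_factory = sqlite3.Row

# "Which 5 products drove the most revenue in the last 3 months?"
rows = conn.execute("""
    SELECT p.name, SUM(o.quantity * o.unit_price) AS revenue
    FROM orders o
    JOIN products p ON p.id = o.product_id
    WHERE o.ordered_at >= DATE('now', '-3 months')
    GROUP BY p.name
    ORDER BY revenue DESC
    LIMIT 5
""").fetchall()

for r in rows:
    print(f"{r['name']}: ${r['revenue']:,.2f}")
```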

I was asked to quickly build a prototype for some basic AI functionality we thought we might want to add to one of the products. I was able to go from "I have no idea what I should build" to "here's a prototype we can put in front of clients and see if this idea has any merit" in about 14 hours. Riffing with Claude took me from product idea to functional/technical specs to an implementation plan; the full working prototype was a one-shot, followed by a tight iteration loop for a couple of hours with me guiding it on personal aesthetic choices to give it enough final polish. Obviously I wouldn't ship this code into production, but it's really nice not having any sunk-cost bias when demoing a prototype. If customers don't like it? Great, I lost one day, and half the time I was multi-tasking while Claude implemented specs.

Even better - I had Claude write a script to extract all the conversations I had with it and include those in the prototype repo. Then I filmed a quick demo video of my process, shared that with the engineers, and they're able to review my Claude conversations to get inspiration for how to modify their own agentic coding strategies.
