- Claude, please optimise the project for performance.
o Claude goes away for 15 minutes, doesn't profile anything, and makes many code changes.
o Announces project now performs much better, saving 70% CPU.
- Claude, test the performance.
o Performance is 1% _slower_ than before.
- Claude, can I have a refund for the $15 you just wasted?
o [Claude waffles], "no".
If you provide it a benchmark script (or ask it to write one) so it has concrete numbers to go off of, it will do a better job.
I'm not saying these things don't hallucinate constantly, they do. But you can steer them toward better output by giving them better input.
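For what it's worth, the kind of benchmark script I mean can be tiny. Here's a rough sketch — `process_batch` and its workload are just placeholders for whatever hot path you actually care about — that gives the model a concrete median-of-N number to beat before and after its changes:

```python
import time
import statistics

def process_batch(items):
    # Stand-in for the code under test; swap in your real hot path.
    return sorted(items, key=lambda x: x % 97)

def bench(fn, arg, runs=10):
    # Median of several runs is less noisy than a single timing.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(arg)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

if __name__ == "__main__":
    data = list(range(100_000))
    print(f"median of 10 runs: {bench(process_batch, data) * 1000:.2f} ms")
```

Run it once before asking for the optimisation, paste the number into the prompt, and ask it to re-run the script after every change. "Make this number smaller" is a much better-shaped request than "make it faster."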
Since you’re making unstructured requests and expecting good results, why don’t you ask your barista to make you a “better coffee” with no further instructions? Then, when they make a coffee with their own brand of creativity, complain that it tastes worse and demand your money back.
The last bit, in my limited experience:
> Claude: sorry, you have to wait until XX:00 as you have run out of credit.
If you really want to do this, you should probably ask for a plan first and review it.
You need to let it actually benchmark. They are only as good as the tools you give them.
I can't help but notice that your first two bullets match rather closely the behavior of countless pre-AI university students assigned a project.
I’ve always found the hard numbers on performance improvement hilarious. It’s just mimicking what people on the internet say when they report performance gains.