When you don't know the answer to a question you ask an LLM, do you verify it or do you trust it?
Like, if it tells you merge sort is better on that particular problem, do you trust it or do you go through an analysis to confirm it really is?
I have a hard time trusting what I don't understand. And even more so if I realize later I've been fooled. Note that it's the same with humans, though. I think I only trust technical decisions I don't understand when I deem the risk of being wrong low enough. Otherwise I'll invest in learning and understanding enough to trust the answer.
Often those kinds of performance things just don't matter.
Like right now I'm working on algorithms for computing heart rate variability and only looking at a 2-minute window with maybe 300 data points at most, so whether it is N, N log N, or N^2 is beside the point.
When I know I'm computing the right thing for my application, know I've coded it up correctly, and am feeling some pain about performance, that's another story.
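To make that concrete, here is a quick sanity check (a hypothetical timing sketch, not my actual HRV code): even a deliberately naive O(N^2) pass over 300 samples finishes in milliseconds.

```python
import random
import time

# ~300 RR intervals in ms, roughly the scale of a 2-minute HRV window
# (synthetic data, just for illustration)
rr = [random.gauss(800, 50) for _ in range(300)]

start = time.perf_counter()
# Deliberately naive O(N^2) pass: compare every pair of intervals
worst = max(abs(a - b) for a in rr for b in rr)
elapsed = time.perf_counter() - start

print(f"O(N^2) over {len(rr)} points: {elapsed * 1000:.2f} ms")
# On any modern machine this prints a few milliseconds at most,
# so asymptotic complexity is irrelevant at this input size.
```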
> I have a hard time trusting what I don't understand
Who doesn't? But we have to trust them anyway; otherwise everyone would need a PhD in everything.
Also, people who "have a hard time trusting" might just give up when encountering things they don't understand. With AI there is at least a path for them to keep digging deeper and actually verify things to whatever level of satisfaction they want.
I tell it to write a benchmark, and I learn from how it does that.
For all these "open questions" you might have, it is better to ask the LLM to write a benchmark and actually look at the numbers. Why rush? Spend 10 minutes and you will have a decision backed by real feedback from code execution.
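As a sketch of what that looks like (a generic textbook merge sort plus Python's timeit, just to illustrate the shape of the check, not anyone's actual code):

```python
import random
import timeit

def merge_sort(a):
    # Textbook top-down merge sort
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

data = [random.random() for _ in range(10_000)]

# Time both candidates on identical input; don't argue, measure.
t_merge = timeit.timeit(lambda: merge_sort(data), number=20)
t_builtin = timeit.timeit(lambda: sorted(data), number=20)
print(f"merge_sort: {t_merge:.3f}s  sorted(): {t_builtin:.3f}s")
```

Whatever the LLM claimed about merge sort, the printed numbers settle it for your actual input sizes.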
But this is just a small part of a much larger testing activity that needs to wrap the LLM's code. I think my main job has moved to 1. architecting and 2. ensuring the tests are well done.
What you don't test is not reliable yet. Looking at code is not testing; it's "vibe-testing" and should be an antipattern: no LGTM for AI code. We should not rely on our intuition alone: it is not strict enough, and it makes everything slow. We should not "walk the motorcycle".