What makes me feel stupid is hearing about "AI" day after day like it is the best thing since sliced bread, when 99.9% of the useful things I've seen come from LLMs are low-level programming tasks or fluffed-up nonsense that any manager could spew. I can't trust what an LLM tells me unless the answer is so simple that the top result of a 2015 Google search would have been just as adequate. Except now the top 20 Google results are all AI answers generated from the same source material, packed with fluff but stripped entirely of nuance and useful adjacent knowledge. Changing the question even slightly can produce contradictory answers, each delivered with full confidence.
Everything you think you know about AI was true until about 6 months ago. Now the frontier models and agentic tools are good at programming—better than most professional programmers would be unguided. And even if Claude Mythos isn't half as good as they say it is, it's changed the calculus of security significantly: use AI to vet your code before deployment... or someone else will, right before they 0wn you.
This is not true at all. I have been using the Pro-level AIs to automate my $150k-a-year automation-engineer job for over two years and have reduced my workload by about 95%, no joke (AI writes great Selenium tests). That is a real, measurable amount of work: it used to be that you had to be pretty smart to write code, and now anybody can vibe-code an automation test framework in literally one afternoon. I know because I did exactly that a few months ago for my new role. It is beyond game-changing for that reason alone, and I can only imagine what actually productive people are doing with it. This is a 100x productivity multiplier.
It doesn't even make mistakes anymore; the biggest issue is making sure it doesn't get lazy with the number of assertions it writes.