I don't understand this critique. (1) Did you previously think you weren't getting paid for doing what a company wants you to do, aka what THEY thought was productive? (2) Do you think all this AI generated code is useless?
Edit: y'all are some whiney folk, ain't ya?
To answer your second question: Yes, much of it is worse than useless. The tools need guidance to produce useful output. If you use it poorly, you will get garbage output that may do more harm than good.
And your response does not address the point being made in the comment you replied to: Many people are being evaluated by how many tokens they burn, which is about as good a metric as lines of code written.
1) I think if the company I work for spends too much effort on things that aren't going to make money, they won't be able to pay me anymore, no matter what they "think" is productive. That's not how executives at companies like this make decisions, though.
2) Mostly, yes.
I think parent is saying "% of code being generated by AI" is not generally a good, direct metric for business value. It's akin to the "we are pushing SO MUCH CODE" phase of early AI marketing.
If we're trying to measure the value of adopting a tool, it's probably better to measure the ROI of that tool rather than its usage %, especially when usage is basically mandated.
To directly answer your questions:
1. You're being paid to create value for the business, which "doing what they think is productive" is a proxy for. You're not being paid to use a tool a high % of the time.
2. It doesn't seem like parent even commented on the quality of the code generated. I think anyone that uses it regularly can agree that: a) the code is not useless, b) not all generated code is immediately production ready, and c) AI generation of code is an accelerant for software development.
Goodhart's Law isn't a problem immediately. If you want more code to be written, and the only feasible way to hit those goals is to heavily use AI, then you might run into the problems of AI-generated code: an infrastructure that's poorly architected and much less understood than it would've been ten years ago.
Not OP, but:
1. At my level, the company is not just paying me to do a task the way they want it done, they are paying for my experience to orchestrate the best way to do it. They want an outcome, and I'm responsible for figuring out how to get to that outcome with the right balance of cost, correctness, etc. But yes, the most dystopian reality is what you said.
2. It's not useless, but the AI-generated code is absolutely lower quality than what I would have written myself, and there is no desire to clean it up. Companies have always had a disastrously bad understanding of technical debt, and they finally have a tool they can shove down developers' throats that trades even more velocity for even less quality. They're going to take that trade every single time.
> (1) ...getting paid for doing what a company wants you to do...?
At my previous company, when the thing they thought they wanted me to do (which was not the thing they actually wanted... but whatever) diverged from my values I quit. You can just do things.
> (2) Do you think all this AI generated code is useless?
Almost universally, yes. Especially in organizations that historically haven't been particularly careful about hiring and have a huge number of young, inexperienced people. There are exceptions but they're rare enough that throwing that particular baby out with the bathwater isn't a big loss.
you're missing their point; LLM use is often a part of your evaluation at some of these larger companies and they expect you to use them heavily or you will get a lashing
GP is just saying that any metric will be gamed, and any cost associated with it will grow. Say you set a metric where the most productive devs are the ones with the most file changes; you can soon expect every function and structure to be its own file. Same if sales commissions are based on how much time you spend calling: expect the phone bills to grow a lot.
I think the point was that, when you make a metric a goal, i.e. "you must use AI this much", then people will use AI even in ways that don't add to productivity.