Hacker News

Appearing Productive in the Workplace

288 points by diebillionaires today at 4:18 PM | 94 comments

Comments

wcfrobert today at 6:23 PM

> "Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive."

Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. It reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafted a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).

So now the "productivity-gain bottleneck" is people who still care enough to review manually.

proofofcontempt today at 5:51 PM

What is described here closely resembles my experience too.

My company is full of managers who haven't written code in years. They hired an architect 18 months ago who used AI to architect everything. To the senior devs it was obvious that everything was massively over-engineered, yet because he used all the proper terminology, he sounded more competent to upper management than the senior devs who didn't. When called out, he would resort to personal attacks.

After about 6 months, several people left and the ones who stayed went all in on AI. They've been building agentic workflows for the past 12 months in an effort to plug the gap from the competent members of staff leaving.

The result: nothing of value has been released in the past 18 months. The business is cutting costs after wasting massive amounts on cloud compute for poorly designed solutions, and is making up for it by freezing hiring.

oxag3n today at 6:35 PM

Software engineering seems uniquely positioned to enable this, due to a few factors:

* Many software engineers haven't done real engineering work during their entire careers. In large companies it's even harder: you arrive as a small gear and are inserted into a large mechanism. You learn some configuration language some smart-ass invented to get a promo, "learn" the product by cleaning up tons of those configs, refactoring them, and "fixing" results in another bespoke framework by adjusting some knobs in the config language you are now an expert in. Five years pass and you are still doing that.

* There are many near-engineering positions in the industry. The guy who always told you how much he liked working with people, and that's why he stopped coding; the lady who was always fascinated by the product and by working with users. They all fill in the space in small and large companies as .*M

* The train moves slowly, especially in large companies. Commit-to-prod can easily span months, with six months being the norm. For some large, critical systems, agentic code still hasn't reached production as of today.

Considering the above: AI is replacing some BS jobs, people who were near-code but above it are suddenly enjoying vibe-coding, and their shit still hasn't hit the fan in slow-moving companies. But oh man, it looks like a productivity boom.

ChrisMarshallNY today at 6:50 PM

I spent most of yesterday deleting and replacing a bunch of code that was generated by an LLM. For the most part, the LLM's assistance has been great.

For the most part.

In this case, it decided to give me a whole bunch of crazy threaded code, and, for the first time in many years, my app started crashing.

My apps don't crash. They may have lots of other problems, but crashing isn't one of them. I'm anal. Sue me.

For my own rule of thumb, I almost never dispatch to new threads. I will often let the OS SDK do it, and honor its choice, but there are very few places where I find that spawning a worker myself actually buys me anything more than debugging misery. I know that doesn't apply to many types of applications, but it does apply to the ones I write.
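
The rule of thumb above can be sketched generically (a hypothetical illustration in Python, not the commenter's code): hand work to a runtime-managed pool and let it own thread lifecycle, sizing, and cleanup, rather than spawning and joining threads by hand.

```python
from concurrent.futures import ThreadPoolExecutor

def checksum(data: bytes) -> int:
    # Stand-in for real work; any pure function will do.
    return sum(data) % 256

# The executor owns the worker threads; we never create a Thread ourselves.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(checksum, [b"alpha", b"beta", b"gamma"]))

print(results)  # → [6, 156, 3]
```

The same shape exists in most SDKs (e.g. dispatch queues on Apple platforms): the runtime decides how the work is scheduled, which is exactly the "honor its choice" posture described above.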

The LLM loves threads. I realized that this is probably because it got most of its training code from overenthusiastic folks, enamored with tech.

Anyway, after I gutted the screen and added my own code, the performance increased markedly and the crashes stopped.

Lesson learned: Caveat Emptor.

nlawalker today at 5:27 PM

>People who cannot write code are building software. People who have never designed a data system are designing data systems. Most of it is not shipped; it is built, often for many hours, possibly shown internally with great vigor, used quietly, and occasionally surfaced to a client without much fanfare.

This made me think of How I ship projects at big tech companies[1], specifically "Shipping is a social construct within a company. Concretely, that means that a project is shipped when the important people at your company believe it is shipped."

[1] https://news.ycombinator.com/item?id=42111031

john_strinlai today at 5:06 PM

>I sat with it for a while, weighing whether to debate someone who was visibly copy-pasting verbatim from a model.

I have found some small amusement in responding in kind to people who do this (copy/pasting their AI's output into my AI, pasting my AI's response back): two humans acting as machines so that two machines can cosplay communicating like humans.

vachina today at 5:44 PM

> Never ask a model for confirmation; the tool agrees with everyone.

Ditto. LLMs will somehow find fault in code that I know is correct when I tell them there's something arbitrarily wrong with it.

The problem is that LLMs often take things literally. I've never successfully had an LLM design an entire system (even with planning) autonomously.

drowntoge today at 6:18 PM

"Output-competence decoupling" is my new favorite keyword.

bambax today at 6:11 PM

I intensely agree with everything that's said in TFA; this, however, could be nuanced:

> Never ask a model for confirmation; the tool agrees with everyone

If asked properly, LLMs can be used to poke holes in an existing reasoning or come up with new ideas or things to explore. So yes, never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.

randusername today at 6:57 PM

> The cost of producing a document has fallen to nearly zero; the cost of reading one has not, and is in fact rising, because the reader must now sift the synthetic context for whatever the document was originally about.

This resonates. It's a spectacular full-reversal kind of tragedy, because it used to be asymmetric the other way: the author put in 10 effort points compiling valuable information, and the reader put in 1 effort point to receive the transmission.

juancn today at 5:28 PM

AI can be (and often is) a confident incompetence amplifier.

darepublic today at 5:57 PM

I was tasked with coming up with a solution in 5 weeks that took another firm six months to produce. I've never used agentic coding so much before, or known my code less well. The requirements are garbage though: vague, and just "copy what these other guys did, but better". I tried for a couple of weeks to get better specs, but eventually gave up and just started building stuff to present.

xXSLAYERXx today at 7:13 PM

Who cares? I obviously didn't like the article.

> Schemes were all wrong

Why'd you let him run wild for two months? What software org would let anyone, even a principal, do that? Wouldn't the very first thing you'd do be to review the guy's schema? This reads like all the other snarky posts on HN about how everyone is punching above their pay grade, while people who are much more advanced in the space just watch like two trains colliding.

I'll tell you what is productive in the workplace: communication. That is it. Communicate and lift the guy up; give the guy a running start instead of chilling in the break room snarking with all your snarky co-workers.

giantg2 today at 6:29 PM

The most productive people seem to be the ones who are skeptical of AI but have found compelling cases to use it for, and aren't afraid to correct it.

guizadillas today at 4:53 PM

Sidenote: why is the post dated in the future? (May 28, 2026)

jdw64 today at 5:16 PM

After reading this article, I can definitely feel how productivity rises inside organizations.

More precisely, this feels like a person who would be loved by management. The article almost reads like a practical manual for increasing perceived productivity inside a company.

The argument is repetitive:

1. AI generates convincing-looking artifacts without corresponding judgment.

2. Organizations mistake those artifacts for progress.

3. Managers mistake volume for competence.

The article explains this same structure several times. In fact, the three main themes are mostly variations of the same claim: AI allows people to produce output without having the competence to evaluate it.

The problem is that the article criticizes a context in which one-page documents become twelve-page documents, while exhibiting the same problem in its own form.

The references also do not seem to carry much real argumentative weight. They mostly decorate an already intuitive workplace complaint with academic authority. This is something I often observe in organizations: find a topic management already wants to hear about, repeat the central thesis, and cite a large number of studies that lean in the same direction.

There is also an irony here. The article criticizes a certain kind of workplace artifact, but gradually becomes very close to that artifact itself. This kind of failure, criticizing a pattern while reproducing it, seems almost like a recurring custom in the programming industry.

Personally, I almost regret that this person is not in the same profession as me. If someone like this had been a freelancer, perhaps the human rights of freelancers would have improved considerably.

smokel today at 5:56 PM

It would be nice if someone invented a mouse with a tiny motor inside, so I could put on sunglasses, rest my hand on the mouse, doze off, and still look like I'm working hard.

sergiotapia today at 7:28 PM

> Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries.

I've been on the receiving end of this and it sucks. It shows a lack of care and true discernment. Then you push back and, again, you're arguing with Claude, not the person.

I don't know what the solution is here. :(

cwillu today at 7:18 PM

We were promised GLaDOS, and were given Wheatley.

asdfman123 today at 6:47 PM

AI is another development that drives me absolutely mad. It's like jet fuel for people who leave a trail of technical debt for others who care more about that sort of thing to clean up.

AI promises "you don't even need to understand the problem to get work done!" But doing the work is how I understand problems, and understanding the problem is the bottleneck.

sixie6e today at 5:49 PM

So essentially, AI is exacerbating the Dunning-Kruger effect in society.

snozolli today at 5:16 PM

Back around 2005, I worked with a guy who was trying to position himself as the go-to expert on the team. He'd always jump at the chance to explain things to QA and the support team. We'd occasionally hear follow-up questions from those teams and realize that he was just making things up.

He also had a serious case of cargo-cult mentality. He'd see some behavior and ascribe it to something unrelated, then insist with almost religious fervor that things had to be coded a certain way. He was also a yes-man who would instantly cave to whatever whim management indicated. We'd go into a meeting in full agreement that a feature being requested was damaging to our users, and he'd be nodding along with management like a bobblehead as they failed to grasp the problem.

Management never noticed that he was constantly misleading other teams, or that he checked in flaky code he found on the Internet that triggered multiple days of developer time to debug. They saw him as a highly productive team player who was always willing to "help" others.

He ended up promoted to management.

Anyway, my point is that management seems to care primarily about having their ego boosted, and about seeing what they perceive as a hard worker, even if that worker is just spinning his wheels and throwing mud on everyone else. I'm sure that AI is only going to exacerbate this weird, counter-productive corporate system.


micoul81 today at 6:27 PM

i need karma

fallinditch today at 6:08 PM

Increasingly, there is a disconnect between established operational/corporate systems and the new AI-enhanced powers of individual workers.

The over-production of documents is just one symptom. It's clear that organizations are struggling to successfully evolve in the era of worker 'superpowers'. Probably because change is hard!

Perhaps this is indicative of a failure of imagination as much as anything? The AI era is not living up to its potential if workers are given superpowers but are not empowered to use them effectively.

Empowered teams and individuals have more accountability and ownership of business outcomes - this points to a need for flatter hierarchies and enlightened governance, supported by appropriate models of collaboration and reporting (AI helps here too!).

In the OP article, the writer IMHO reached the wrong conclusion about their colleague who built a system that didn't work. This sounds like the sort of initiative that should be encouraged, and perhaps the failure here points to a lack of technical support and oversight for the colleague's project.

Now more than ever, organizations need enlightened leadership with flexible mindsets, capable of envisioning and executing radical organizational strategies.