What I see a lot is that the syntax and overall code architecture are textbook, but it's completely the wrong approach, and it creates extremely complicated tech debt. All the code review comments are about syntax, and none are about the big picture of the business problem or whether the implementation is overcomplicated.
In the short run (1-2 years) there are no repercussions for this, but eventually making changes becomes extremely risky and complicated. The individuals who built the software will lord their arcane knowledge of this big pile of junk over everyone else.
Totally agree, I've found the same thing working in big tech.
People focus way too much on the superficial stuff like code cleanliness, formatting, organization, and the local structure of the code.
Because that stuff is easy to talk about, kind of like bikeshedding.
Plus a lot of the time code reviewers just want to comment on something to show they aren't just rubber-stamping it.
Whereas it takes a lot more brainpower to think about logic, correctness, and "does this change actually make sense in the big picture?"
Part of it, too, is that as a reviewer you often just don't have enough context to know whether the change makes sense.
It's hard to have good enough requirements gathering, documentation, and product design practices to let an engineer wrap their head around a problem well enough to come up with (and then consistently follow) a thoughtful, long-term-maintainable design for a system during implementation.
And it's even harder to make sure everyone who reviews or tests that code has a similar understanding of the problem the system is trying to solve, so they can review and test for fitness for purpose and challenge or validate the design choices made.
And it's perhaps hardest of all to have an org-wide planning or roadmap process that can tolerate that well-informed peer reviewer or tester actually pushing back in a meaningful way and "delaying" work.
That's not to say this level of shared understanding in a team isn't possible or isn't worth pursuing: but it IS a hard thing to do, and relatively few engineering organizations pull it off consistently. Some view it as an unacceptable level of overhead and don't even try. But most, in my experience, just hope that enough of the right things happen on enough of the right projects to keep the whole mess afloat.
Catching architecture problems in code review is usually a red flag for process problems. Anything substantial should have had its architecture reviewed before the code review, especially if it spans multiple commits. In the ideal case, code review should feel rote and focus on rubrics around style and best practices. Of course you will still find architectural issues during code review in many cases, but it shouldn't happen often, because it's not reliable to expect the reviewer to have the context needed to catch them.
I've seen too much of the same. It strikes me that the pattern you describe also matches a lot of the AI-generated code I see, especially big chunks of generated code. Are we automating this problem and going all-in on the long-term costs?
It's getting much worse with AI now, too. People just blindly trust the AI's decisions, which even as of Opus 4.5 are generally misguided for nontrivial problems, and in the best case it doesn't even consider the bigger picture, given context-window limitations.
So the mountain of syntactically correct functional slop is growing faster than ever before.
100% this. Stuff like database schemas gets committed in the first sprint and never refactored, which completely locks you into long-term design decisions, and then every subsequent PR gets held up for days over meaningless "code quality" arguments that ultimately affect nothing.
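To make the schema lock-in point concrete, here's a hypothetical sketch (the `orders` table and its free-text `status` column are made up for illustration): a cheap first-sprint decision that every later query has to work around, because fixing the column itself would by then be a risky migration.

```python
import sqlite3

# Sprint 1: free-text status column, committed in a hurry and never revisited.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

# Two years of inconsistent writes later...
conn.executemany(
    "INSERT INTO orders (status) VALUES (?)",
    [("shipped",), ("Shipped",), ("SHIPPED ",)],
)

# Every query now has to normalize the column instead of relying on a
# constrained enum or lookup table; migrating the live data is the risky part.
count = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE LOWER(TRIM(status)) = 'shipped'"
).fetchone()[0]
print(count)  # 3
```

The irony is that the `LOWER(TRIM(...))` workaround sails through code review, while the PR gets nitpicked on naming.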