The blog post has a bunch of charts, which gives it a veneer of objectivity and rigor, but in reality it's all vibes and conjecture. Meanwhile, recent empirical studies actually point in the opposite direction, showing that AI use increases inequality rather than decreasing it.
https://www.economist.com/content-assets/images/20250215_FNC...
https://www.economist.com/finance-and-economics/2025/02/13/h...
Yeah, the graphs make some really big assumptions that don't seem to be backed up anywhere except AI maximalist head canon.
There's also a gap in addressing vibe-coded "side projects" that get deployed online as businesses. Is the codebase super large and complex? No. Is AI capable of taking input from a novice and making something "good enough" in this space? Also no.
In a sense I agree. I don't necessarily think it has to be the case, but I got the same feeling that it was wearing a white lab coat to look like a scientist. I think their honest attempt was to express the relationship as they perceive it.
I think this could still be a valuable form of communication if you can clearly express that it represents a hypothesis rather than a measurement. The simplest fix would be to label the graphs as "hypothesis", but a subtle yet easily identifiable visual change might be better.
Wavy lines for the axes spring to mind as a way to express that. I would worry about the ability to express hypotheses about definitive events that happen when a value crosses an axis, though; you'd probably want a straight line for that. Perhaps it would be sufficient to have wavy lines only at the ends of the axes, beyond the point at which the plot appears.
Beyond that, I think the article presumes the curve flattens as mastery is achieved. I'm not sure that's a given; perhaps it only seems that way because we evaluate proportional improvement, implicitly placing skill on a logarithmic scale.
I'd still consider the author's post to be in better faith than the Economist links.
I'd like to know what people think, and for them to say it honestly. If they have hard data, they should show it and how it confirms their hypothesis. At the other end of the scale is gathering data and only exposing the measurements that imply a hypothesis you are not brave enough to state explicitly.
The graphic has four studies that show increased inequality and six that show reduced inequality.
Thanks for the links. That should be obvious to anyone who believes that $70 billion datacenters (Meta) are needed and that the investment will be amortized by subscriptions (in Meta's case, also by enhanced user surveillance).
The means of production are in a small oligopoly, the rest will be redundant or exploitable sharecroppers.
(All this under the assumption that "AI" works, which its proponents affirm in public at least.)
Yup. As a retired mathematician who craves the productivity of an obsessed 28-year-old, I've been all in on AI in 2025. I'm now on Claude's $200/month Max plan in order to use Claude Code with Opus 4 without restraint. I still hit limits, usually when I run parallel sessions to review a 57-file legacy codebase.
For a time I refused to talk with anybody or read anything about AI, because it was all noise that didn't match my hard-earned experience. Recently HN has included some fascinating takes. This isn't one.
I have the opinion that neurodivergents are more successful using AI. This is so easily dismissed as hollow blather, but I have a precise theory backing this opinion.
AI is a giant association engine. Linear encoding (the "King - Man + Woman = Queen" thing) is linear algebra. I taught linear algebra for decades.
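The linear-encoding claim can be sketched with a toy example. These 2-D vectors are hand-crafted for illustration (real embedding models learn hundreds of dimensions from data), but they show the arithmetic:

```python
import numpy as np

# Hand-crafted toy "embeddings": axis 0 = royalty, axis 1 = gender.
# (Illustrative only -- not vectors from any real model.)
vocab = {
    "man":     np.array([0.0,  1.0]),
    "woman":   np.array([0.0, -1.0]),
    "king":    np.array([1.0,  1.0]),
    "queen":   np.array([1.0, -1.0]),
    "peasant": np.array([-1.0, 0.0]),
}

def nearest(v, exclude=()):
    """Return the vocab word whose vector has the highest cosine
    similarity to v, skipping the words used in the query."""
    best, best_sim = None, -2.0
    for word, w in vocab.items():
        if word in exclude:
            continue
        sim = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

v = vocab["king"] - vocab["man"] + vocab["woman"]  # = [1, -1]
print(nearest(v, exclude={"king", "man", "woman"}))  # queen
```

The interesting part is that trained models end up with directions (here, the gender axis) that behave linearly like this without anyone designing them.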
As I explained to my optometrist today, if you're trying to balance a plate (define a hyperplane) with three fingers, it works better if your fingers are farther apart.
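The plate analogy is really a statement about conditioning, and a minimal numpy sketch (with made-up numbers, just to show the effect) bears it out: nudge one support point by the same amount, and the plane's normal tilts far more when the points are bunched together.

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def tilt_after_nudge(points, nudge=np.array([0.0, 0.0, 0.01])):
    """Angle (radians) the plane's normal tilts when one
    support point is nudged 1 cm vertically."""
    n0 = plane_normal(*points)
    n1 = plane_normal(points[0] + nudge, points[1], points[2])
    return np.arccos(np.clip(n0 @ n1, -1.0, 1.0))

# Fingers far apart vs. bunched together (same plate, same nudge).
wide  = [np.array([0.0, 0.0, 0.0]),
         np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0])]
tight = [p * 0.1 for p in wide]

print(tilt_after_nudge(wide), tilt_after_nudge(tight))
```

Shrinking the triangle of support points by 10x makes the normal roughly 10x as sensitive to the same perturbation.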
My whole life people have rolled their eyes when I categorize a situation using analogies that are too far flung for their tolerances.
Now I spend most of my time coding with AI, and it responds very well to my "fingers farther apart" far reaching analogies for what I'm trying to focus on. It's an association engine based on linear algebra, and I have an astounding knack for describing subspaces.
AI is raising the ceiling, not the floor.
I'm honestly tired of all the misinformation about AI being posted.
You are correct. It's not hard to see why (AI imposes cost interference), but there are a lot of bots that keep promoting slop, and moderation doesn't seem to be doing anything about it.
I'm tired of seeing a significant percentage of the article posts in the top 300 being slop.
> inequality
It's free for everyone with a phone or a laptop.
Of course AI increases inequality. It's automated ladder pulling technology.
To become good at something you have to work through the lower rungs and acquire skill. AI does all those lower-level jobs, puts the people who need those jobs for experience on the street, and robs us of future experts.
The people who benefit the most are those who are already up on top of the ladder investing billions to make the ladder raise faster and faster.