I know, it's weird. I'm really excited about it, but somehow there's a bunch of people on here who are largely negative about it all and make weirdly misinformed statements. It's like they haven't really tried it; they think they have, but it's often obvious they tried it early last year or didn't really get it.

At minimum, I think you need to vibe code a piece of software of some sort. Don't write a single line of code; just prompt your way to a finished product. This will tell you a lot. The first thing you start realizing is that your approach to prompting makes a big difference to what you get. You really need to think about the design and feature set of the program you're making. For me, it became plainly obvious that coding was more of a hindrance, and I was too busy thinking about the application itself, its features, and how they should work together. Not that it's all happy days; problems still occur at the code level. But once you get to this stage you start having a really clear idea of its strengths and weaknesses. For me, I actually find myself thinking a lot more, having more ideas, experimenting more, and iterating a lot faster.
You live in a bubble for sure.
The vast majority of people don’t care about AI. If they prefer ChatGPT over search, it’s largely because the quality of search has been degraded in favour of increasing revenue - whereas it’s in the upstart's best interest to try to give the user exactly what they want.
> I thought people here got excited about technology.
I am passionately excited about technology that serves people, but the current hype around AI is not that.
LLMs are, fundamentally, a super cool development. That we can now generate large regions of text that are statistically likely to be perceived as accurate is phenomenally neat.
But that's all they are: statistical text prediction engines. They do not think or reason; they generate next tokens based on prior context[1]. Unfortunately, because they predict really convincing text, a lot of people are willing to believe them to be more capable than they are. As a consequence, the companies developing these technologies have chosen to lean into rhetoric that exacerbates these beliefs because it means more money for them. The people running these companies fundamentally do not care about the human experience when they could instead care about profits[2].
AI, as an industry, is mostly hype — and it's a particularly insidious form of hype that preys on people's desire for cool, new things at the expense of our collective long-term well-being. The exponential ramp-up of AI data centers is wreaking havoc on natural ecosystems and local communities; the humans "employed" for RLHF are severely underpaid and exploited, and are mostly powerless to make other choices; the damage to our digital infrastructure and community is visible daily (how many "are you a human?" captchas do you get today versus ten years ago? how many useless pull requests do repositories big and small receive on a daily basis? how many "look I vibe coded a shitty app in three days and it's riddled with bugs, let me post about it" posts do we see now?). And this is all to say nothing of the rapidity with which huge swaths of people have simply decided they fundamentally don't care about other people: they'd rather AI-generate some garbage company logo than employ a graphic designer to do a better job; they'd rather AI-generate copytext than hire an editor; they'd rather reach for the cheap, built-from-the-labor-of-others-without-respect-to-them tool that outsources all creativity and effort and gives them an immediately available "eh, it's good enough" solution. Before long, we will be inundated with "good enough", and we will forget what it was like to have good.
I'm excited about technology. I am not excited about the current incarnation of this technology.
[1] I am fundamentally not interested in sophistic arguments that "this is how humans work!" We don't know how humans work, so I make a choice to maintain a belief — based on my own experiences and learning — that LLMs do not accurately reflect the workings of a human brain.
[2] See "Empire of AI" by Karen Hao.
>What's with all the anti-AI sentiment here? Is it a bunch of unemployed devs?
So what? If you aren't part of inflating the hype beast, you're a victim of it. Eventually no one will be left to hype it, because we'll all have lost the battle.
This site slid into pessimism like ten years ago
> Is it a bunch of unemployed devs?
probably quite the opposite
> I don't really worry about my skills atrophying
well some folks probably do, which is why they seem "anti-AI" to you (I certainly do care about my skills atrophying, and it's the reason I don't use "AI")
> excited about technology
there is a difference between being excited about technology and falling into marketing traps