I've noticed a huge drop in negative comments on HN when discussing LLMs in the last 1-2 months.
All the LLM-coded projects I've seen shared so far[1] have been tech toys though. I've watched things pop up on my Twitter feed (usually games-related), then quietly go off air before reaching a gold release (I manually keep up to date with what I've found, so it's not the algorithm).
I find this all very interesting: LLMs don't change the fundamental drives needed to build successful products. I feel like I'm observing the TikTokification of software development. I don't know why people aren't finishing. Maybe they stop when the "real work" kicks in. Or maybe they hit the limits of what LLMs can do (so far). Maybe they jump to the next idea to keep chasing the rush.
Acquiring context requires real work, and I don't see a way forward to automating that away. And to be clear, context here means human needs, i.e. the reasons why someone will use your product. In the game development world, it's very difficult to overstate how much work needs to be done to create a smooth, enjoyable experience for the player.
While anyone may be able to create a suite of apps in a weekend, I think very few people will have the patience and time to maintain them (just like software development before LLMs! i.e. Linux, open source software, etc.).
[1] yes, selection bias. There are A LOT of AI devs just marketing their LLMs. Also it's DEFINITELY too early to be certain. Take everything I'm saying with a one pound grain of salt.
It could be that the people who are focused on building monetizable products with LLMs don't feel the need to share what they are doing - they're too busy quietly getting on with building and marketing their products.
Sharing how you're using these tools is quite a lot of work!
The type of person most inclined to lean on AI is precisely the person who will struggle most when it comes time to do the last essential 20% of the work that AI can't do. Once thinking is required to bring all the parts into a whole, the person who hands their thinking over to AI will not be equipped to do the work, either because they never had the capacity to begin with or because AI has smoothed out the ripples of their brain. I say this from experience.
Deploying and maintaining something in a production-ready environment is a huge amount of work. It's not surprising that most people give up once they have a tech demo, especially if they're not interested in spending a ton of time maintaining these projects. Last year Karpathy posted about a similar experience, where he quickly vibe coded some tools only to realize that deploying them would take far more effort than he originally anticipated.
I think it's also rewarding to just be able to build something for yourself, and one benefit of scratching your own itch is that you don't have to go through the full effort of making something "production ready". You can just build something that's tailored specifically to the problem you're trying to solve without worrying about edge cases.
Which is to say, you're absolutely right :).
> huge drop in negative comments on HN when discussing LLMs
I interpret it more as spooked silence
Yeah, I do a lot of hobby game making and the 80/20 rule definitely applies. Your game will be "done" in 20% of the time it takes to create a polished product ready for mass consumption.
Stopping there is just fine if you're doing it as a hobby. I love to do this to test out isolated ideas. I have dozens of RPGs in this state, just to play around with different design concepts from technical to gameplay.
Sometimes I feel like a lot of those posts are instances of Kent Brockman: "I, for one, welcome our new insect overlords."
Given the enthusiasm of our ruling class towards automating software development work, it may make sense for a software engineer to publicly signal how on board with it they are as a professional.
But, I've seen stranger stuff throughout my professional life: I still remember people enthusiastically defending EJB 2.1 and xdoclet as perfectly fine ways of writing software.
> I've noticed a huge drop in negative comments on HN when discussing LLMs in the last 1-2 months.
real people get fed up with debating the same tired "omg new model 1000x better now" posts/comments from the astroturfers, the shills and their bots each time OpenAI shits out a new model
(article author is a Microslop employee)