Hacker News

pkasting · today at 8:11 PM

Ex-Google here; there are many people, both current and former Googlers, who feel the same way as the composite coworker in the linked post.

I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.

Accordingly, I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I were more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency.)


Replies

CSMastermind · today at 9:19 PM

My friends at Google are some of the most negative about the potential of AI to improve software development. This always surprised me; I assumed Google would be one of the first places to adopt these tools internally.

hectdev · today at 8:19 PM

It's the latest tech holy war: tabs vs. spaces, but more existential. I'm usually anti-hype, yet I've been convinced of AI's usefulness for coding over and over. And whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that, but online I get a lot of pushback despite having tangible examples of how it has been useful.

Xeronate · today at 10:50 PM

Is it true that it's bad for learning new skills? My gut tells me it's useful as long as I don't use it to cheat the learning process and I mainly use it for things like follow-up questions.

icedchai · today at 10:40 PM

My experience is that the productivity gains are negative to neutral. Someone else basically wrote that the total "work" was simply being moved from one bucket to another. (I can't find the original link.)

Example: you might spend less time on initial development, but more time on code review and rework. That has been my personal experience.

xg15 · today at 10:59 PM

Not sure if this falls under category #2 or is a new one, but also: places where AI risks effectively becoming a drug and actively harming the user: virtual friends/spouses, delusion-confirming sycophants, etc.

WhyOhWhyQ · today at 9:16 PM

I also would like to see AI die off except for a few niches, but I find myself using it more and more. Interestingly, it is not a productivity boost in the way I end up using it. Actually, I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it whether everything I do is a good idea. Even in the Stack Overflow era, I would try to find a reference for every little choice I made to determine if it was good or bad practice.

mips_avatar · today at 9:13 PM

The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in serious trouble (severe food sickness, stolen items, missed flights), and I don't know how I would have solved those problems without ChatGPT's help.
