Hacker News

My article on why AI is great (or terrible) or how to use it

158 points by akshayka yesterday at 6:17 PM | 222 comments

Comments

bdauvergne today at 11:26 AM

Seems people read the blog but not the code. I looked at the stated rewrite of Numpy in Rust:

> As an introductory project, I rewrote Numpy in Rust. It was great fun.

That's not a rewrite at all; it's just a wrapping of existing Rust linear algebra libraries (faer, BLAS, etc.) with a more Numpy-like API. It seems to me that every AI project I look at is just a mashup/wrapper over existing things. Where are the real bootstrapped new things built with AI? Is there any big OSS project (Linux kernel, PostgreSQL, Django, whatever) with serious bugfixes or new features implemented by AI that we could look at?

Are so many people in programming implementing middleware / wrapping existing APIs all day that it gives them a feeling of liberation to be able to delegate those tasks?

show 6 replies
CharlesW yesterday at 7:58 PM

I get vibe-coders not having a good experience once the honeymoon is over. But I'm fascinated that a professional software developer could have such a different experience from mine.

    • LLMs generate junk
    • LLMs generate a lot of junk
show 5 replies
arach today at 6:01 AM

> My personal favorite hooks though are these:

  "Stop": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "afplay -v 0.40 /System/Library/Sounds/Morse.aiff"
      }]}],
  "Notification": [
  {
    "hooks": [
      {
        "type": "command",
        "command": "afplay -v 0.35 /System/Library/Sounds/Ping.aiff"
      }]}]
These are nice, but it's even nicer when Claude talks to you when it needs your attention.

Easy to implement -> a hook can talk to ElevenLabs or OpenAI, and it's a pretty delightful experience.
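
For instance, a minimal sketch of such a hook script, assuming OpenAI's /v1/audio/speech endpoint, an OPENAI_API_KEY environment variable, and macOS afplay; the file name, voice, and message are illustrative, not from the comment:

    # notify_speak.py -- hypothetical Claude Code hook command (sketch).
    import os
    import subprocess
    import tempfile

    import requests

    def speak(text):
        # Ask OpenAI's text-to-speech endpoint for an MP3 of the message.
        resp = requests.post(
            "https://api.openai.com/v1/audio/speech",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"model": "tts-1", "voice": "alloy", "input": text},
            timeout=30,
        )
        resp.raise_for_status()
        # Write the audio to a temp file, play it, then clean up.
        with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
            f.write(resp.content)
            path = f.name
        subprocess.run(["afplay", path], check=False)
        os.unlink(path)

    if __name__ == "__main__":
        speak("Claude needs your attention")

It would be wired up like the afplay hooks above, e.g. with "command": "python3 /path/to/notify_speak.py" under "Notification".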

show 1 reply
AdieuToLogic today at 5:43 AM

The author presents a false dichotomy when discussing "Why Not AI".

  ... there are some serious costs and reasonable 
  reservations to AI development. Let's start by listing 
  those concerns

  These are super-valid concerns. They're also concerns that 
  I suspect came around when we developed compilers and 
  people stopped writing assembly by hand, instead trusting 
  programs like gcc ...
Compilers are deterministic, making their generated assembly code verifiable (for those compilers that produce assembly code). "AI", such as the "Claude Code (or Cursor)" referenced in the article, is nondeterministic in its output and therefore incomparable to a compiler.

One might as well equate the predictability of a Fibonacci sequence[0] to that of a PRNG[1] since both involve numbers.

0 - https://en.wikipedia.org/wiki/Fibonacci_sequence

1 - https://en.wikipedia.org/wiki/Pseudorandom_number_generator
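
A tiny Python illustration of that contrast, mine for concreteness rather than the commenter's:

    import random

    def fib(n):
        # Deterministic: fib(10) is 55 on every run, so the output is verifiable.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib(10))          # always 55
    print(random.random())  # differs run to run unless the PRNG is seeded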

show 4 replies
deergomoo yesterday at 7:55 PM

I will never as long as I live understand the argument that AI development is more fun. If you want to argue that you’re more capable or whatever, fine. I disagree but I don’t have any data to disprove you.

But saying that AI development is more fun because you don’t have to “wrestle the computer” is, to me, the same as saying you’re really into painting but you’re not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.

show 26 replies
somekyle2 yesterday at 8:01 PM

I suspect that lots of developers who are sour on relying significantly on AI _would_ agree with most of this, but they see that logic leading to (as the article notes) "the skill of writing and reading code is obsolete, and it's our job to make software engineering increasingly entirely automated", really don't like that outcome, and so try to find a way to reject it.

"The skillset you've spend decades developing and expected to continue having a career selling? The parts of it that aren't high level product management and systems architecture are quickly becoming irrelevant, and it's your job to speed that process along" isn't an easy pill to swallow.

show 4 replies
aidos yesterday at 9:45 PM

The linked Claude generated script for giving more control over permissions in tool use is… typically Claude.

The code interleaves rules and control flow, buries side effects like "exit" inside functions, and hinges on a stack of regexes for parsing bash.

This isn’t something I’ve attempted before but it looks like a library like bashlex would give you a much cleaner and safer starting point.

For a “throwaway” script like this maybe it’s fine, but it is typical of the sort of thing I’m seeing spurted out, and I’m fascinated to see what people’s codebases look like these days.

Don’t get me wrong, I use CC every day, but man, you do need to fight it to get something clean and terse.

https://gist.github.com/mrocklin/30099bcc5d02a6e7df373b4c259...
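
A minimal sketch of that approach, assuming the bashlex package (pip install bashlex); the allowlist and function names are hypothetical, not from the linked gist:

    import bashlex

    ALLOWED = {"ls", "cat", "grep", "echo"}  # hypothetical allowlist

    def command_names(node):
        # Recursively collect the first word of every simple command,
        # including commands nested inside substitutions.
        names = []
        if node.kind == "command" and node.parts:
            first = node.parts[0]
            if first.kind == "word":
                names.append(first.word)
        for part in getattr(node, "parts", []) or []:
            names.extend(command_names(part))
        return names

    def is_permitted(line):
        # Permit a line only if every command in it is on the allowlist.
        try:
            trees = bashlex.parse(line)
        except Exception:
            return False  # refuse anything bashlex cannot parse
        return all(name in ALLOWED
                   for tree in trees
                   for name in command_names(tree))

    print(is_permitted("ls -l && grep foo bar.txt"))  # True
    print(is_permitted("ls; rm -rf /"))               # False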

falloutx yesterday at 8:21 PM

AI development for me is not fun. It may be faster and more productive; the jury is still out on that. But typing code and understanding each line has its advantages. AI also takes a lot of the creativity out of programming, and climbing the abstraction ladder isn't for everyone.

Do we want everyone to operate at PM level? The space for that is limited. It's easy to say you enjoy vibe coding when you are high up the chain, but most devs are not experienced or lucky enough to feel stable when workflows change every day.

But I don't feel I have enough data to say whether vibe coding or hand coding is better; I am personally doing tedious tasks with AI and still writing code by hand all the time.

Also, the author presents rewriting Numpy in Rust as some achievement, but the AI was most probably trained on Numpy and RustyNum. AIs are best at copying code, so it's not really a big thing.

noddingham yesterday at 9:05 PM

None of these articles address how we'll go from novice to expert, either self-taught or through the educational system; all the bloggers got their proverbial "10k hours" before LLMs were a thing. IMO this isn't about abstractions; the risk is wholesale outsourcing of learning. And no, I don't accept the argument that correcting an LLM's errors is the same as correcting a junior dev's errors, because the junior dev would (presumably) learn and grow to become a senior. The technology doesn't exist for an LLM to do the same today, and there's no viable path in that direction.

Can someone tell me what the current thinking is on how we'll get over that gap?

show 3 replies
rajangdavis yesterday at 8:28 PM

The more I use AI, the more I think about the book Fooled By Randomness.

AI can take you down a rabbit hole that makes you feel like you are being productive but the generated code can be a dead end because of how you framed the problem to the AI.

Engineers need enough discipline to understand the problems they are trying to solve before delegating a solution to a stochastic text generator.

I don’t always like using AI but have found it helpful in specific use cases such as speeding up CI test pipelines and writing specs; however, someone smarter than me / more familiar with the problem space may have better strategies that I cannot think of, and I have been fooled by randomness.

show 1 reply
ctoth yesterday at 8:25 PM

Re audio, I have been working on a nice little tool called Claudio[0] which adds sounds to Claude Code in a nice configurable sort of way. It's still pretty new but it's a lot better than directly hooking up afplay :)

[0]: https://claudio.click

mcintyre1994 today at 9:34 AM

I disagree with them on the large-PR section. I agree that we sometimes don’t review these in detail, but the thing that enables that for me is a trusted contributor explaining deterministically what they did: I renamed whatever variable, I changed whatever script in all these places, I updated whatever lint rule and fixed all the errors, I moved whatever to a separate package.

All of these can create a diff across lots of files, but can be easily explained to a reviewer.

The problem is that I can’t trust an LLM to act deterministically the way I trust some people to. So I can’t just take that higher level view of “yep cool that sounds like a good idea” and skim the changes.

show 1 reply
mccoyb yesterday at 8:22 PM

Who knew that these massive high-dimensional probability distributions would drive us insane

pdude444 today at 7:17 AM

I think this take doesn’t account for the slope of improvement we have seen from AI. Take Claude Opus 4.5: I’ve seen dramatic improvements in the model’s ability to handle large context windows.

linkregister yesterday at 8:45 PM

Rather than spending iterations crafting precise permissions, why not just run with

    --dangerously-skip-permissions
If run in a devcontainer[1][2], the worst thing that can happen is that it deletes everything in the filesystem below the mounted repo. Recovery would entail checking out the repo again.

1. (conventional usage) https://code.visualstudio.com/docs/devcontainers/containers

2. (actual spec) https://containers.dev/
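
For reference, a minimal .devcontainer/devcontainer.json along those lines; the base image and install command are assumptions, not taken from the comment:

    {
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      "postCreateCommand": "npm install -g @anthropic-ai/claude-code"
    }

Claude Code then runs inside the container with the flag above, so the blast radius is limited to the container and the mounted workspace.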

show 3 replies
beej71 today at 2:55 AM

>>I get it, you’re too good to vibe code. You’re a senior developer who has been doing this for 20 years and knows the system like the back of your hand.

>> [...]

>>No, you’re not too good to vibe code. In fact, you’re the only person who should be vibe coding.

All we have to do is produce more devs with 20 years of experience and we'll be set. :)

show 1 reply
journal today at 9:32 AM

The time spent reading threads like this would be better spent buying lottery tickets.

joshribakoff yesterday at 8:20 PM

I clicked out of the article since it starts out with a contradiction.

Experienced engineers can successfully vibe code? By definition, vibe coding means not reading the output.

If you’re not reading your output, then why does skill level even matter?

show 3 replies
spankibalt yesterday at 8:59 PM

> "I will never as long as I live understand the argument that AI development is more fun."

What I always find amusing are the false equivalences, where one or several (creative) processes involving the hard work that is a fundamental part of the craft get substituted by push-to-"I did this!1!!" slop.

How does the saying go? "I hate doing thing x. The only thing I hate more is not doing thing x." One either owns that, or one doesn't. So it is indeed not mysterious, especially not in a system where "fake it till you make it" has been and is advertised as a virtue.

mannanj today at 9:47 AM

Weird question: if a system you are using is intended to extract from you, nonconsensually taking your intellectual property because the unaccountable big companies (the "AI" companies) do shady stuff, is it justified to use it for the productivity gains because it will 'eventually' get there anyway?

Does the potential gain as an early adopter make it morally OK?

Because that's how these tiered uses of AI, and its improvement, have worked, IMO. It got lots of training data from juniors and seniors using it over the last two years, and it got better. It gets more appealing, and leverages human psychology and marketing, to get higher-level engineers to train it as it and the companies extract more data. It needs and gets more data from the people willfully complying and using it. I wonder if there's a game-theory model for this conundrum - what typically happens in nature in these scenarios?

jaredcwhite today at 12:16 AM

I'm so completely over these types of articles. Just as the AI techbros want to convince people that "the genie is out of the bottle" and that these services & practices are inevitable, it is also the case that the cohort of people who explicitly eschew using genAI is significant and growing. Nobody is being convinced reading this…like "wow, I vowed never to use genAI as a software developer, and then suddenly I read this article and now I've seen the light!"

opponent4 today at 6:10 AM

> That being said, there are some serious costs and reasonable reservations to AI development.

Neither this nor the discussion here so far mentions ethics. It should.

According to recent reports, AI now consumes more water than the global bottled-water industry. These datacenters strain our grids, and where needs can't be met they employ some of the least efficient ways to generate electricity, producing tons of pollution. The pollution and water problems are hitting poorer communities hardest, as the more affluent ones can afford much better legal pushback.

Next, alas, we can't avoid politics. The shadow that Peter Thiel and a16z (who named one of the two authors of the Fascist Manifesto one of their patron saints) cast over these tools is very long. These LLMs are used as a grand excuse to fire a lot of people and also to manufacture fascist propaganda on a scale you have never seen before. Whether or not these were goals when Thiel & gang financed them, it is undeniable that they are now indispensable in helping the rise of fascism in the United States. Even if you were to say "but I am using code-only LLMs", you are still stuffing the pockets of these oligarchs.

The harm these systems cause is vast and varied. We have seen them furthering suicidal ideation in children and instructing them on executing these thoughts. We have seen them generating non-consensual deepfakes at scale including those of children.

alfalfasprout yesterday at 8:00 PM

For a senior engineer, some very odd takes here:

"Our ability to zoom in and implement code is now obsolete Even with SOTA LLMs like Opus 4.5 this is downright untrue. Many, many logical, strategic, architectural, and low level code mistakes are still happening. And given context window limitations of LLMs (even with hacks like subagents to work around this) big picture long-term thinking about code design, structure, extensibility, etc. is very tricky to do right."

If you can't see this, I have to seriously question your competence as an engineer in the first place tbh.

"We already do this today with human-written code. I review some code very closely, and other code less-so. Sometimes I rely on a combination of tests, familiarity of a well-known author, and a quick glance at the code to before saying "sure, seems fine" and pressing the green button. I might also ask 'Have you thought of X' and see what they say.

Trusting code without reading all of it isn't new, we're just now in a state where we need to review 10x more code, and so we need to get much better at establishing confidence that something works without paying human attention all the time.

We can augment our ability to write code with AI. We can augment our ability to review code with AI too."

Later he goes on to suggest that confidence is built via TDD. The problem is, if the AI is generating both the code and the tests, I've seen time and time again, in both internal and OSS projects, how major assumptions end up incorrect, mistakes compound, etc.

show 3 replies
dmezzetti yesterday at 8:27 PM

AI development is good for those who want to do it, but opting out isn't a terminal career decision for those who don't.

bossyTeacher yesterday at 10:46 PM

"AI development is more fun. I do more of what I like (think, experiment, write) and less of what I don't like (wrestle with computers).

I feel both that I can move faster and operate in areas that were previously inaccessible to me (like frontend). Experienced developers should all be doing this. We're good enough to avoid AI Slop, and there's so much we can accomplish today."

If frontend was "inaccessible" and AI makes it "accessible", I would argue that you don't really know frontend and should probably not be doing it professionally with AI. Use AI, yes, but learn frontend without AI first. And his "Experienced developers should all be doing this" is ridiculous. He should be honest and confess that he doesn't like programming. He probably enjoys systems design or some sort of product-design role that doesn't involve programming. But none of these people are "developers".

NoraCodes yesterday at 8:13 PM

I don't care that AI development is more fun for the author. I wouldn't care if all the evidence pointed toward AI development being easier, faster, and less perilous. The externalities, at present, are unacceptable. We are restructuring our society in a way that makes individuals even less free and a few large companies even more powerful and wealthy, just to save time writing code, and I don't understand why people think that's okay.

show 3 replies
johnwheeler today at 4:37 AM

AIs don't generate junk. Engineers with little experience _think_ they generate junk. Or engineers on that bandwagon, which in my opinion is driven by denial or naivety.

If you know what the fuck you're doing, they're incredible. Scarily so.

show 1 reply
llmslave2 yesterday at 8:22 PM

I thought the article was going to be about AI zealotry but it was just AI zealotry.

show 1 reply