Hacker News

stiiv · today at 2:00 PM · 20 replies

> If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.

Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think that onboarding might be challenging for some people in the tech's current state.

It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, and onboarding will get easier. I think it will also become easier to choose a particular suite of products. Waiting is not a bad idea.


Replies

augusto-moura · today at 2:29 PM

What annoys me a bit is companies forcing AI tools, collecting usage metrics, and actively hunting down the engineers who don't use the tool "enough". I've never seen anything like it for a technically optional tool. Even in the past, aside from technical limitations, you were never required to use a certain amount of a tool.

It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.

nemomarx · today at 2:06 PM

It also seems like skills with a particular tech (prompt engineering, harnesses, mixture-of-experts setups) don't necessarily pay off when there's a sea change. Hard to predict what you'll want in a few years anyway, right?

vablings · today at 3:04 PM

There really isn't anything special about using AI anyway; it's not rocket science. Sometimes I will use AI to write me some Tailwind tags, sometimes I will use AI to write me a static site for a custom report.

Most of my AI usage comes from doing things I don't enjoy doing, like making a series of small tweaks to a function or block of code. Honestly, I just levelled the playing field with vim users, and it's nothing to write home about.

II2II · today at 2:38 PM

I almost entirely agree with the author's assessment of new technology. Yet that statement rubbed me the wrong way.

Sometimes it is better to get into things early, because they grow more complex as time goes on and are easier to pick up early in their development. Consider the Web. In the early days, it was just HTML. That was easy to learn. From there on, it was simply a matter of picking up new skills as the environment changed. I'm not sure how I would deal with picking up web development if I started today.

garyfirestorm · today at 3:11 PM

Counterpoint: it's always advantageous to learn and grow as things evolve. This way you have an active role, and maybe a say, in how it will evolve. And maybe you could contribute towards that evolution (despite poor execution, openclaw showed what LLMs could be doing).

> There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate.

Not long ago we were ridiculing Gen Z for not knowing why the save icon looks like a floppy disk.

Do you want to feel like that in the next 5-10 years?

gradus_ad · today at 2:39 PM

But it's so easy to try something like Claude Code. It's not like you need to get up to speed. There is no learning curve*; that's the nature of AI. Just start using it and you'll see why it has attracted so much hype.

*I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense.

isk517 · today at 5:58 PM

I've let tech pass me by many times, and then the tech that passed me, which I was never in a position to use, got replaced by the next big tech innovation. I've found that you can climb aboard the train at any time, since everything new is a lot easier to get started with than learning C and having to manually allocate memory.

ozim · today at 5:00 PM

I think it was challenging 2 or 3 years ago. I took the plunge a year ago, and it was already quite easy to use the mainstream tools. I could run some local models with Ollama just by installing it. I could use coding assistants in VSCode. Connecting over the HTTP API to use AI within applications you build was also easy, for both local models and the cloud.
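The HTTP part really is small. As a rough sketch, assuming a local Ollama server on its default port 11434 and using an illustrative model name (not the commenter's actual setup):

```python
# Sketch only: calling a local Ollama server over its HTTP API.
# Assumes Ollama is installed and listening on the default port 11434,
# and that a model (here "llama3", purely illustrative) has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt: str, model: str = "llama3") -> dict:
    # Ollama's /api/generate endpoint takes a JSON body; stream=False
    # asks for a single JSON reply instead of a streamed one.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    body = json.dumps(build_generate_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # requires a running server
        return json.loads(resp.read())["response"]
```

The same request shape works from any language with an HTTP client, which is most of why wiring AI into your own applications is now so easy.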

There are loads of BS tools out there of course but I don’t use that many tools.

abustamam · today at 4:42 PM

Broadly speaking I agree. But the reality for many SWEs is that if they don't learn new AI tools, they'll get let go. It's use AI or be replaced by AI (or, more accurately, be replaced by someone using AI) for many folks.

I think it's a luxury to be able to ignore a trend like AI. Crypto was fine to ignore because it didn't really replace anyone, but AI is a different beast.

randusername · today at 2:57 PM

Counterpoint:

Mistakes are less costly in the beginning and the knowledge gained from them is more valuable.

Over-sharing on social media. Secret / IP leaks with LLMs. That kind of thing.

I agree:

FOMO is an all-in mindset. The author admits to dabbling out of curiosity and realizing the time is not right for him personally. I think that's a strong call.

imtringued · today at 2:49 PM

I think this is particularly evident with AI.

The early adopters started years ago, and they've seen improvements over time that they have started attributing to their own skill. They tell you that if you didn't spend years prompting the AI, it will be difficult to catch up.

However, the exact opposite is happening. As the models get better, the need for the perfect prompt wanes. Prompt engineering is a skill that is becoming obsolete faster than handwritten code.

I personally started using Codex in March, and honestly, the hardest part was finding and setting up the sandbox (I use limactl with QEMU and KVM). Meanwhile, the agentic coding part just works.
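For reference, a sandbox along those lines can be described in a small Lima guest config. This is only an illustrative sketch, not the commenter's actual setup; the field names follow Lima's documented config format, but the resource sizes and mount path are assumptions:

```yaml
# Illustrative Lima guest config for an agent sandbox (values are assumptions).
vmType: "qemu"      # Lima also supports "vz" on macOS
cpus: 4
memory: "8GiB"
disk: "40GiB"
mounts:
  # Expose only the project directory to the agent, read-write;
  # everything else on the host stays out of reach.
  - location: "~/projects/my-repo"
    writable: true
```

Saved as, say, sandbox.yaml, `limactl start sandbox.yaml` boots the VM and `limactl shell sandbox` opens a shell inside it, which is where the agent runs.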

brandonmenc · today at 2:54 PM

This is true in my experience.

I waited until it seemed good enough to use without having to spend most of my time keeping up with the latest magical incantations.

Now I have multiple Claude instances running and producing almost all of my commits at work.

Yes, with a lot of time spent planning and validating.

spwa4 · today at 6:59 PM

This is the central thing that changes in a person with age. When you are born, the only thing you do is pick up new things. Literally nothing else. When you're young, picking up new things is how you improve your social position. It's what you do to even be talked to in the first place. It's what you do to get a girl/boyfriend, or be the best student in class, or to be the best (or worst even) employee at your first job ...

Once you have a good social position, or at least one you're happy with, you stop doing this, and you grow ever more irritated at others doing it ... because it's your social position that they're coming after. And they're younger, more motivated and hungrier. More than that, a decent chunk of these people want a better social position, even if that means taking yours.

theptip · today at 3:04 PM

Ok, here is the risk of being left behind: if we have a moderately fast take-off, the 1-2 years required to upskill in AI might mean you find yourself unemployable when your role gets axed.

I don’t think folks are taking seriously the possible worlds at the P(0.25) tail of likelihood.

You do not get to pick up this stuff “on a timescale of my choosing”, in the worlds where the capability exponential keeps going for another 5-10 years.

I’m sure the author simply doesn’t buy that premise, but IMO it’s poor epistemics to refuse to even engage with the very obvious open question of why this time might be different.

postalcoder · today at 2:28 PM

The thing is, this post is attacking a straw man. ngmi culture was deeply toxic and pervasive in crypto. I think the people who are really into LLMs are having a blast.

agentultra · today at 4:17 PM

One area where it may end up leaving you behind is if you’re looking for a job right now. There are a lot of companies putting vibe coding in their job requirements. The more companies that do this the harder it will be to find employment if you’re not adopting this tool/workflow.

logicchains · today at 4:19 PM

Even if it reaches the end state of AGI, e.g. AI that's smarter and more capable than 90% of humans, there'll still be a huge learning curve to using it well, as anyone who's tried managing very smart humans can attest.

wslh · today at 3:00 PM

We've seen multiple ideas/products get quickly absorbed into frontier models, OSS, or well-funded startups. The cycle from "interesting idea" to "commoditized feature" is getting very short. Personally, I saw three of these in the last year.

And even if your product is genuinely great, distribution is becoming the real bottleneck. Discovery via prompting or search is limited, and paid acquisition is increasingly expensive.

One alternative is to loop between build and kill, letting usage emerge organically rather than trying to force distribution.

fantasizr · today at 2:28 PM

Somehow the AI bros are saying that creating .md files is the real ingenuity, and that it couldn't be learned in, say, half a day. There's absolutely no rush to keep up with the latest code-producing tools, especially when they're all "pay to play".

casey2 · today at 7:06 PM

No. It just assumes there is no utility in the underlying tech; someone who believes vaccines don't work could make the same argument. Most people trust Morgan Stanley when it comes to financial instruments more than some bozo on the internet.

You do have to drag stubborn people, kicking and screaming, into the future, or they will continue using old tech. The article is framed in the past tense: "someone tried", "the crypto grift was". As if it's not currently swallowing the world. I guess he is so maximally sensible that he self-assesses faster than MS and realizes Bitcoin just isn't for him every time.

He has a strange, hyper-specific definition of utility and productivity; things like "wrote my MSc" and "had fun" don't count.