Hacker News

Don't fall into the anti-AI hype

544 points by todsacerdoti | yesterday at 10:26 AM | 722 comments | view on HN

Comments

embedding-shape yesterday at 11:27 AM

> But what was the fire inside you, when you coded till night to see your project working? It was building.

I feel like this is not the same for everyone. For some people, the "fire" is literally about "I control a computer", for others "I'm solving a problem for others", and yet for others "I made something that made others smile/cry/feel emotions" and so on.

I think there is a section of programmers who actually do like the actual typing of letters, numbers and special characters into a computer, and for them, I understand LLMs remove the fun part. For me, I initially got into programming because I wanted to ruin other people's websites, then I figured out I needed to know how to build websites first, then I found it more fun to create and share what I've done with others, and have them tell me what they think of it. That's my "fire". But I've met so many people who don't care an iota about sharing what they built with others; it matters nothing to them.

I guess the conclusion is, not all programmers program for the same reason. For some of us, LLMs help a lot and make things even more fun. For others, LLMs remove the core part of what makes programming fun. Hence we get this constant back and forth of "Can't believe others can work like this!" vs "I can't believe others aren't working like this!", but both sides seem to completely miss the other.

show 19 replies
silcoon yesterday at 11:12 PM

I perfectly agree with antirez about the importance of AI and the benefit for coders. In the last month we saw a big jump, and we are all in the middle of the biggest technological revolution since the internet. He summarised the benefits, but omitted the rest.

Why shouldn't we be anti-AI? Why, in his opinion, is it just "HYPE"? I didn't find any answer in his post. He doesn't analyse the cons of AI or explain why some people might be anti-AI. He skipped the hard part and wrote a mild article that re-publishes the narrative already being spread on every social media platform.

Edit for clarification: I don't consider anti-AI the people who think LLMs don't work; they are wrong. I consider anti-AI the people who are worried about how this technology will impact society in so many ways that are hard to predict, including the future of software engineering.

show 2 replies
trinsic2 today at 12:42 AM

If it wasn't for companies gatekeeping, buying up all the compute, putting a huge load on our infrastructure, and people and governments using it to surveil citizens, I would be more supportive of it. But right now, you've got to be insane to be supporting this technology. It's literally being used to do more harm than good. I don't see any end to this. I cannot and will not support a surveillance state in the name of progress.

systemf_omega yesterday at 11:05 AM

What I don't understand about this whole "get on board the AI train or get left behind" narrative is: what advantage does an early adopter of AI tools actually have?

The way I see it, I can just start using AI once the tools get good enough for my type of work. Until then I'm continuing to learn instead of letting my brain atrophy.

show 13 replies
cmiles8 yesterday at 11:03 AM

The “anti-AI hype” phrase oversimplifies what’s playing out at the moment. On the tech side, while things are still a bit rough around the edges, the tech is very useful and isn’t going away. I honestly don’t see much disagreement there.

The concern mostly comes from the business side… that for all the usefulness of the tech, there is no clearly viable path that financially supports everything that’s going on. It’s a nice set of useful features, but without products generating sufficient revenue to pay for it all.

That paints a picture of the tech sticking around but a general implosion of the startups and business models betting on making all this work.

The latter isn’t really “anti-AI hype” but more folks just calling out the reality that there’s not a lot of evidence and data to support the amount of money invested and committed. And if you’ve been around the tech and business scene a while, you’ve seen that movie before and know what comes next.

In 5 years time I expect to be using AI more than I do now. I also expect most of the AI companies and startups won’t exist anymore.

show 3 replies
edg5000 yesterday at 11:28 AM

> state of the art LLMs are able to complete large subtasks or medium size projects alone, almost unassisted, given a good set of hints about what the end result should be

No. I agree with the author, but it's hyperbolic of him to phrase it like this. If you have solid domain knowledge, you'll steer the model with detailed specs. It will carry those out competently and multiply your productivity. However, the quality of the output still reflects your state of knowledge. It just provides leverage. Given the best tractors, a good farmer will have much better yields than a shit one. Without good direction, even Opus 4.5 tends to create massive code repetition. Easy to avoid if you know what you are doing, albeit in a refactor pass.

show 3 replies
burgerone yesterday at 10:36 PM

There's this infinite war between the two opposing sides. "It's going to change programming forever" vs "Why not just use your brain". I much prefer option two, for all the good reasons. Saying that AI is awesome doesn't actually address all its issues.

show 3 replies
adityaathalye yesterday at 11:10 AM

Don't fall into the "Look ma, no hands" hype.

Antirez + LLM + CFO = Billion Dollar Redis company, quite plausibly.

/However/ ...

As for the delta provided by an LLM to Antirez, outside of Redis (and outside of any problem space he is already intimately familiar with), an apples-to-apples comparison would be him trying this on an equally complex codebase he has no idea about. I'll bet... what Antirez can do with Redis and LLMs (certainly useful, a huge quality-of-life improvement for Antirez), he cannot even begin to do with (say) Postgres.

The only way to get there with (say) Postgres, would be to /know/ Postgres. And pretty much everyone, no matter how good, cannot get there with code-reading alone. With software at least, we need to develop a mental model of the thing by futzing about with the thing in deeply meaningful ways.

And most of us day-job grunts are in the latter spot... working in some grimy legacy multi-hundred-thousand-line code-mine, full of NPM vulns, schlepping code over the wall to QA (assuming there is even a QA), and basically developing against live customers --- "learn by shipping", as they say.

I do think LLMs are wildly interesting technology, but they are a poor utility for non-domain-experts. If organisations want to profit from the fully-loaded cost of LLM technology, they had better also invest heavily in staff training and development.

show 6 replies
gradus_ad yesterday at 11:32 PM

It seems that as the tools available to developers have become more abstracted, allowing them to do more with less, their ability to command higher salaries and prestige has only grown and grown. LLMs are just a continuation of this trend.

The naive view considers only the small-scale ease of completing a task in isolation and expects compensation to be proportional to it. But that's not how things work. Yes, abstraction makes individual tasks easier to complete, but with the extra time available more can be done, and as more is done and can be done, new complexities emerge. And as an individual can do more, the importance of trust grows as well. This is why CEOs make disproportionately more than their employees: while the complexity of their work may scale only linearly with their position, or not at all beyond a certain point, the impact of their decisions grows exponentially.

LLMs are just going to enhance the power and influence of software developers.

show 1 reply
chrz yesterday at 11:10 AM

> How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies.

You might feel great, that's fine, but I don't. And software quality is going down; I wouldn't agree that LLMs will help write better software.

NitpickLawyer yesterday at 11:13 AM

> Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs.

This is the advice I've been giving my friends and coworkers as well for a while now. Forget the hype, just take time to test them from time to time. See where it's at. And "prepare" for what's to come, as best you can.

Another thing to consider. If you casually look into it by just reading about it, be aware that almost everything you read in "mainstream" places has been wrong in 2025. The people covering this, writing about this, producing content on this have different goals in this era. They need hits, likes, shares and reach. They don't get that with accurate reporting. And, sadly, negativity sells. It is what it is.

The only way to get an accurate picture is to try them yourself. The earlier you do that, the better off you'll be. And a note on signals: right now, a "positive" signal is more valuable to you than many "negative" ones. Read those and try to understand the what, if not the how. "I did this with cc" is much more valuable today than "x still doesn't do y reliably".

show 1 reply
bluGill yesterday at 11:05 AM

I'm trying not to fall for it, but when I try AI to write code it fails more often than not - at least for me. Some people claim it does everything, but I keep finding major problems. Even when it writes something that works, often I can't explain to it that in 2026 we should be using smart pointers (C++) or whatever the modern thing is.

show 1 reply
phtrivier yesterday at 11:10 AM

How would we measure the effects of AI coding tools taking over manual coding? Would we see an increase in the number of GitHub projects? In the number of stars (given the AI is so good)? In the number of startup IPOs (surely if all your engineers are 1000x engineers thanks to Claude Code, we'll have plenty of Googles and Amazons to invest in)? In the price of software (if I can just vibe code everything, then a $10 fully compatible replacement for MS Windows is just a few months away, right?) In the number of apps published in the stores?

show 5 replies
kiriakosv yesterday at 11:51 AM

AI tools, in their current form or another, will definitely change software engineering; I personally think for the best.

However I can’t help but notice some things that look weird/amusing:

- The exact time that many programmers were enlightened about the AI capabilities and the frequency of their posts.

- The uniform language they use in these posts. Grandiose adjectives, standard phrases like ‘it seems to me’

- And more importantly, the sense of urgency and FOMO they emit. This is particularly weird for two reasons. First, if the past has shown anything regarding technology, it is that open source always catches up. But this is not the case yet. Second, if the premise is that we're just in the beginning, all these ceremonial flows will be obsolete.

Do not get me wrong: as of today these are all valid ways to work with AI, and in many domains they increase productivity. But I really don't get the sense of urgency.

ironman1478 yesterday at 5:03 PM

I'm not sure what to make of these technologies. I read about people doing all these things with them and it sounds impressive. Then when I use one, it feels like the tool produces junior-level code unless I babysit it; only then can it really produce what I want.

If I have to do all this babysitting, is it really saving me anything other than typing the code? It hasn't felt like it yet and if anything it's scary because I need to always read the code to make sure it's valid, and reading code is harder than writing it.

keyle yesterday at 12:35 PM

> The fun is still there, untouched.

Well, that's one way to put it. But not everyone enjoys the art only for the results.

I personally love learning, and by letting AI drive forward and me following, I don't learn. To learn is to be human.

So saying the fun is untouched is one-sided. Not everyone is in it for the same reasons.

wasmainiac yesterday at 11:54 AM

These personal blogs are starting to feel like LinkedIn Lunatic posts, kinda similar to the optimised floor-sweeping blog: “I am excited to provide shareholder value, at minimum wage”

show 1 reply
golly_ned yesterday at 5:07 PM

As long as I'm not reviewing PRs with thousands of lines net new that weren't even read by their PR submitter, I'm fine with anything. The software design I've seen from AI code agent using peers has been dreadful.

I think for some who are excited about AI programming, they're happy they can build a lot more things. I think for others, they're excited they can build the same amount of things, but with a lot less thinking. The agent and their code reviewers can do the thinking for them.

zhyder yesterday at 11:42 PM

Sounds like antirez, simonw, et al are still advocating reviewing the code output of these agents for now. But presumably soon (within months?) the agents will be good enough such that line-by-line review will no longer be necessary, or humanly possible as we crank the agents up to 11.

But then how will we review each PR enough to have confidence in it?

How will we understand the overall codebase too after it gets much bigger?

Are there any better tools here other than just asking LLMs to summarize code, or flag risky code... any good "code reader" tools (like code editors but focused on this reading task)?

show 1 reply
agoodusername63 yesterday at 3:52 PM

I never stop being amused that LLMs have made HN realize that many programmers are programmers for paychecks, not for passion.

bob1029 yesterday at 11:20 AM

> Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

I've been taking a proper whack at the tree every 6 months or so. This time it seems like it might actually fall over. Every prior attempt I could barely justify spending $10-20 in API credits before it was obvious I was wasting my time. I spent $80 on tokens last night and I'm still not convinced it won't work.

Whether or not AI is morally acceptable is a debate I wish I had the luxury of engaging in. I don't think rejecting it would allow me to serve any good other than in my own mind. It's really easy to have certain views when you can afford to. Most of us don't have the privilege of rejecting the potential that this technology affords. We can complain about it but it won't change what our employers decide to do.

Walk the game theory for 5 minutes. This is a game of musical chairs. We really wish it weren't. But it is. And we need to consider the implications of that. It might be better to join the "bad guys" if you actually want to help those around you. Perhaps even become the worst bad guy and beat the rest of them to a functional Death Star. Being unemployed is not a great position to be in if you wish to assist your allies. Big picture, you could fight AI downstream by capitalizing on it near term. No one is keeping score. You might be in your own head, but you are allowed to change that whenever you want.

show 2 replies
remix2000 yesterday at 10:34 PM

Honestly, coding with a chatbot's "help" just slows me down. Also, progress in the chatbot space is minimal (at least it feels like that from an end-user perspective), essentially nonexistent since like 2024. I only use them because all search engines are broken on purpose now. These are truly terrible times we live in, but not because the robots could replace us; rather because nontechnical managers are detached from reality, as they always were, and want us to believe that.

show 1 reply
lrvick yesterday at 10:43 PM

As a security engineer that regularly architects and helps implement new defense tactics that no LLM has trained on, I choose not to use LLMs at all, like a cave man.

Being differently trained and using different tools than almost everyone else I know in engineering my entire career has allowed me to find solutions and vulnerabilities others have missed time and time again. I exclusively use open source software I can always take apart, fully understand, and modify as I like. This inclination has served me well and is why I have the skillsets I do today.

If everyone is doing things one way, I instinctively want to explore all the other ways to train my own brain to continue to be adversarial and with a stamina to do hard experiments by hand when no tools exist to automate them yet.

Watching all my peers think more and more alike actually scares me, as they are all talking to the same LLMs. None for me, thanks.

"But this magic proprietary tool makes my job so much easier!!" has never been a compelling argument for me.

Cold_Miserable today at 12:24 AM

It's not hype. There's no such thing as AI. Matrix multiplication isn't intelligence.

28ahsgT7 yesterday at 7:29 PM

It is somewhat amusing that the pro-LLM faction increasingly co-opts their opponents' arguments—now they are turning AI-hype into anti-AI hype.

They did the same with Upton Sinclair's quote, which is now used against any worker who dares to hope for a salary.

There is not much creativity in the pro-LLM faction, which is guided by monetary interests and does not mind burning its social capital and credibility in exchange for money.

aussieguy1234 today at 12:46 AM

AI is going to put a hold on the development of new programming languages for sure, since they won't be in the training set.

Great news if you know the current generation of languages, you won't need to learn a new one for quite some time.

show 1 reply
kace91 yesterday at 11:19 AM

>Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.

I wonder if I’m the odd one out or if this is a common sentiment: I don’t give a shit about building, frankly.

I like programming as a puzzle and the ability to understand a complex system. “Look at all the things I created in a weekend” sounds to me like “look at all the weight I moved by bringing a forklift to the gym!”. Even ignoring the fact that there is barely a “you” in this success, there is not really any interest at all for me in the output itself.

This point is completely orthogonal to the fact that we still need to get paid to live, and in that regard I'll do what pays the bills, but I’m surprised by the amount of programmers that are completely happy with doing away with the programming part.

show 2 replies
robot-wrangler yesterday at 11:23 AM

Let's maybe avoid all the hype, whether it is for or against, and just have thoughtful and measured stances on things? Fairly high points for that on this piece, despite the title. It has the obligatory remark that manually writing code is pointless now but also the obligatory caveat that it depends on the kind of code you're writing.

eeixlk yesterday at 11:33 AM

If you don't call it AI and see it as a natural-language search-engine result merger, it's a bit easier to understand. Like a search engine, it's clunky, so you have to know how to use it to get any useful results. Sometimes it appears magical or clever, but it's just analyzing billions of text patterns. You can use this search merger to generate text in various forms quickly, and request new generated text. But it doesn't have taste, comprehension, problem solving, vision, or wisdom. However, it can steal your data and your work and include it in its search engine.

glouwbug today at 12:20 AM

AI works for Antirez because he's already a master of his domain

xg15 yesterday at 8:06 PM

> However, this technology is far too important to be in the hands of a few companies.

I worry less about the model access and more about the hardware required to run those models (i.e. do inference).

If a) the only way to compete in software development in the future is to outsource the entire implementation process to one of a few frontier models (Chinese, US or otherwise)

and b) only a few companies worldwide have the GPU power to run inference with those models in a reasonable time

then don't we already have a massive amount of centralization?

That is also something I keep wondering about with agentic coding - being able to realize the epic fantasy hobby project you've been thinking about on and off for the last few years in a couple of afternoons is absolutely amazing. But if you do the same with work projects, how do you solve the data protection issues? Will we all now just hand our entire production codebases to OpenAI or Anthropic etc. and hope their pinky promises hold?

Or will there be a race for medium-sized companies to have their own GPU datacenters, not for production but solely for internal development and code generation?

darepublic yesterday at 10:30 PM

In my current work project I am consulting an LLM frequently as a kind of coding search engine. I also use it to rubber-duck my designs. Most of the coding was done myself, though. But even that feels perhaps quaint, and I feel like it may be wasting time.

hollowturtle yesterday at 11:04 PM

> How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies.

Every now and then I post the same exact comment here on HN: where the heck are the products then? Or where is the better outcome? The faster software? Let alone small teams competing with bigger companies?

We are NOT anti-AI; we're exhausted from reading bs from AI astroturfers and wannabe AI tech influencers. It's so exhausting that it's always your fault for not "using the tool properly", and that you're going to be left behind. I'm not anti-AI; I just wish the bubble would pop so that instead of fighting back bs from managers ("I read that on HN") I can go back to coding with and without AI, wherever it applies to my needs.

namesbc yesterday at 11:22 PM

Vibe coders are so insistent that the rest of us adopt their shitty tooling because they need other people coding slop too, to justify their lack of effort.

If programmers keep up good coding practices, then the vibe coders are the ones left behind.

etamponi yesterday at 10:27 PM

> Hours instead of weeks.

And then he goes on describing two things for which I bet almost anyone with enough knowledge of C and Redis could implement a POC in... guess what? Hours.

At this point I am literally speechless, if even Antirez falls for this "you get so quick!!!" hype.

You get _some_ speed up _for things you could anyway implement_. You get past the "blank screen block" which prevents you from starting some project.

These are great useful things that AI does for you!

Shaving off _weeks_ of work? Let's come back in a couple of months, when he'll have to rewrite everything that AI has written so well. Or that code will just die away (which is another great use case for AI: throwaway code).

Do people still not understand that writing code is a way to understand something? Clearly you don't need to write code for a domain you already understand, or that you literally created.

What leaves me sad is that this time it is _Antirez_ that writes such things.

I have to be honest: it makes me doubt my position, and I'll constantly reevaluate it. But man. I hope it's just a hype post for an AI product he'll release tomorrow.

dbacar yesterday at 8:04 PM

There are different opinions on this:

https://spectrum.ieee.org/ai-coding-degrades

lifetimerubyist today at 12:31 AM

UBI will never happen because the people in power don't want it.

Who is going to control AI? The people in power, obviously. They will buy all of the computers, so running models locally will no longer be feasible. In case it hasn't been obvious, this is already happening. It will only get worse.

They will not let themselves be taxed.

But who will buy the things the people in power produce if nobody has a job?

This is how civilization collapses.

nephihaha yesterday at 10:29 PM

Universal Basic Income is not the panacea it's claimed to be.

UBI gives government more control over individuals' finances, especially those without independent means. Poverty is also the result of unfair taxation, where poor people face onerous taxes while receiving less and less in return, and the wealthy avoid tax at every turn. It is also difficult for people to be self-employed due to red tape favouring big business. UBI does not address those issues.

UBI also centralises control at the expense of local self-determination and community engagement.

show 2 replies
yndoendo yesterday at 4:31 PM

I want to know if any content has been made using AI or not.

There really should be a label on the product to let the consumer know, similar to Norway requiring disclosure of retouched images. I can think of no other way to help with the body image issues that arise from depictions of people that can never be matched in real life.

ChrisMarshallNY yesterday at 11:37 AM

I generally have a lot of respect for this guy. He’s an excellent coder, and really cares about his craft. I can relate to him (except he’s been more successful than me, which is fine - he deserves it).

Really, one of the first things he said, sums it up:

> facts are facts, and AI is going to change programming forever.

I have been using it in a very similar manner to how he describes his workflow, and it’s already greatly improved my velocity and quality.

I also can relate to this comment:

> I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge.

theturtletalks yesterday at 11:37 AM

LLMs are breaking open-source monetization.

Group 1 is untouched since they were writing code for the sake of writing, and they have the reward of that altruism.

Group 2 are those that needed their projects to bring in some revenue so they can continue writing open-source.

Group 3 are companies that used open-source as a way to get market share from proprietary companies, using it more in a capitalistic way.

Over time, I think groups 2 and 3 will leave open-source and group 1 will make up most of the open-source contributors. It is up to you to decide if projects like Redis would be built today with the monetary incentives gone.

show 1 reply
sreekanth850 yesterday at 11:50 AM

People here generalise vibe coders into a single category. I don't write code line-by-line the traditional way, but I do understand architecture deeply. Recently I started using AI to write code: not by dumping random prompts and copy-pasting blindly, but inside VS Code, reviewing what it generates, understanding why it works, and knowing exactly where each feature lives and how it fits. I also work with a frontend developer (as I do backend only and am not interested in building UI and CSS) to integrate things properly, and together we fix bugs and security issues. Every feature built with AI works flawlessly because it's still being reviewed, tested, and owned by humans.

If I have a good idea and use AI to code, without depending on a developer due to limited budget, why do people think it's a sin? Is the implication that if you don't have VC money to hire a team of developers, you're supposed to just lose? I saw the exact same sentiment when tools like Elementor started getting popular among business owners. Same arguments, same gatekeeping. The market didn't care. It feels more like insecurity about losing an edge. And if the edge was "I type code myself", that edge was always fragile.

Edit: The biggest advantage is that you don't lose anything in translation. There's no gap between the idea in your head and what gets built.

You don’t spend weeks explaining intent, edge cases, or “what I really meant” to a developer. You iterate 1:1 with the system and adjust immediately when something feels off.

coldtea yesterday at 11:20 PM

>Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building.

Nope. It was coding. Enjoying the process itself.

If I wanted to hand out specs and review code (which is what an AI jockey does), I'd be having fucking project managers as role models, not coders...

falloutx yesterday at 11:34 AM

Where is this anti-AI hype? We are seeing 100 videos of Claude Code & vibe coding, and then maybe we get 1 or 2 people saying "Maybe we should be cautious".

show 2 replies
zkmon yesterday at 12:01 PM

So, by "AI", you mean programming AI. Generalizing it as "AI" and "anti-AI" is adding great confusion to the already dizzying level of hype.

At its core, AI has the capability to extract structure/meaning from unstructured content and vice versa. Computing systems and other machines require inputs with limited context. So far, it was a human's job to prepare that structure and context and provide it to the machines. That structure can be called a "program" or "form data" or "a sequence of steps or lever operations or button presses".

Now the machines got this AI wrapper or adapter that enables them to extract the context and structure from the natural human-formatted or messy content.

But all that works only if the input has the required amount of information and inherent structure to it. Try giving a prompt with a jumbled-up sequence of words. So it's still the human's job to provide that input to the machine.

didip yesterday at 6:29 PM

The paragraph that starts with this sentence:

> However, this technology is far too important to be in the hands of a few companies.

I wholeheartedly agree 1000%. Something needs to change this landscape in the US.

Furthermore, the entire open source models being dominated by China is also problematic.

bwfan123 yesterday at 4:27 PM

I am not sure why the OP is painting it as "us vs. them" - pro- or anti-AI? AI is a tool. Use it if it helps.

I would draw an analogy here between building software and building a home.

When building a home we have a user providing the requirements, the architect/structural engineer providing the blueprint to satisfy the reqs, the civil engineer overseeing the construction, and the mason laying the bricks. Some projects may have a project-manager coordinating these activities.

Building software is similar in many aspects to building a structure. If developers think of themselves as a mason they are limiting their perspective. If AI can help lay the bricks use it ! If it can help with the blueprint or the design use it. It is a fantastic tool in the tool belt of the profession. I think of it as a power-tool and want to keep its batteries charged to use it at any time.

dzonga yesterday at 3:08 PM

antirez gave us Redis - but somehow I think the part that he and other smart folks who talk about A.I so much forget is agency / self-sufficiency.

If A.I writes everything for you - cool, you can produce faster? But is it really true if you're renting capacity? What if costs go up and now you can't rent anymore - but you can't code anymore either, and the documentation is no longer there, because the MCP etc. assumption is that everything will be done by agents. Then what?

what about the people who work on messy 'Information Systems'? Things like Redis are impressive, but it's closed-loop software, just like compilers.

some smart guy back in the 80s wrote that it's always a people problem.

show 1 reply
Ekaros yesterday at 11:28 AM

I think the best hope against AI is copyright. That is, AI-generated software has none. Everyone is free to steal and resell it, and those who generated it have zero rights to complain or take legal action.

View 39 more comments