Hacker News

hadlock · yesterday at 7:08 PM

> AI has failed. The rumor mill has it that about 95% of generative AI projects in the corporate world are failures.

AI tooling has only just barely reached the point where enterprise CRUD developers can start thinking about adopting it. LangChain only reached v1.0.0 in the last 60 days (Q4 2025); OpenAI effectively announced support for MCP in Q2 2025. The spec didn't even approach maturity until Q4 of 2024. Heck, most LLMs didn't support tool calling in 2024.
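For context on what "support for tools" means here: modern chat APIs accept JSON-Schema tool definitions and emit structured tool calls that your code dispatches locally. A minimal sketch of that dispatch loop, with a made-up tool name and schema (not any specific vendor's API):

```python
import json

# Illustrative tool schema in the JSON-Schema style most chat APIs accept.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stand-in implementation; a real tool would call a weather service.
    return f"Sunny in {city}"

# Registry mapping tool names to local functions.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    fn = TOOLS[tool_call["name"]]
    # Models typically return arguments as a JSON-encoded string.
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulated tool call, shaped the way a model would emit it.
call = {"name": "get_weather", "arguments": json.dumps({"city": "Berlin"})}
print(dispatch(call))  # Sunny in Berlin
```

Every vendor shipped some variation of this loop on a different schedule, which is exactly the tooling churn being described.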

In 2-3 years a lot of these libraries will be partway through their roadmaps toward v2.0.0, fixing many of the pain points, fleshing out QOL improvements, and letting standard patterns evolve for integrating different workflows. Consumer streaming of audio and video on the web was a mess until around 2009, despite browsers having plugins for it going back over a decade. LLMs continue to improve at a rapid rate, but tooling matures more slowly.

Of course previous experiments failed or were abandoned; the technology has been moving faster than the average CRUD developer can implement features. A lot of the "cutting edge" technology we put into our product in 2023 is now a standard feature of the free tier of market leaders like ChatGPT. Why bother maintaining a custom fork of 2023-era (effectively stone age) technology when free-tier APIs do it better in 2025? MCP might not be the be-all, end-all, but it is at least a standard, maintainable interface, one that developers of mature software can begin conceiving of integrating into their product as a permanent feature rather than as a curiosity MVP built at the behest of a non-technical exec.
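The "standard interface" claim is concrete: MCP is layered on JSON-RPC 2.0, so any client invokes any server's tools with the same message shape. A sketch of building a `tools/call` request (the tool name and arguments here are hypothetical; real servers advertise their tools via `tools/list`):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(msg)

# Hypothetical invocation of a server-exposed tool.
wire = mcp_tool_call(1, "search_tickets", {"query": "login failures"})
parsed = json.loads(wire)
print(parsed["method"])  # tools/call
```

Because the envelope is fixed by the spec, integrating an MCP server is a one-time plumbing job rather than a per-vendor fork, which is what makes it maintainable in a way the 2023-era glue code wasn't.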

A lot of the AI-adjacent libraries we've been using finally hit v1.0.0 this year, or are creeping close to it, at last providing stable interfaces for maintainable software. It's time to hit the reset button on the "X% of internal AI initiatives failed" narrative.