Hacker News

wavefrontbakc 04/24/2025

I think the cost of mistakes is the major driving force behind where you can adopt tools like these. Generating a picture of a chair with five legs? No big deal. Generating supports for a bridge that'll collapse next week? Big problem.

> It will point out things that are unclear, etc. You can go far beyond just micromanaging incremental edits to something.

When prompted, an LLM will also "point out things that are unclear" even when they're perfectly clear. An LLM is just text prediction, not magic.


Replies

ben_w 04/24/2025

> I think the cost of mistakes is the major driving force behind where you can adopt tools like these. Generating a picture of a chair with five legs? No big deal. Generating supports for a bridge that'll collapse next week? Big problem.

Yes, indeed.

But:

Why can LLMs generally write code that even compiles?

While I wouldn't trust current setups, there's no obvious reason why even a mere LLM can't be used to explore a design space when the output can be simulated to test its suitability as a solution; in physical systems, this is already done with non-verbal genetic algorithms.
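
A minimal sketch of that generate-and-test loop, assuming a hypothetical `llm_propose` call and a toy `simulate` scoring function; neither is a real API, and a real system would run an actual physics simulation:

```python
import random

def llm_propose(prompt: str) -> dict:
    """Hypothetical LLM call returning candidate design parameters.
    Stubbed with random values so the sketch runs on its own."""
    return {"beam_depth_mm": random.uniform(100, 500),
            "web_thickness_mm": random.uniform(5, 30)}

def simulate(design: dict) -> float:
    """Stand-in for a physics simulator: returns a fitness score.
    A real system would run FEA and penalize over-stressed members."""
    return -abs(design["beam_depth_mm"] / design["web_thickness_mm"] - 20)

# Generate-and-test: the LLM only proposes; the simulator selects.
best = max((llm_propose("I-beam for a 10 m span") for _ in range(50)),
           key=simulate)
print(best)
```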

> LLM is just text prediction, not magic

"Sufficiently advanced technology is indistinguishable from magic".

Saying "just text prediction" understates how big a deal that is.

abe_m 04/29/2025

I doubt that an LLM for CAD would be using the free-form generation that seems to lead to the 5-legged chair.

I suspect it will be more of a tool-calling layer that builds up the model from the output of more deterministic tools. "I need this bearing mounted here" -> the LLM reads the bearing's part number, does a RAG search for the bearing's properties to get its sizes, looks up what type of fit should be used in the scenario, and feeds that info into a function that generates the appropriate seat geometry. That beats trying to morph triangles into sort-of-kind-of a cylindrical face that may or may not have the 0.0XXmm of interference that makes the difference between success and failure of the fit.
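
Something like the following sketch, where `rag_lookup`, `fit_table`, `generate_bearing_seat`, and the part number are all hypothetical stand-ins for the deterministic tools the LLM would orchestrate:

```python
from dataclasses import dataclass

@dataclass
class BearingSeat:
    bore_mm: float
    interference_mm: float

def rag_lookup(part_number: str) -> dict:
    """Hypothetical RAG search over a parts catalog; hard-coded here."""
    return {"outer_diameter_mm": 35.0, "width_mm": 11.0}

def fit_table(scenario: str) -> float:
    """Hypothetical lookup of the recommended interference for a scenario."""
    return 0.013  # e.g. a light press fit, in mm

def generate_bearing_seat(outer_diameter_mm: float,
                          interference_mm: float) -> BearingSeat:
    """Deterministic geometry generator: the LLM never touches the numbers."""
    return BearingSeat(bore_mm=outer_diameter_mm - interference_mm,
                       interference_mm=interference_mm)

# The LLM's only job is to choose the tools and thread values through them.
props = rag_lookup("6202-2RS")  # part number taken from the user's request
seat = generate_bearing_seat(props["outer_diameter_mm"],
                             fit_table("press fit, rotating inner ring"))
print(seat)
```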

sharemywin 04/24/2025

Isn't it closer to concept prediction layered on top of text prediction, given the multiple levels? It compresses text into concepts using layers of embeddings and neural encoding, predicts the next concept based on multiple areas of attention, then decompresses that to find the right words to convey the concept.
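
In transformer terms that's roughly embed, attend, unembed. A minimal PyTorch sketch of that pipeline, where the "concept" is just the hidden state; that equivalence is the analogy's assumption, not a separate mechanism:

```python
import torch
import torch.nn as nn

vocab, d_model = 50_000, 512
embed = nn.Embedding(vocab, d_model)        # compress tokens into vectors
block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
unembed = nn.Linear(d_model, vocab)         # decompress back to word logits

tokens = torch.randint(0, vocab, (1, 16))   # a sequence of 16 token ids
hidden = block(embed(tokens))               # attention mixes context
logits = unembed(hidden[:, -1])             # score every word as the next one
# (A real LM stacks many such blocks and applies a causal attention mask.)
```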

baq 04/24/2025

The text of every Nobel-winning physics theory was predicted in someone's head, too.