Hey HN! We’re Will and Jorge, and we’ve built LAD (Language-Aided Design), a SolidWorks add-in that uses LLMs to create sketches, features, assemblies, and macros from conversational inputs (https://www.trylad.com/).
We come from software engineering backgrounds where tools like Claude Code and Cursor have come to dominate, but when poking around CAD systems a few months back we realized there's no way to go from a text prompt to a modeling output in any of the major CAD systems. In our testing, the LLMs aren't as good at making 3D objects as they are at writing code, but we think they'll get a lot better in the coming months and years.
To bridge this gap, we've created LAD, an add-in in SolidWorks to turn conversational input and uploaded documents/images into parts, assemblies, and macros. It includes:
- Dozens of tools the LLM can call to create sketches, features, and other objects in parts.
- Assembly tools the LLM can call to turn parts into assemblies.
- File system tools the LLM can use to create, save, search, and read SolidWorks files and documentation.
- Macro writing/running tools plus a SolidWorks API documentation search so the LLM can use macros.
- Automatic screenshots and feature tree parsing to provide the LLM context on the current state.
- Checkpointing to roll back unwanted edits and permissioning to determine which commands wait for user permission.
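For readers curious what "tools the LLM can call" means mechanically, here is a minimal sketch in the JSON-schema style most LLM tool-calling APIs use. The tool name, parameters, and dispatcher are illustrative assumptions, not LAD's actual interface:

```python
# Hypothetical sketch of an LLM tool definition for a SolidWorks-style
# extrude operation. The name and parameters are assumptions for
# illustration, not LAD's real schema.
create_extrude_tool = {
    "name": "create_extrude",
    "description": "Extrude the active sketch into a boss/base feature.",
    "input_schema": {
        "type": "object",
        "properties": {
            "sketch_name": {"type": "string", "description": "Sketch to extrude"},
            "depth_mm": {"type": "number", "description": "Extrusion depth in mm"},
            "direction": {"type": "string",
                          "enum": ["blind", "midplane", "through_all"]},
        },
        "required": ["sketch_name", "depth_mm"],
    },
}

def dispatch(tool_call: dict) -> str:
    """Toy dispatcher: a real add-in would call the SolidWorks COM API here."""
    if tool_call["name"] == "create_extrude":
        args = tool_call["arguments"]
        return f"Extruded {args['sketch_name']} by {args['depth_mm']} mm"
    raise ValueError(f"Unknown tool: {tool_call['name']}")

print(dispatch({"name": "create_extrude",
                "arguments": {"sketch_name": "Sketch1", "depth_mm": 10}}))
```

The model emits a structured call matching the schema, and the add-in validates and executes it, which is what makes checkpointing and per-command permissioning possible.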
You can try LAD at https://www.trylad.com/ and let us know what features would make it more useful for your work. To be honest, the LLMs aren't great at CAD right now, but we're mostly curious to hear if people would want and use this if it worked well.
I think we are targeting different ends of the market, but I'm trying to do a full product development pipeline end to end. PCB, enclosure, peripherals, firmware.
https://github.com/MichaelAyles/heph/blob/main/blogs/0029blo...
I need to redo this blog post, because I did it on a run where the enclosure defaulted to the exploded view and KiCanvas bugged out. Either way, the bones of it are working. Next up is to add more subcircuits, do cloud compilation of firmware, and go from kicad_pcb to Gerbers.
Then order the first prototype!
Would love something like this for Fusion 360: being able to just prompt the UI to create or edit objects. It would be cool if, like with coding agents where you can add context using @filepath, you could use the mouse to click and select context objects for the prompt to execute with.
CAD and machining are different fields, true, but I see a lot of the same flaws that Adam Karvonen highlighted in his essay on LLM-aided machining a few months ago:
https://adamkarvonen.github.io/machine_learning/2025/04/13/l...
Does anyone familiar with what's under the hood know whether the latent space produced by most transformer paradigms can only natively simulate 1-D reasoning, and has to kludge together any process for figuring geometry with more degrees of freedom?
I've been experimenting with Claude Code and different code-to-CAD tools, and the best workflow yet has been with Replicad. It allows for realtime rendering in a browser window as Claude makes changes to a single code file.
Here's an example I finished just a few minutes ago:
https://github.com/jehna/plant-light-holder/blob/main/src/pl...
I have a SolidWorks Students License™©® and it's the most frustrating piece of software I have ever used. Links to tutorials don't work. And when you do manage to get one, the tutorials are designed for older versions of SolidWorks and reference buttons that have been moved or no longer exist where the tutorial tells you to look in the 2025 version.
The UI is the inverse of whatever intuitive is. It's built on convention after convention after convention. If you understand the shibboleths (and I'm guessing most people take a certified course by a trainer for it?), then it's great, but if you don't, it really sucks to be you (i.e. me).
I would LOVE to try out what you've built, but I am afraid that if the model misinterprets me or makes a mistake, it'll take me longer to debug / correct it than it would to just build it from scratch.
The kinds of things I want to make in solidworks are apparently hard to make in solidworks (arbitrarily / continuously + asymmetrically curved surfaces). I'm assuming that there won't be too many projects like this in the training dataset? How does the LLM handle something that's so out of pocket?
Totally relatable pain: getting LLMs to reliably drive precise CAD operations is surprisingly hard, especially when spatial reasoning and plane/extrude/chamfer decisions go wrong 70%+ of the time.
For people looking at a different angle on the "text to 3D model" problem, I've been playing with https://www.timbr.pro lately. Not trying to replace SolidWorks precision, but great for the early fuzzy "make me something that looks roughly like X" phase before you bring it into real CAD.
This is incredible. Coincidentally, I've just started using Claude Code to model things using OpenSCAD and it's pretty decent. The fact that it can generate preview PNGs, inspect them, and then cycle back to continue iterating is pretty valuable. But the things I make are pretty simple.
My wife was designing a spring-loaded model that fits against our baby walls, so that we can attach it to our walls more modularly, and she used Blender. Part of it is that it's harder to make a slightly more complex model with an LLM.
SolidWorks is out of our budget for the kind of things we're building, but I'm hoping that if this stuff is successful, people will work on tools further down-market. Good luck!
The craziest thing I learned clicking on this post is that SolidWorks has barely changed in the 15 years since I last used it.
I'd say let's first build an AI that can reliably read datasheets and technical drawings.
I've tried ChatGPT and Claude on datasheets of electronic components, and I'm sorry to say that they are awful at it.
Before that is fixed, I don't have high hopes for an AI that can generate CAD/EDA models that correctly follow some specification.
> but when poking around CAD systems a few months back we realized there's no way to go from a text prompt input to a modeling output in any of the major CAD systems.
This is exactly what SGS-1 is, and it's better than this approach because it's actually a model trained to generate B-reps, not just an LLM asked to write code that does it.
I've been working on this exact same thing with both SolidWorks and Altium! There has definitely been a step change in Opus 4.5; I first had it reverse engineer the Altium file format using a Ghidra MCP and was impressed at how well it worked with decompiled Delphi. Gemini 3 Pro/Flash also make a huge difference with data extraction from PDFs like footprints or mechanical drawings, so we're close to closing the whole loop in several different fields, not just software engineering.
For the most part they still suck at anything resembling real spatial reasoning, but they're capable of doing incredibly monotonous things that most people wouldn't put themselves through, like meticulously labeling every pin, putting strict design rule checks on each net, or setting up DSN files for the autorouter. It even makes the hard routing quite easy, because it can set up the DRC using the Saturn calculator so I don't have to deal with that.
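To make the "set up the DRC from an impedance calculator" step concrete, here is a hedged sketch of the kind of math such a tool does. This uses the well-known IPC-2141 surface-microstrip approximation, not Saturn PCB's actual (more accurate) internals, and the example dimensions are illustrative:

```python
import math

def microstrip_z0(er: float, h: float, w: float, t: float) -> float:
    """IPC-2141 surface-microstrip approximation for characteristic impedance.

    er: substrate dielectric constant; h: dielectric height; w: trace width;
    t: trace thickness (all lengths in the same units). Only valid over a
    limited w/h range; real calculators like Saturn PCB use better models.
    """
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))

# Illustrative example: 0.3 mm trace, 1 oz copper (~0.035 mm),
# on 1.6 mm FR-4 with er ~ 4.2.
z = microstrip_z0(er=4.2, h=1.6, w=0.3, t=0.035)
print(f"{z:.1f} ohm")
```

An agent that can run a calculation like this and then emit the matching trace-width rule into the DRC removes one of the more error-prone manual steps in routing.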
If you give them a natural language interface [1] (a CLI in a Claude skill, that's it) that you can translate to concrete actions, coordinates, etc., it shines. Opus can prioritize nets for manual vs. autorouting, place the major components using language like "middle of board" (which I then use another LLM to translate to concrete steps), and in general do a lot of the annoying things I used to have to do. You can even combine the visual understanding of Gemini with the actions generated by Opus to take it a step further, by having the latter generate instructions and the former generate a JSON DSL that gets executed.
I'm really curious what the defensibility of all these businesses is going to be going forward. I have no plans to enter that business, but my limit at this point is that I'm not willing to pay more than $200/mo for several Max plans to have dozens of agents running all the time. When it only takes an hour to create a harness that lets Claude go hog wild with desktop apps, there is a LOT of unexplored space, but just about anyone who can torrent SolidWorks or Altium can figure it out. On the other hand, if it's just a bunch of people bootstrapping, they won't have the same pressure to grow.
Good luck!
[1] Stuff like "place U1 to the left of U4, 50mm away" and the CLI translates that to structured data with absolute coordinates on the PCB. Having the LLM spit out natural language and then using another LLM with structured outputs to translate that to a JSON DSL works very well, including when you need Opus to do stuff like click on screen.
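A minimal sketch of what that CLI translation layer could look like, assuming a known anchor position and a tiny phrase grammar (the component names, coordinates, and supported phrases are all illustrative assumptions, not the commenter's actual tool):

```python
import re

# Known absolute component positions on the board, in mm.
# U4's position here is an illustrative assumption.
positions = {"U4": (100.0, 50.0)}

# Unit vectors for each supported relative-placement phrase.
OFFSETS = {"left of": (-1, 0), "right of": (1, 0),
           "above": (0, 1), "below": (0, -1)}

def place(command: str) -> tuple[str, float, float]:
    """Parse e.g. 'place U1 to the left of U4, 50mm away' into (ref, x, y)."""
    m = re.match(
        r"place (\w+) (?:to the )?(left of|right of|above|below) (\w+), "
        r"(\d+(?:\.\d+)?)mm away", command)
    if not m:
        raise ValueError(f"unparsed: {command}")
    ref, relation, anchor = m.group(1), m.group(2), m.group(3)
    dist = float(m.group(4))
    ax, ay = positions[anchor]
    dx, dy = OFFSETS[relation]
    x, y = ax + dx * dist, ay + dy * dist
    positions[ref] = (x, y)  # remember it so later commands can anchor on it
    return ref, x, y

print(place("place U1 to the left of U4, 50mm away"))
```

In practice the "another LLM with structured outputs" step replaces the regex, which is what makes the approach robust to phrasing the grammar doesn't cover.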
FWIW, I've been testing MCP tools exposed from Autodesk Fusion (https://github.com/AuraFriday/Fusion-360-MCP-Server) and the results are quite promising. It's especially good as a quick starting point.
> the LLMs aren't as good at making 3D objects as they are at writing code
I am still hoping that OpenSCAD or something similar can grab hold of the community. OpenSCAD needs some kind of npm, as well as imports for McMaster-Carr etc., but I think it could work.
Why would I not just use some local desktop agent?... Like what?
It's definitely interesting, but the demo of the coffee mug has a lot of flaws. Are there some concrete examples you can think of where the hosted LLMs really shine at this problem?
I think there's a lot of potential for AI in 3D modeling. But I'm not convinced text is the best user interface for it, and current LLMs seem to have a poor understanding of 3D space.