I did this a few months ago to make a Christmas ornament. There are some rough edges with the process, but for hobby 3D printing, current LLMs paired with OpenSCAD are a game-changer. I hadn't touched my 3D printer for years until this project.
https://seanmcloughl.in/3d-modeling-with-llms-as-a-cad-luddi...
A recent Ezra Klein interview[0] mentioned some "AI-enabled" CAD tools used in China. Does anyone know what tools they might be talking about? I haven't been able to find any open-source tools with similar claims.
>I went with my colleague Keith Bradsher to Zeekr, one of China’s new car companies. We went into the design lab and watched the designer doing a 3D model of one of their new cars, putting it in different contexts — desert, rainforest, beach, different weather conditions.
>And we asked him what software he was using. We thought it was just some traditional CAD design. He said: It’s an open-source A.I. 3D design tool. He said what used to take him three months he now does in three hours.
[0] https://www.nytimes.com/2025/04/15/opinion/ezra-klein-podcas...
I'm a great user for this problem, as I just got a 3D printer and I'm no good at modeling. I'm doing tutorials and printing a few things with TinkerCAD now, but my spatial visualization sense has historically not been great. I used SketchUp when I had a working Oculus Quest, which was very cool, but I'm not sure how practical it is.
Unfortunately I tried to generate OpenSCAD a few times to make more complex things and it hasn't been a great experience. I just tried o3 with the prompt "create a cool case for a Pixel 6 Pro in openscad" and, even after a few attempts at fixing, still had a bunch of non-working parts with e.g. the USB-C port in the wrong place, missing or incorrect speaker holes, a design motif for the case not connected to the case, etc.
It reminds me of ChatGPT in late 2022 when it could generate code that worked for simple cases but anything mildly subtle it would randomly mess up. Maybe someone needs to finetune one of the more advanced models on some data / screenshots from Thingiverse or MakerWorld?
Really cool, I'd love to try something like this for quick and simple enclosures. Right now I have some prototype electronics hot glued to a piece of plywood. It would be awesome to give a GenCAD workflow the existing part STLs (if they exist) and have it roughly arrange everything and then create the 3D model for a case.
Maybe there could be a mating/assembly eval in the future that would work towards that?
This reminds me of using LLMs for LaTeX.
They will get you to 80% fast. The last 20%, to match what is in your head, is hard.
If you never walked the long path yourself, you probably won't manage to go the last few steps.
Most of the 3D printing model repositories offer financial incentives for model creators, as they are usually owned by manufacturers who want to own as much of the ecosystem as possible. (Makerworld, Printables, etc)
Widespread AI generation obviously enables abuse of those incentives, so it'll be interesting to see how they adjust to this. (It's already a small problem, with modelers using AI renderings that are deceptive in terms of quality)
I think if you could directly tokenize 3D geometry and train an LLM on 3D models directly, you might get somewhere. In order to prompt it, you would need to feed it one or more 3D models plus a prompt, and it could give you back a different 3D model. This has been done to some extent with generative modeling pre-LLM, but I don't know of any work that takes LLM techniques applied to language and applies them to "tokenizing" 3D geometry. I suspect NVIDIA is probably working very hard on this exact problem for graphics applications.
For mechanical design, 3D modeling is highly integrative: inputs come from a vast array of poorly specified sources with a high amount of unspecified and fluid contextual knowledge, and outputs are not well defined either. I'm not convinced that mechanical design is particularly well suited to pairing with an LLM workflow. Certain aspects, sure. But the 3D models and drawings that we consider "well-defined" are still usually quite poorly defined, and out of necessity rely heavily on implicit assumptions.
The geometry of machine threads, for example. Are you going to have a big computer specify the position of each of the atoms in the machine thread? Even the most detailed CAD/CAM packages have thread geometry extremely loosely defined, to the point of listing the callout, and not modeling any geometry at all in many cases.
It would just be very difficult to feed enough contextual knowledge into an LLM to have the knowledge it needs to do mechanical design. Therein lies the main problem. And I will stress that it's not a training problem, it's a prompt problem, if that makes sense.
Wow, this entire thing reads like a huge "stay away" sign to me.
The call to action at the end is: "Try out Text-to-CAD in our Modeling App" But that's like the last thing I want to do. Even when I'm working with very experienced professionals, it's really hard to tell them what exactly I want to see changed in their 3D CAD design. That's why they usually export lots of 2D drawings, and then I will use a pencil to draw on top of them, and then they will manually update the 3D shape to match my drawn request.

The improvement that I would like to see in affordable CAD software is making it easier to generate section views; ideally the software would be able to back-propagate changes from 2D into the 3D shape. Maybe one day that will be possible with multimodal AI models, but even then the true improvement is going to be in the methods that the AI model uses internally to update the data. But trying to use text? That's like bringing a knife to a gunfight. It's obviously the wrong modality for humans to reason about shapes.
Also, as a more general comment, I am not sure that it is possible to introduce a new CAD tool with only subscription pricing. Typically, an enclosure design will go through multiple variations over multiple production runs in multiple years. That means it's obvious to everyone that you need your CAD software to continue working for years into the future. For a behemoth like Autodesk, that is believable. For a startup with a six month burn rate, it's not. That's why people treat startups with subscription pricing like vaporware.
As a huge OpenSCAD fan and everyday Cursor user, it seems obvious to me that there's a huge opportunity _if_ we can improve the baseline OpenSCAD code quality.
If the model could plan ahead well, set up good functions, pull from standard libraries, etc., it would be instantly better than most humans.
If it had a sense of real-world applications, physics, etc., well, it would be superhuman.
Is anyone working on this right now? If so I'd love to contribute.
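As one concrete notion of "better baseline OpenSCAD code quality": have the model emit parameterized generators instead of hard-coded literal geometry. A minimal Python sketch along those lines (the function name and structure are purely illustrative, not an existing library), which writes OpenSCAD source:

```python
# Sketch: emit parameterized OpenSCAD rather than hard-coded geometry.
# plate_with_holes() is an illustrative name, not a real library function.

def plate_with_holes(w, d, t, hole_r, margin):
    """Return OpenSCAD source for a plate with a corner hole pattern."""
    corners = [(x, y)
               for x in (margin, w - margin)
               for y in (margin, d - margin)]
    holes = "\n".join(
        f"    translate([{x}, {y}, -1]) cylinder(h={t + 2}, r={hole_r}, $fn=32);"
        for x, y in corners)
    return (
        "difference() {\n"
        f"    cube([{w}, {d}, {t}]);\n"
        f"{holes}\n"
        "}\n")

scad = plate_with_holes(w=60, d=40, t=3, hole_r=1.6, margin=5)
print(scad)
```

The point is that the same generator covers any plate size and hole pattern, which is exactly the kind of reuse a model with good planning would set up on its own.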
I get that CAD interfaces are terrible - but if I imagine the technological utopia of the future - using the English language as the interface sounds terrible no matter how well you do it. Unless you are paraplegic and speaking is your only means of manipulating the world.
I much prefer the direction of sculpting with my hands in VR, pulling the dimensions out with a pinch, snapping things parallel with my fine motor control. Or sketching on an iPad, where dragging a sketch to extrude it feels natural, etc. These UIs could be vastly improved.
I get that LLMs are amazing lately, but perhaps keep them somewhere under the hood where I never need to speak to them. My hands are bored and capable of a very high bandwidth of precise communication.
I wanted to use this process (LLM -> OpenSCAD) a few months ago to create custom server rack brackets (ears) for externally mounting the water-cooling radiator of the server I am building. I ended up learning about 3D printing, using SolidWorks (it has great built-in tutorials), and did this the old-fashioned way. This process may work for refining parts against very well-known objects, e.g. an iPhone, but given the amount of refinement, back and forth, and verbosity needed, and the low acceptance rate, I do not believe we're close to using these tools for CAD.
As someone who enjoys doing CAD and spends a fair amount of time doing contortions to get OpenSCAD to do relatively simple things like bevels and chamfers, I’d say this is interesting because of the model ranking, but ultimately pointless because LLMs do not really have a mental model of things like CSG and actual relative positioning - they can do simple boxes and cylinders with mounting holes, but that’s about it.
It would be interesting to see how far one could get with fine-tuning and RL. One could, for example, take free SCAD models, render 2D pictures from different angles, and ask an LLM to recreate the 3D design in the SCAD language. Then compare volumetrically how close they are and provide feedback.
Once that is done, ask the LLM to create a prompt and compare outputs, etc.
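A minimal sketch of that volumetric-comparison reward, with the voxelization step itself assumed (here the models are plain sets of occupied cells rather than real meshes):

```python
# Sketch of the "compare volumetrically" scoring step: rasterize the
# reference model and the LLM-generated model into voxel sets, then score
# overlap with intersection-over-union. Producing the voxel sets from real
# geometry (e.g. via a mesh library) is assumed, not shown.

def voxel_iou(a: set, b: set) -> float:
    """IoU of two voxelized solids; 1.0 means identical occupancy."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def box(x0, y0, z0, x1, y1, z1):
    """Axis-aligned solid box as a set of unit voxels."""
    return {(x, y, z)
            for x in range(x0, x1)
            for y in range(y0, y1)
            for z in range(z0, z1)}

reference = box(0, 0, 0, 10, 10, 10)    # 1000 voxels
generated = box(0, 0, 0, 10, 10, 8)     # 800 voxels, fully inside the reference
print(voxel_iou(reference, generated))  # 800 / 1000 = 0.8
```

A scalar like this is exactly the shape of signal RL fine-tuning needs: dense, automatic, and indifferent to how the SCAD code was written.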
I 3D printed a replacement screw cap for something that GPT-4o designed for me with OpenSCAD a few months ago. It worked very well and the resulting code was easy to tweak.
Good to hear that newer models are getting better at this. With evals and RL feedback loops, I suspect it's the kind of thing that LLMs will get very good at.
Vision language models can also improve their 3D model generation if you give them renders of the output: "Generating CAD Code with Vision-Language Models for 3D Designs" https://arxiv.org/html/2410.05340v2
OpenSCAD is primitive. There are many libraries that may give LLMs a boost. https://openscad.org/libraries.html
Your prompts are very long for how simple the models are; using a CAD package would be far more productive.
I can see AI being used to generate geometry, but not a text-based one; it would have to be able to reason with 3D forms and do differential geometry.
You might be able to get somewhere by training an LLM to make models with a DSL for Open Cascade, or any other sufficiently powerful modelling kernel. Then you could train the AI to make query based commands, such as:
// places a threaded hole at every corner of the top surface (maybe this is an enclosure)
CUT hole(10mm,m3,threaded) LOCATIONS surfaces().parallel(Z).first().inset(10).outside_corners()
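As a toy illustration, that query chain can be prototyped in plain Python (every class and method name here is hypothetical; a real version would sit on a kernel like Open Cascade):

```python
# Toy prototype of query-based modelling commands like
#   surfaces().parallel(Z).first().inset(10).outside_corners()
# All names are hypothetical; only the query pattern is the point.

class Face:
    """An axis-aligned planar face: rectangle [x0,x1] x [y0,y1] at height z."""
    def __init__(self, x0, y0, x1, y1, z, normal):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.z, self.normal = z, normal

    def inset(self, m):
        # Shrink the face outline by margin m on all sides.
        return Face(self.x0 + m, self.y0 + m, self.x1 - m, self.y1 - m,
                    self.z, self.normal)

    def outside_corners(self):
        return [(self.x0, self.y0, self.z), (self.x1, self.y0, self.z),
                (self.x1, self.y1, self.z), (self.x0, self.y1, self.z)]

class FaceQuery:
    def __init__(self, faces):
        self.faces = faces
    def parallel(self, axis):
        # Keep only faces whose normal lies along the given axis.
        return FaceQuery([f for f in self.faces if f.normal == axis])
    def first(self):
        # Pick the topmost matching face, i.e. the enclosure lid.
        return max(self.faces, key=lambda f: f.z)

# A 100 x 60 x 20 enclosure: query its top face, inset 10, get hole centres.
faces = [Face(0, 0, 100, 60, 20, "Z"), Face(0, 0, 100, 60, 0, "Z")]
corners = FaceQuery(faces).parallel("Z").first().inset(10).outside_corners()
print(corners)  # [(10, 10, 20), (90, 10, 20), (90, 50, 20), (10, 50, 20)]
```

The LLM only has to emit the one-line query; resolving it to coordinates is deterministic kernel work, which is where the robustness comes from.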
This has a better chance of being robust, as the LLM would just have to remember common patterns, rather than manually placing holes in 3D space, which is much harder.

I think it would be better to use an existing programming language for CAD so that the LLM has more training data already. Therefore I'm working on LuaCAD (https://github.com/ad-si/LuaCAD), which is a Lua frontend for OpenSCAD.
It's so cool to see this post, and so many other commenters with similar projects.
I had the same thought recently and designed a flexible bracelet for Pi Day using OpenSCAD and a mix of some of the major AI providers. It's cool to see other people doing similar projects. I'm surprised how well I can do basic shapes in OpenSCAD with these AI assistants.
Makes you wonder if there is a place in the pipeline for generating G-code (motion commands that run CNC mills, 3d printers etc.)
Being just a domestic 3D printer enthusiast, I have no idea what the real-world issues are in manufacturing with CNC mills; I'd personally enjoy an AI telling me which of the 1000 possible combinations of line width, infill %, temperatures, speeds, wall generation params, etc. to use for a given print.
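For a sense of what generated motion commands look like, here is a toy Python emitter for a single square perimeter; the extrusion (`E`) values and feed rate are deliberately simplified stand-ins, not slicer-accurate:

```python
# Sketch: emit G-code for one square perimeter at a given layer height.
# A real slicer derives E from line width, layer height, and filament
# diameter; the flat 0.05 mm^3-per-mm factor here is a toy placeholder.

def square_perimeter(size, z, feed=1200):
    pts = [(0, 0), (size, 0), (size, size), (0, size), (0, 0)]
    lines = ["G90            ; absolute positioning",
             f"G1 Z{z:.2f} F{feed}"]
    e = 0.0
    for i, (x, y) in enumerate(pts):
        if i == 0:
            lines.append(f"G0 X{x:.2f} Y{y:.2f}")  # travel move, no extrusion
        else:
            e += size * 0.05                       # toy extrusion increment
            lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f}")
    return "\n".join(lines)

gcode = square_perimeter(20, 0.2)
print(gcode)
```

Even this trivial emitter shows why slicers exist: the interesting decisions (flow, speed, ordering) live in the parameters, not in the geometry itself.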
I tried this a few months back with Claude 3.5 writing CadQuery code in Cline, with render photos for feedback. I got it to model a few simple things, like a Terraforming Mars city, fairly nicely. However, it still involved a fair bit of coaching. I wrote a simple script to automate the process further, but it went off the rails too often.
I wonder if the models' improved image understanding also leads to better spatial understanding.
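The render-for-feedback step of such a loop can be sketched around the real OpenSCAD CLI (`openscad -o out.png file.scad` renders headlessly); the LLM call itself is a placeholder here, and only the command construction is shown:

```python
# Sketch of the render half of a render-feedback loop: write the generated
# source to disk and build the OpenSCAD CLI invocation that renders it to a
# PNG, which would then be handed back to a vision model for critique.
import os
import pathlib
import tempfile

def render_command(source: str, out_png: str) -> list:
    """Build the OpenSCAD CLI command that renders `source` to a PNG."""
    fd, path = tempfile.mkstemp(suffix=".scad")
    os.close(fd)
    pathlib.Path(path).write_text(source)
    # The real loop would execute this with subprocess.run(cmd, check=True),
    # then attach out_png to the next model turn.
    return ["openscad", "-o", out_png, "--imgsize=800,600", path]

cmd = render_command("cube([10, 10, 10]);", "preview.png")
print(" ".join(cmd))
```

Keeping the render deterministic and scripted means the only nondeterministic component in the loop is the model itself, which makes going "off the rails" at least diagnosable.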
Really curious how you got these tools to talk so elegantly to each other? Is this an MCP implementation, or?
> To my surprise, Zoo’s API didn’t perform particularly well in comparison to LLMs generating STLs by creating OpenSCAD
This is interesting. As foundational models get better and better, does having proprietary data lose its defensibility more?
I spent #Marchintosh trying to get a usable Apple eMate stylus out of ChatGPT and OpenSCAD.
I took measurements.
I provided contours.
Still have a long way to go. https://github.com/itomato/EmateWand
Oh, poor itty-bitty Skynet won't be able to create Terminators without mastering CAD modelling, so let us totally teach it
About a year ago I had a 2D drawing of a relatively simple part. I uploaded it to ChatGPT and asked it to model it in CadQuery. It required some coaching and manual post-processing, but it was able to do it. I have since moved to SolveSpace, since even after using CadQuery for years I was spending 50% of the time finding some weird structure to continue my drawing from. SolveSpace is simply much more productive for me.
Wow! As someone that's written openscad scripts manually I can get real excited about this.
While this all is super cool, and I don't want to downplay TFAs efforts - I'm kind of at my wit's end here. You've gotten me at a bad time.
I use a computer every day to do electrical CAD and (sometimes, and poorly) mechanical CAD. Getting frustrated with/at software is a daily occurrence, but CAD software is some of the worst.
Egg on my face, maybe, but for MCAD I use Fusion360. It's constantly bugging out - and I'm not even talking about the actual modeling workflow or tools! I'll get windows disappearing and floating above other windows. It won't work if you don't install updates within a few days of their release. If I go offline it pops banners in my face. Sometimes, duplicate copies of itself open up, presumably because the updater put a new binary somewhere on my machine that Spotlight indexed while I had the previous version running... Sometimes you can't delete files in the cloud because they're referenced by... other deleted files?? A few weeks ago I installed an update like a good boy, and it literally broke the functionality of _being able to click on things_ in the model tree.
On the ECAD side, I use KiCAD for most personal/professional projects these days - very, very few complaints there, actually. However, a new client is using Altium, so here we go... My primary machine is a M3 Max MBP, and I know it's running through the ARM translation layer inside Parallels, but Altium was completely unusable! Opening the component library or moving the explorer window took multiple seconds.
I dusted off an X1 Carbon, which admittedly is 6-ish years old, but it was even worse there! You must understand, for schematic editing, this software's primary use is to drag rectangles around and connect them with lines. How difficult can this be? I had to get a new Windows machine just to be able to navigate around Altium without constant stuttering. Honestly, even on this new machine it's still slower than I'd like. This software is upwards of $5k a year for a single seat license! [1]
I grew up using Macromedia Studio 8, which I installed from a box of CDs, watching my father use his Pentium 4 machine to make complex block diagrams in Visio 2002. In the mid 2000s he was laying out PCBs in PADS without any issues on a laptop! Now a single tab of Lucidchart takes more memory than my old PC could even address, I can't resize the godforsaken library viewer window in Altium on a machine with 16 cores, and if I want to change the settings on my mouse I have to sit through Logitech asking me if I want to log in and join the mouse community to share usage tips? What the hell is going on!?
So, forgive me, and not to go full Casey Muratori, but when I see companies like AdamCAD trying to push this new paradigm, I just can't handle it. Can we please, please, please, just go back to making decent software that enables me to do my work without constant hassle? I don't give a single damn about the AI features if I can't count on the software opening and being usable day in and day out. I lose actual time each and every week to dealing with software issues and I'm so so over it.
[1] $5k for a single site license, to attain which you'll have to sit in a sales meeting for a half hour, during which the sales rep tells you that - technically the EULA establishes a 0.5 mile radius for your "single site" but - don't worry - using it at home 3.5 miles away is totally okay, he's not going to make you buy two $5k licenses - thank god!
I've done this, and printed actual models AIs generated. In my experience Grok does the best job with this - it one-shots even the more elaborate designs (with thinking). Gemini often screws up, but it sometimes can (get this!) figure things out if you show it what the errors are, as a screenshot. This in particular gives me hope that some kind of RL loop can be built around this. OpenAI models screw up and can't fix the errors (common symptom: generating a slightly different model with the same exact flaws). DeepSeek is at about the same level with OpenSCAD as OpenAI. I have not tried Claude.
ah yes, let’s interface all our programs with text
This is one of the more compelling "LLM meets real-world tool" use cases I've seen. OpenSCAD makes a great testbed since it's text-based and deterministic, but I wonder what the limits are once you get into more complex assemblies or freeform surfacing.
Curious if the real unlock long-term will come from hybrid workflows: LLMs proposing parameterized primitives, humans refining them in a UI, then LLMs iterating on feedback. Kind of like pair programming, but for CAD.
The future: "and I want a 3mm hole in one side of the plate. No the other side. No, not like that, at the bottom. Now make it 10mm from the other hole. No the other hole. No, up not sideways. Wait, which way is up? Never mind, I'll do it myself."
I'm having trouble understanding why you would want to do this. A good interface between what I want and the model I will make is to draw a picture, not write an essay. This is already (more or less) how Solidworks operates. AI might be able to turn my napkin sketch into a model, but I would still need to draw something, and I'm not good at drawing.
The bottleneck continues to be having a good enough description to make what you want. I have serious doubts that even a skilled person will be able to do it efficiently with text alone. Some combo of drawing and point+click would be much better.
This would be useful for short enough tasks like "change all the #6-32 threads to M3" though. To do so without breaking the feature tree would be quite impressive.