I've been vibe circuit building since the 1970s, but that's not what this is about, is it? ;-)
Years ago, at Pumping Station One in Chicago, I watched someone struggle to drive multiple LEDs from an Arduino in his project. He wondered why the LEDs got dimmer as more of them were lit.
I looked at the original schematic and at what he had built, and noticed a difference. The original design had a resistor on each LED, but he had decided that was redundant and refactored it down to a single shared resistor. Current still flowed, so the circuit still worked, but the current limiting that one resistor provided was now shared across every active LED, causing the progressive dimming as more LEDs lit up.
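The dimming is easy to ballpark. A rough sketch with numbers of my own choosing (a 5 V supply, ~2 V red LEDs, a 220 Ω resistor, and idealized identical LEDs that split current evenly — none of this is from the original circuit):

```python
# Assumed values, not from the original story.
V_SUPPLY = 5.0   # supply voltage, volts
V_F = 2.0        # LED forward voltage, volts
R = 220.0        # current-limiting resistor, ohms

def led_current_ma(n_lit, shared_resistor):
    """Approximate per-LED current in mA for n_lit parallel LEDs."""
    total_ma = (V_SUPPLY - V_F) / R * 1000  # current through one resistor
    if shared_resistor:
        # One resistor feeds all lit LEDs, so (idealized) they split it.
        return total_ma / n_lit
    # One resistor per LED: each LED gets the full current regardless of n.
    return total_ma

for n in (1, 2, 4):
    print(f"{n} LED(s): per-LED resistor {led_current_ma(n, False):.1f} mA, "
          f"shared resistor {led_current_ma(n, True):.1f} mA")
```

With a resistor per LED, each stays near 13.6 mA; with one shared resistor, four lit LEDs drop to roughly a quarter of that each, which is exactly the dimming he saw.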
It turned out his background was in software, where the assumptions about what matters are very different. Cutting out redundant code is an important skill there.
I saw it as a cognitive impedance mismatch being played out in real life.
I assume the same is true for an LLM/AI attempting the same leap.
Why, yes I am.
I know Ben is having some fun, perhaps making a valid point, with the burning component on the breadboard. I think it does underscore a difference between software vibing and hardware vibing—crash vs. fire.
But in fact vibe-breadboarding has drawn me deeper into the electronics hobby. I have learned more about op-amps and analog computing in the past two months in large part thanks to Gemini and ChatGPT pointing the way.
I now know about BAT54S Schottky diodes and how they can protect ADC inputs. I have found better ADC chips than the ones that come pre-soldered on most ESP32 dev boards (and have breadboarded them up with success). These were often problems I didn't know I should solve. (Problems that, for example, YouTube tutorials will disregard because they're demonstrating a constrained environment and are trying to keep it simple for beginners, I suppose.)
To be sure, I research what the LLMs propose, but now I have the language and a better picture in my mind of what to search for ("how do I protect ADC inputs from over- or under-voltages?"). (Hilariously, I often end up on the EE Stack Exchange, where there is often anything but a concise answer.)
5V USB power, through-hole op-amp chips… I'm not too worried about burning my house down.
If you have solid domain knowledge, LLMs are a force multiplier for electronic design. You just have to have a spider sense for “this is going off the rails”.
Beyond that, it does useful circuit review and part selection (or suggests alternative parts you didn't know existed), and is usually usefully skeptical. It's also great at quick back-of-the-napkin "can I just use an SMT ceramic here?" type calculations, especially handy for roughing out timings and that kind of thing.
If you know what you're doing with electronics design, I've found that leveraging an LLM to help come up with ideas, lay out block diagrams, and find parts can be super useful. Integrating Digi-Key or Mouser API support for parts pricing and inventory is also super handy. Using the distributor APIs also enables natural-language search, which isn't possible (or isn't easy) through the distributor websites: the LLM can quickly download a part's datasheet and read it as part of its search to verify whether the part meets your requirements.
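The selection step of that workflow can be sketched in a few lines. Everything here is hypothetical for illustration — the field names and sample data are made up, and the real Digi-Key/Mouser APIs require registration and have their own schemas:

```python
# Hypothetical sketch of the distributor-assisted part search described above.
# The record shape ("mpn", "price", "stock") is invented for illustration.
def filter_parts(candidates, min_stock=1, max_price=None):
    """Keep only in-stock candidates under the price cap, cheapest first."""
    hits = [p for p in candidates
            if p["stock"] >= min_stock
            and (max_price is None or p["price"] <= max_price)]
    return sorted(hits, key=lambda p: p["price"])

# In a real agent loop, `candidates` would come from a distributor API call,
# and the LLM would then fetch each surviving part's datasheet to verify
# the electrical requirements before recommending it.
sample = [
    {"mpn": "BAT54S", "price": 0.12, "stock": 50000},
    {"mpn": "OBSOLETE-1", "price": 0.05, "stock": 0},
]
print(filter_parts(sample, max_price=0.50))
```

The hard filtering (stock, price) is cheap and deterministic; the LLM's job is the fuzzy part on top — reading datasheets and judging fit.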
Not quite the same thing, but recently I wanted to make adapter clips for connecting PowerBlocks to a barbell, making them usable as weights for deadlifts/benching. I have Fusion 360 experience, so I designed something to 3D print as usual, but the problem was that PLA, even at 100% infill, is pretty unsafe for holding 90 lb blocks.
The logical next step is to use metal, but that's outside my hobby tools. I found that JLCPCB offered sheet-metal fabrication, but I had no experience with sheet-metal design. I went to ChatGPT and was actually really impressed by how well it was able to guide me from design to final model file. I received the adapters last week and was really impressed by how nicely they turned out.
All of that to say, AI-assisted design is actually lowering the bar of entry for a whole lot of problems and I am quite happy about it.
What would stop us from using something like LTspice to validate the circuit before risking physical components?
This seems ~identical to the situation where we can use a compiler or parser to return syntax errors to the agent in a feedback loop.
I don't know exactly what the tool calling surface would look like, but I feel like this could work.
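A minimal version of that loop might look like the sketch below (assuming ngspice is installed and on PATH; the error-line parsing is a guess at what simulator diagnostics look like, not ngspice's documented output format):

```python
# Sketch of the proposed feedback loop: run a netlist through ngspice in
# batch mode and hand any diagnostics back to the agent, the way compiler
# errors feed a coding agent.
import os
import subprocess
import tempfile

def extract_errors(spice_output):
    """Pull lines that look like diagnostics out of simulator output."""
    return [line.strip() for line in spice_output.splitlines()
            if "error" in line.lower() or "warning" in line.lower()]

def simulate(netlist):
    """Run a netlist in ngspice batch mode, return a list of diagnostics."""
    with tempfile.NamedTemporaryFile("w", suffix=".cir", delete=False) as f:
        f.write(netlist)
        path = f.name
    try:
        proc = subprocess.run(["ngspice", "-b", path],
                              capture_output=True, text=True, timeout=30)
        return extract_errors(proc.stdout + proc.stderr)
    finally:
        os.unlink(path)

# The agent loop would be: propose netlist -> simulate() -> if diagnostics
# come back non-empty, feed them into the next prompt and retry.
```

It only catches what the simulator can see, of course — a netlist can simulate cleanly and still release the magic smoke on a real breadboard — but it would filter out a lot of hallucinated topology before anything gets wired up.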
If you don't know Ben Eater, check out his YouTube channel: https://www.youtube.com/beneater
WARNING: nerd snipe material.
Check out https://silixon.io/ Have used them and they’re pretty legit. Early on but have potential
I've modified my Zigbee bathroom LED light. I replaced the daughterboard with an ESP32, integrated a humidity sensor, a presence sensor, another relay, and a power circuit, and I now have a bathroom light that turns on automatically when someone is in the room and runs the extractor only if the humidity is above a certain level.
I did all this by taking photos of the circuits and asking Gemini how to do it.
Not LLM, just good old Monte Carlo. I made pedalgen [1] some time ago; it generates random guitar pedals. Check the Results section.
In the real world where parts have costs and mistakes have consequences, the GenAI "YOLO" mode doesn't work as well.
I've been using Claude Code to ssh into a Raspberry Pi in a subprocess, so I can chat with the AI from my more powerful machine, and let it write and run the code on the Pi. It's also good for writing scripts and uploading them to an ESP-32 with the Arduino CLI.
As an AI skeptic, I’ve been brought around to using Claude Code to understand a codebase, like when I need to quickly find where something happens through a tower of abstractions. Crucially, this relies on Claude actually searching my codebase using grep. It’s effectively automated guess and check.
I wonder if a SPICE skill would make LLMs safer and more useful in this area. I’m a complete EE newbie, and I am slowly working through The Art of Electronics to learn more. Being able to feed the LLM a circuit diagram—or better yet, a photo of a real circuit!—and have it guess at what it does and then simulate the results to check its work could be a great boon to hands-on learning.
I haven't had much success yet with this. My ratings follow.
Reading and interpreting datasheets: A- (this has gotten a LOT better in the last year)
Give netlist to LLM and ask it to check for errors: C (hit or miss, but useful because catching ANY errors helps)
Give Image to LLM and ask it to check for errors: C (hit or miss)
Design of circuit from description: D- (hallucinates parts, suggests parts for the wrong purpose, suggests obsolete parts, cannot make diagrams. Not an F because its textual descriptions have gotten better: when describing which nodes connect to each other, it's no longer always wrong. You will have to re-check EVERYTHING, though, so its usefulness is doubtful)
> Thought for 37s
> ...
> Ah - that makes sense, that's why it's on fire
oh how very relatable, I've had similar moments.
I knew about SEDs (smoke emitting diodes) and LERs (light emitting resistors), but what do you call the inductor version?
Probably works pretty well with atopile.
Previous discussion: https://news.ycombinator.com/item?id=44542880
Semi-related: what are your workflows for PCB design? I need to build an AFE + BLE MCU for a BCI, and having no EE background, my workflow is: KiCad -> buy components -> breadboard testing -> done?? -> order fully manufactured PCB?
I know nothing...
This brings up a much larger discussion. How bad are LLMs at engineering? Would you trust one to build a high voltage circuit? How about a bridge?
MCP Server for KiCAD:
I wonder how many shots it took him to get this perfect one.
I have been working on a tool that aids in circuit tuning: model the circuit equations as Python functions, treat the solution space as discrete component values, auto-solve for a target spec, build the circuit, record measurements, fit the error, and repeat until the experiment matches predictions. It adjusts nearly every parameter between tests and converges surprisingly fast (25% to 2% error in 3 tests for an active band-pass filter).
The MVP was hand-coded, leaned heavily on sympy and linear fits, and worked for simple circuits. The current PoC only falls back to sympy to invert equations, switches to GPR when convergence stalls, and uses scipy's robust differential evolution for the combinatorial search. The MVP works, but now I have a mountain of slop to clean up and some statistics homework to do to understand the limitations of these algorithms. It's nice to validate ideas so quickly, though.
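A toy version of the discrete-search idea, stripped of everything interesting (no sympy, GPR, or differential evolution — just exhaustive search over standard E12 values for an RC low-pass corner frequency; the component ranges are arbitrary choices of mine):

```python
# Toy discrete-value search: pick standard components hitting a target spec.
import itertools
import math

E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
RESISTORS = [m * d for d in (1e2, 1e3, 1e4) for m in E12]   # 100 ohm .. 82k
CAPS = [m * d for d in (1e-9, 1e-8, 1e-7) for m in E12]     # 1 nF .. 820 nF

def best_rc(f_target):
    """Exhaustive search for the (R, C) minimizing relative cutoff error."""
    def err(rc):
        r, c = rc
        return abs(1 / (2 * math.pi * r * c) - f_target) / f_target
    return min(itertools.product(RESISTORS, CAPS), key=err)

r, c = best_rc(1000.0)  # aim for a 1 kHz corner
f = 1 / (2 * math.pi * r * c)
print(f"R={r:.0f} ohm, C={c * 1e9:.0f} nF -> f_c={f:.1f} Hz")
```

With only E12 values the best achievable corner lands within a few percent of 1 kHz, which is a nice illustration of why the parent's tool then measures the built circuit and iterates rather than trusting the nominal math.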
We'll eventually have a discarded vibe-coded silicon wafer batch. Someone will be insane enough to pay for it.
Good metaphor for 2026.
The system is on fire
yes. but it is not smooth sailing.
If you know Ben Eater, you know he built that circuit on purpose lol.
Been working on this exact problem for a while now. The core issue isn't that LLMs are bad at circuits, it's that we're asking them to do novel design when they should be doing selection and integration.
My project (https://phaestus.app/blog) takes a different approach: pre-validated circuit blocks on a fixed 12.7mm grid with standardized bus structures. The LLM picks which blocks you need and where they go, but the actual circuit design was done by humans and tested. No hallucinated resistor values, no creative interpretations of datasheets.
It's the same insight that made software dependencies work. You don't ask ChatGPT to write you a JSON parser from scratch, you ask it which library to use. Hardware should work the same way.
Still WIP and the block library needs expanding, but the constraint-based approach means outputs are manufacturable by construction rather than "probably fine, let's see what catches fire."