Like the idea, but this is definitely Unicode and not ASCII. It's hard to believe someone finished a piece of this length and still misunderstood the distinction, especially when some of the examples contain emoji. Alternatively, they chose a misleading name on purpose. Why? Someone mentioned TUI, which sidesteps the issue entirely.
I love it conceptually, but I can't get past the abject failure of the right edges of boxes to be properly aligned. Because of a mishmash of non-fixed-width characters (emoji, etc.), each line has a slightly different length and the right edges of boxes are a jagged mess and I can't see anything else until that's cleaned up.
This type of issue comes up in the video game development world, perhaps in part because modern off-the-shelf engines are ready to render high-quality assets and assets are so available, either internally or from an asset store. That combination pushes developers into putting high-quality assets into games from the start, skipping the "grey box" steps.
I've seen it on a number of projects now: high-quality assets get pushed into early builds, execs' eyes light up because they feel like they're seeing a near-final product, and they stay blind to the issues and underdeveloped systems below. This can start projects off on a bad footing because expectations quickly become skewed and focus goes to the wrong places.
At one studio there was a running joke about trees swaying, because someone had decorated an outdoor level with simulated trees. During an early test the execs got so distracted by how much the trees swayed, and whether it was too much or too little, that they completely ignored the gameplay and content that was supposed to be under review. This repeated itself enough times that meetings would begin with someone declaring "We are not here to review the trees, ignore the trees!"
I've brought this issue up again more recently with the advent of AI: with things like Sora, video clips can be generated and stitched together into subjectively exciting movie trailers. This now has people declaring that AI movies are around the corner. To me it looks like the same level of excitement as watching the trees sway. An AI trailer looks much closer to a shipping product than it actually is, because the underlying challenges are far from solved; nothing is said about the script, pacing, character development, story, etc.
Most ASCII/Unicode-based diagrams spat out by AI have misaligned boxes, similar to the ones generated in the article.
I'm not affiliated, but you can run them through something like ascii-guard (https://github.com/fxstein/ascii-guard), a linter that cleans them up. It beats doing it by hand after repeatedly telling the AI to fix it and watching it fail.
Sorry to be pedantic, but there are a bunch of non-ASCII characters (↑, among others) in the mockups, and the article contains a lot of AI tropes.
A really interesting article, and I'm likely to give it a shot at work. I'm grateful for it, and yet I found it difficult to get through because of a sense of "LLM style" in the prose.
I won't speculate on whether the post is AI-written or whether the author has adopted quirks from LLM outputs into their own way of writing because it doesn't really matter. Something about this "feeling" in the writing causes me discomfort, and I don't even really know why. It's almost like a tightness in my jaw or a slight ache in my molars.
Every time I read something like "Not as an aesthetic choice. Not as nostalgia. *But as a thinking tool*" in an article I had until then taken on faith as produced in the voice of a human being, it feels like a letdown. Maybe it's just the sense that I believed I was connecting with another person, albeit indirectly, and then I feel the loss of that. But that's not entirely convincing, because I genuinely found the points this article was making interesting, and no doubt they came originally from the author's mind.
Since this is happening more and more, I'd be interested to hear what others' experiences with encountering LLM-seeming blog posts (especially ones with inherently interesting underlying content) have been like.
We built https://github.com/trabian/fluxwing-skills to handle some of this use case. For the most part it's now on the shelf but we use the react-ink toolkit for similar things. It's faster and aligns better.
Author here. High-level:
- Problem: AI UI generators are high-fidelity by default → teams bikeshed aesthetics before structure is right.
- Idea: use ASCII as an intentionally low-fidelity “layout spec” to lock hierarchy/flow first.
Why ASCII:
- forces abstraction (no colors/fonts/shadows)
- very fast to iterate (seconds)
- pasteable anywhere (Slack/Notion/GitHub)
- editable by anyone
Workflow:
- describe UI → generate ASCII → iterate on structure/states → feed into v0/Lovable/Bolt/etc → polish visuals last
It also facilitates discussion:
- everyone argues about structure/decisions, not pixels
- feedback is concrete (“move this”, “add a section”), not subjective
More advanced setups could integrate user/customer support feedback to automatically propose changes to a spec or PRD, enabling downstream tasks to later produce PRs.
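To make the "layout spec" idea concrete, here's a minimal throwaway sketch (pure Python; the panel helper and its contents are purely illustrative, not part of any tool in the workflow above):

    # Toy example: render one box of a low-fidelity layout spec with an aligned right edge.
    def panel(title, lines, width=40):
        inner = width - 2
        rule = "+" + "-" * inner + "+"
        rows = [rule, "|" + (" " + title).ljust(inner) + "|", rule]
        rows += ["|" + (" " + line).ljust(inner) + "|" for line in lines]
        rows.append(rule)
        return "\n".join(rows)

    print(panel("Dashboard", [
        "[ ] Sidebar (collapsible)",
        "[ ] Recent activity list",
        "[ ] Primary action button",
    ]))

The point isn't the helper itself; it's that the artifact stays plain text, so anyone can edit it and the LLM can regenerate it in seconds.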
I like the idea but I think it's going to be hard to put this particular genie back in the bottle. As an engineering leader, I prefer low fidelity designs early on, but practically no one else in my company wants that.
Designers have learned Figma and it's the de facto tool for them; doing something else is risky.
Product leaders want high fidelity. They love the AI tools that let them produce high fidelity prototypes.
Some (but not all) engineers prefer it because it means less decision making for them.
I like to make these kinds of mock ups using https://asciiflow.com. Some of the components from the article paste nicely there.
OK but why not just go back to Balsamiq and make it 'executable'?
You might believe that TUI is neutral, but it really isn't - there are a bajillion different ways to make a TUI / CLI.
Trouble with this is, it's pretty much LLM-only. I don't want to type out a request for Claude to draw a box for me and describe where, and I don't want to be pasting box-drawing characters. I want to click & drag. This is just boxes, arrows, and labels, which are all WAY faster to make by hand.
> Let me show you what ASCII prototyping looks like.
In my web browser, on your website: broken. None of the things align and it looks really bad.
Recently I was using docling to transform some support-site HTML into Markdown, replacing UI images with inline descriptive text. An LLM created all the descriptions. My hope was that descriptions like “a two pane..below the hamburger…input field with the value $1.42…” would allow an LLM to understand the UI when given as context in a prompt. Maybe I could just put ASCII renderings inline instead.
I think this is a good technique to be familiar with, although in a lot of situations I've achieved similar value by simply feeding the underlying JSON data objects corresponding to the intended UI state back into the coding agent. It doesn't render quite as nicely, but it is often still human-readable, and more importantly it's interpretable both by the LLM and procedurally, meaning you can fold the results back into your agentic (and/or conventional testing) development loop. It's not quite as cool, but I think it's a bit more practical, especially for earlier-stage development.
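To illustrate (the schema and field names here are invented for the example, not from any real project), the state object can be as small as a dict you serialize and paste into the prompt:

    import json

    # Hypothetical UI state for a settings screen.
    ui_state = {
        "screen": "settings",
        "sidebar": {"collapsed": True},
        "modal": None,
        "rows": [
            {"label": "Notifications", "control": "toggle", "value": False},
            {"label": "Theme", "control": "select", "value": "dark"},
        ],
    }

    # Paste this into the agent prompt instead of (or alongside) a rendered mock.
    print(json.dumps(ui_state, indent=2))

Not as pretty as an ASCII mock, but trivially diffable and easy to assert on in tests.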
It is news to me that manipulating ASCII art is something AI can do well! I remember this being something LLMs were all particularly horrible at. But I just checked and it seems to work at least with Opus 4.5.
claude(1) with Opus 4.5 seems to be able to take the examples in that article, and handle things like "collapse the sidebar" or "show me what it looks like with an open modal" or "swap the order of the second and third rows". I remember not long ago you'd get back UI mojibake if you asked for this.
Goes to show you really can't rest on your laurels for longer than 3 months with these tools.
I really like this idea, but I get distracted when the vertical bars don't line up!
+--------+
|        |
| ASCII! |
|        |
+--------+
What tools can we actually use to draw ASCII manually if desired?
None are mentioned. E.g. I made https://cascii.app for exactly this purpose.
I really like this idea. I’ve seen teams get stuck quibbling about details too early while using Figma. I see how this totally sidesteps that problem, similar to working with analog drawings, but with the advantage that it’s still electronic so you can stuff it into your repo and version it and also feed it into an LLM easily. I’m really curious how the LLM “sees” it. Sure, it’s characters, but it’s not language. Regardless, very cool idea. Can’t wait to give it a try.
Nice! Very true that if you show a group a button with a given color, for example, they will waste the meeting discussing the color rather than what the button should do. ASCII is a nice way to avoid that.
Somewhat related, I suppose: for various reasons I'm often in a situation where I can use matplotlib to make a plot, but then I have to save it and SCP it locally to view it. I got tired of that and started making command-line plots so I can see what I want right there. It's not insanely detailed, but for my needs it's been fine. This definitely got me thinking, lol.
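For anyone curious, a dependency-free sketch of the kind of throwaway terminal plot I mean (the helper and its numbers are just illustrative):

    import math

    def ascii_plot(ys, height=10, width=60, mark="*"):
        # Crude terminal line plot: scale values onto a character grid.
        lo, hi = min(ys), max(ys)
        span = (hi - lo) or 1.0
        grid = [[" "] * width for _ in range(height)]
        for col in range(width):
            y = ys[col * (len(ys) - 1) // (width - 1)]  # nearest sample
            row = height - 1 - int((y - lo) / span * (height - 1))
            grid[row][col] = mark
        print(f"max={hi:.2f}")
        print("\n".join("".join(r) for r in grid))
        print(f"min={lo:.2f}")

    ascii_plot([math.sin(x / 5) for x in range(120)])

Nothing fancy, but it's enough to spot a trend over SSH without the save-and-SCP round trip.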
Great idea, thanks for sharing! Tried your prompts with ChatGPT and Claude, then iterated on them. The ASCII doesn't render perfectly in the web interface but looks good when copy/pasted into a text editor. Key benefit: I used to iterate on layout by generating HTML+Tailwind directly, which burns tokens fast. This ASCII approach lets you nail the structure first without the token cost. Much better for free-tier usage. Appreciate the writeup!
How come pen & paper are omitted from both article and discussion?
> Examples: UIs in ASCII
The examples are using non-ASCII characters. They also don’t render with a consistent grid on all (any?) browsers.
Maybe they meant plain-text-driven development?
Good idea to build low-fidelity mockups. SVG in my opinion is a better format for this job than text. For instance, in the screenshots from the article, not a single example is properly aligned. That is distracting and makes these assets hard to share.
Neat concept and very inspirational.
Is an ASCII/Unicode text UI the way to go here, or are there other UI formats even better suited to LLMs?
Argh.
I may suffer from some kind of PTSD here, but after reading a few lines I can't help but see the patterns of LLM-style writing everywhere in this article.
Wide character width is yet another billion-dollar mistake. It's impossible to make emoji look right in a monospaced font consistently across devices and programs.
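You can see the mismatch directly, assuming the third-party wcwidth package (the strings are arbitrary examples):

    # pip install wcwidth  (third-party; not in the standard library)
    from wcwidth import wcswidth

    for s in ["| A go! |", "| 🚀 go! |"]:
        # Both strings are 9 code points, but wcswidth() reports 9 vs 10 cells:
        # the emoji typically occupies two cells, so box edges drift even
        # though len() looks identical.
        print(repr(s), "len =", len(s), "cells =", wcswidth(s))

And even that is only an estimate; actual rendering still varies by terminal, font, and Unicode version.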
Welcome to the 1970s, when IBM published design guidelines that went along with the then-new 3270 terminal, including trying to keep response times very prompt (under a second) to prevent minds from wandering. This was supposed to allow non-technical users to use the full power of computers without having to master a command-line, teletype-style interface.
GUIs were then supposed to be the big huge thing that would let non-technical staff use computers without needing to grasp TUIs.
Spending a lot of time building tools inspired by and using ASCII nowadays...
graph-easy.online
printscii.com
This is a tangential point (this post is not really about TUIs; sort of the opposite) and I think lots of people know it already, but I only figured it out last week and can't resist sharing it: agents are good at driving tmux, and with tmux as a "browser", they can verify TUI layouts.
So you can draw layouts like this and prompt Claude or Gemini with them, and get back working versions, which to me is space alien technology.
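For the curious, the mechanics are roughly this (a hedged sketch: the session name and the app being tested are made up, but the tmux subcommands are standard):

    import subprocess, time

    def tmux(*args):
        subprocess.run(["tmux", *args], check=True)

    # Run the TUI in a fixed-size detached session, then capture the rendered
    # screen as plain text so the agent (or a test) can check the layout.
    tmux("new-session", "-d", "-s", "tui-check", "-x", "80", "-y", "24")
    tmux("send-keys", "-t", "tui-check", "python my_tui_app.py", "Enter")  # app is illustrative
    time.sleep(2)  # give it a moment to draw

    screen = subprocess.run(
        ["tmux", "capture-pane", "-t", "tui-check", "-p"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(screen)  # feed this text back to the agent, or assert on it in a test

    tmux("kill-session", "-t", "tui-check")

The agent doesn't need to "see" anything; capture-pane hands it the exact character grid the TUI drew.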