With every example I thought "yeah, this is cool, but I can see there's space for improvement," and lo! the author satisfied my curiosity and improved his technique further.
Bravo, beautiful article! The rest of this blog is at this same level of depth, worth a sub: https://alexharri.com/blog
The contrast enhancement seems simpler to perform with an unsharp mask in the continuous image.
It would probably produce a different-looking result, though.
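A minimal sketch of what I mean, using Pillow's built-in unsharp mask (the file name and parameters are just placeholders to play with):

```python
from PIL import Image, ImageFilter

# Sharpen the source image before any ASCII sampling happens.
img = Image.open("input.png").convert("L")

# Unsharp mask: blur the image, then add back the difference between the
# original and the blur, scaled by `percent`. Pillow ships this as a filter.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

sharpened.save("sharpened.png")
```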
Great breakdown and visuals. Most ASCII filters do not account for glyph shape.
It reminds me of how chafa uses an 8x8 bitmap for each glyph: https://github.com/hpjansson/chafa/blob/master/chafa/interna...
There are a lot of nitty-gritty concerns I haven't dug into: how to make it fast, how to handle colorspaces, or, like the author mentions, how to exaggerate contrast for certain scenes. But I think 99% of the time it will be hard to beat chafa. Such a good library.
EDIT - a gallery of (Unicode-heavy) examples, in case you haven't seen chafa yet: https://hpjansson.org/chafa/gallery/
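For anyone curious how a per-glyph coverage bitmap might be built, here's a rough illustration with Pillow (not chafa's actual code; chafa is C and does this far more carefully):

```python
from PIL import Image, ImageDraw, ImageFont
import numpy as np

def glyph_coverage(ch: str, size: int = 8) -> np.ndarray:
    """Render one character and downsample it to a size x size coverage map."""
    # The bundled bitmap font keeps this dependency-free; a real renderer
    # would use the terminal font at the cell's actual aspect ratio.
    font = ImageFont.load_default()
    canvas = Image.new("L", (16, 16), 0)
    ImageDraw.Draw(canvas).text((2, 2), ch, fill=255, font=font)
    # Shrinking averages the ink, giving per-cell coverage values in [0, 1].
    small = canvas.resize((size, size))
    return np.asarray(small, dtype=np.float32) / 255.0

print(glyph_coverage("#").round(1))
```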
> To increase the contrast of our sampling vector, we might raise each component of the vector to the power of some exponent.
How do you arrive at that? It's presented like it's a natural conclusion, but if I was trying to adjust contrast... I don't see the connection.
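(For reference, here's what the operation does on toy numbers of my own, assuming the components are normalized to [0, 1]; the numeric effect is easy to see, the motivation less so.)

```python
import numpy as np

# A made-up sampling vector, normalized to [0, 1].
v = np.array([0.2, 0.5, 0.8, 1.0])

# An exponent > 1 suppresses small components much more than large ones.
print(v ** 3)  # [0.008 0.125 0.512 1.   ]
```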
What a great post. There is an element of ASCII rendering in a pet project of mine, and I'm definitely going to try to integrate this work. From great constraints comes great creativity.
Well-written post. Very interesting, especially the interactive widgets.
That was so brilliant! I loved it! Thanks for putting it out :)
Nice! Now add colors and we can finally play Doom on the command line.
More seriously, using colors (probably not trivial, as it adds another dimension) and some well-chosen Unicode characters could produce really fancy renderings in consoles!
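The terminal side of the color part is at least straightforward; a tiny sketch using 24-bit ANSI escape codes and a half-block character (support varies by terminal):

```python
# \x1b[38;2;R;G;Bm sets a truecolor foreground, \x1b[48;2;R;G;Bm the background,
# and \x1b[0m resets. The upper half block gives two "pixels" per character cell.
for x in range(60):
    r = int(255 * x / 59)
    print(f"\x1b[38;2;{r};64;128m\x1b[48;2;{255 - r};32;64m▀\x1b[0m", end="")
print()
```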
Tell me someone has turned this into a library we can use
It's important to note that the approach described focuses on giving fast results, not the best results.
Simply trying every character, comparing its entire bitmap, and keeping the one that minimizes the distance to the target gives better results, at the cost of more CPU.
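A minimal sketch of that brute-force matching, assuming the glyph bitmaps have already been rasterized to small grayscale arrays matching the cell size (all names here are my own):

```python
import numpy as np

def best_char(tile: np.ndarray, glyph_bitmaps: dict[str, np.ndarray]) -> str:
    """Pick the character whose bitmap is closest to one image tile.

    tile: 2D float array covering one character cell of the target image.
    glyph_bitmaps: maps each candidate character to an array of the same shape.
    """
    best_ch, best_dist = None, float("inf")
    for ch, bitmap in glyph_bitmaps.items():
        # Sum of squared differences over the whole cell,
        # not just a comparison of average brightness.
        dist = float(np.sum((tile - bitmap) ** 2))
        if dist < best_dist:
            best_ch, best_dist = ch, dist
    return best_ch
```

Precomputing the bitmaps once and vectorizing the distance over all glyphs keeps even this exhaustive version usable for offline rendering.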
This is a well-known problem, because early computers with monitors could only display characters.
At some point we were able to define custom character bitmaps, but not enough of them to cover the entire screen, so the problem became more complex: which new characters do you create to reproduce an image optimally?
And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.
Next up: proportional fonts and font weights?
There is already a C library that does real-time ASCII rendering using decision trees:
GitHub: https://github.com/symisc/ascii_art/blob/master/README.md Docs: https://pixlab.io/art