
embedding-shape yesterday at 11:52 PM

I guess they do "see" but more like "see an explanation of the image", not "see" as in experience visually. They're really bad at details and precision when it comes to images, and they don't understand things like visual hierarchy, affordances, and other fundamental design concepts. Most of them can describe those things in words, but they don't seem to fundamentally grasp them when asked to build UIs, even when you mention these concepts explicitly.

Try doing 100% vibe-coding with an agent, loosely specifying what kind of application you want, and observe how the resulting UI and UX are a complete mess unless you specify exactly how they should work in practice.

If they actually had spatial understanding, together with being able to visually experience images, then they'd probably be able to build proper UI/UX from the get-go. But since they can only describe what those things are, you end up with the messes even the current SOTAs produce.


Replies

spongebobstoes today at 12:57 AM

the models can accept images directly as tokens. not a description of an image, the actual image itself.

yes, the visual intelligence is limited, but they do actually have vision capabilities.

stingraycharles today at 5:12 AM

> I guess they do "see" but more like "see an explanation of the image", not "see" as in experience visually.

Images are tokenized and fed to the exact same model; they can "visually inspect" images, e.g. "find the two differences between these two images" and "Where's Waldo"-style tasks.

So your mental model that they see descriptions is inaccurate.
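
To make the mechanism concrete, here's a minimal sketch of passing two images alongside text to a vision-capable model via the OpenAI Python SDK. The model name and image URLs are placeholder assumptions, not from the thread; other multimodal providers expose equivalent endpoints.

    # Minimal sketch: ask a vision-capable model to compare two images.
    # Model name and image URLs are placeholders, not from the thread.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Find the differences between these two images."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/a.png"}},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/b.png"}},
            ],
        }],
    )
    print(response.choices[0].message.content)

The images are run through an image encoder and appended to the same token sequence the model attends over, rather than being turned into a text description first.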

marcus_holmes today at 12:15 AM

This is my experience too, and it applies to all the other aspects of the application, not just the visuals. If you only loosely describe it, it comes out as a mess. You have to know what you're building to get the LLM to actually build something decent. I don't think this is purely a visual or design constraint.
