Hacker News

Meaning Machine – Visualize how LLMs break down and simulate meaning

98 points | by jdspiral | yesterday at 10:55 PM | 23 comments

Comments

pamelafox · today at 7:40 AM

This looks like a fun visualization of various NLP techniques for parsing sentences, but as far as I understand, only the tokenization step is relevant to LLMs. Perhaps it's just mis-titled?
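
For what it's worth, the tokenization step that does carry over to LLMs looks quite different from classic word splitting. A minimal sketch using the Hugging Face GPT-2 tokenizer (my choice for illustration, not necessarily what the app uses):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    ids = tok.encode("Meaning Machine simulates meaning")
    print(ids)                             # integer ids the model actually consumes
    print(tok.convert_ids_to_tokens(ids))  # the subword pieces behind those ids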

I actually worked on a similar tree viewer as part of an NLP project back in 2005, in college, but that was for rule-based machine translation systems. Chapter 4 in the final report: https://www.researchgate.net/profile/Declan-Groves/publicati...

jdspiral · yesterday at 10:55 PM

I built a tool called Meaning Machine to let you see how language models "read" your words.

It walks through the core stages — tokenization, POS tagging, dependency parsing, embeddings — and visualizes how meaning gets fragmented and simulated along the way.

Built with Streamlit, spaCy, BERT, and Plotly. It’s fast, interactive, and aimed at anyone curious about how LLMs turn your sentence into structured data.
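
For readers who want the gist without running the app, the spaCy side of those stages boils down to something like this minimal sketch (assuming the standard en_core_web_sm pipeline, not the app's exact code):

    import spacy

    nlp = spacy.load("en_core_web_sm")  # small English pipeline: tagger + parser
    doc = nlp("The dog chased the ball")

    for token in doc:
        # surface form, part-of-speech tag, dependency label, syntactic head
        print(token.text, token.pos_, token.dep_, token.head.text)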

Would love thoughts and feedback from the HN crowd — especially devs, linguists, or anyone working with or thinking about NLP systems.

GitHub: https://github.com/jdspiral/meaning-machine
Live Demo: https://meaning-machine.streamlit.app

wrs · today at 3:48 AM

Is there evidence that modern LLMs identify parts of speech in an observable way? This explanation sounds more like how we did it in the 90s before deep learning took over.
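
The closest thing I know of is probing classifiers: freeze the model, then train a small classifier to predict POS tags from its hidden states. A hypothetical sketch (the .npy files stand in for a real activation-extraction pipeline):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # hidden: (n_tokens, d_model) activations from one frozen LM layer
    # labels: gold POS tags aligned to those tokens
    hidden = np.load("hidden_states.npy")  # hypothetical precomputed files
    labels = np.load("pos_labels.npy")

    probe = LogisticRegression(max_iter=1000).fit(hidden, labels)
    # a real probe would score a held-out split; high accuracy there means POS
    # is linearly decodable even though the model never tags it explicitly
    print(probe.score(hidden, labels))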

dbacar · today at 6:04 AM

:) kinda works I guess. "ValueError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app)."

jdspiral · today at 1:55 PM

I've taken the feedback on board and realized the name and title were misleading. I'm updating the project accordingly.

https://tokenizer-machine.streamlit.app/

georgewsinger · yesterday at 11:53 PM

Is this really how SOTA LLMs parse our queries? To what extent is this a simplified representation of what they really "see"?

gitroom · today at 5:12 AM

Nice seeing tools that show how models break stuff down. Tbh I still get kinda lost with all the embeddings and layers, but it's wild to peek under the hood like this.

dz0707 · today at 3:53 AM

I'm wondering if this could turn into some kind of prompt tuning tool - like detecting weak or undesired relationships, "blur" in embeddings, etc.
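
The crudest version of "detect weak relationships" is just cosine similarity between vectors. A sketch of the idea with spaCy's en_core_web_md model, which ships with static word vectors (my illustration, not an app feature):

    import spacy

    nlp = spacy.load("en_core_web_md")  # md model includes word vectors
    dog, puppy, carburetor = nlp("dog puppy carburetor")

    print(dog.similarity(puppy))       # high: strongly related concepts
    print(dog.similarity(carburetor))  # low: a "weak" relationship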

synapsomorphy · today at 4:10 AM

This is somewhat disingenuous IMO. Language models do NOT explicitly tag parts of speech, or construct grammatical trees of relationships between words [1].

It also feels like motivated reasoning to make them seem dumb, when in reality we mostly have no clue what algorithms are running inside LLMs.

> When you or I say "dog", we might recall the feeling of fur, the sound of barking [..] But when a model sees "dog", it sees a vector of numbers

When o3 or Gemini sees "dog", it might recall the feeling of fur, the sound of barking [..] But when a human hears "dog", it's just electrical impulses in neurons.
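
Concretely, "a vector of numbers" here means something like this minimal sketch using bert-base-uncased via the Hugging Face transformers API (the model the app says it uses):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("dog", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    # contextual embedding of the "dog" token (index 0 is the [CLS] marker)
    vec = out.last_hidden_state[0, 1]
    print(vec.shape)  # torch.Size([768])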

The stochastic parrot argument has been had a million times over and this doesn't feel like a substantial contribution. If you think vectors of numbers can never be true meaning then that means either (a) no amount of silicon can ever make a perfect simulation of a human brain, or (b) a perfectly simulated brain would not actually think or feel. Both seem very unlikely to me.

There are much better resources out there if you want to learn our best current understanding of what algorithms run inside LLMs [2][3]; it's a whole field called mechanistic interpretability, and it's way, way, way more complicated than tagging parts of speech.

[1] Maybe attention learns something like this, but it's doing a whole lot more than just that.

[2] https://transformer-circuits.pub/2025/attribution-graphs/bio...

[3] https://transformer-circuits.pub/2022/toy_model/index.html

P.S. The explainer has em dashes aplenty. I strongly prefer to see disclaimers (even if it's a losing battle) when LLMs are used heavily for writing, especially for more technical topics like this.

igravious · today at 12:10 PM

Completely misleading title/description

Der_Einzige · today at 3:42 AM

UMAP is far superior to PCA for these kinds of visualizations, and a fast GPU version has been available in cuML for a while.
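
If you want to try the swap, a minimal sketch with the CPU umap-learn package (cuML's GPU version lives at cuml.manifold.UMAP with a similar fit_transform interface; the random matrix stands in for real token embeddings):

    import numpy as np
    import umap  # pip install umap-learn
    from sklearn.decomposition import PCA

    embeddings = np.random.rand(200, 768).astype(np.float32)  # placeholder vectors

    pca_2d = PCA(n_components=2).fit_transform(embeddings)         # linear projection
    umap_2d = umap.UMAP(n_components=2).fit_transform(embeddings)  # nonlinear, keeps local structure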

sherdil2022 · today at 1:15 AM

Great job! Do you have any plans to visualize/explain how machine translation - between human languages - works?

XTXinverseXTY · yesterday at 11:50 PM

[flagged]