Hacker News

drakeballew · today at 12:20 AM · 4 replies

This is a beautiful piece of work. The actual data or outputs seem to be more or less... trash? Maybe too strong a word. But perhaps you are outsourcing too much critical thought to a statistical model. We are all guilty of it. But some of these are egregious, obviously referential LLM slop. The world has more going on than whatever these models seem to believe.

Edit/update: if you are looking for the phantom thread between texts, believe me that an LLM cannot achieve it. I have interrogated the most advanced models for hours, and they cannot do the task to any sort of satisfactory end, while a smoked-out, half-asleep college freshman could. The models don't have sufficient capacity... yet.


Replies

liqilin1567 · today at 2:53 AM

When I saw that the trail goes through just one word, like "Us/Them" or "fictions", I thought it might be more useful if the trail went through concepts instead.

rtgfhyuj · today at 7:01 AM

Give it a more thorough look, maybe?

https://trails.pieterma.es/trail/collective-brain/ is great

baxtr · today at 1:40 PM

I checked 2-3 trails and have to agree.

Take, for example, the OODA loop. How are the connections made here of any use? The words seem semantically related, but the concepts are not. And even if they are, so what?

I am missing the "so what."

Now imagine a human had read all these books. They would have come up with something new; I'm pretty sure of that.

https://trails.pieterma.es/trail/tempo-gradient/

what-the-grump · today at 2:10 AM

Build a RAG system over a significant amount of text, extract it by keyword: topic, place, date, name, etc.

… then realize that it's nonsense, and that the LLM is not smart enough to figure out much without a reranker and a ton of supporting technology telling it what to do with the data.

You can run any vector query against a RAG index and you are guaranteed a response, even when the retrieved chunks are entirely unrelated. A sketch of that failure mode is below.
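A minimal sketch of the point, assuming a toy corpus and a hypothetical stand-in embedding function (hashed trigrams, so it runs without a model; nothing here is the site's actual pipeline): top-k nearest-neighbor retrieval always returns k chunks no matter how weak the match, so a similarity floor or a reranker has to do the actual filtering.

    import hashlib
    import math

    def embed(text: str, dim: int = 64) -> list[float]:
        # Stand-in for a real embedding model: hash character
        # trigrams into a fixed-size unit vector. Demo only.
        vec = [0.0] * dim
        for i in range(len(text) - 2):
            h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
            vec[h % dim] += 1.0
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        return [x / norm for x in vec]

    def cosine(a: list[float], b: list[float]) -> float:
        # Vectors are already normalized, so the dot product
        # is the cosine similarity.
        return sum(x * y for x, y in zip(a, b))

    chunks = [
        "The OODA loop describes observe, orient, decide, act.",
        "Recipe for sourdough bread with a long cold ferment.",
        "Quarterly sales figures for the northeast region.",
    ]
    index = [(c, embed(c)) for c in chunks]

    query = "phantom thread between texts"
    q = embed(query)

    # Top-k retrieval: this ALWAYS returns k chunks,
    # relevant or not. That's the "guaranteed response".
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    for text, vec in ranked[:2]:
        print(f"{cosine(q, vec):.3f}  {text}")

    # A similarity floor (or a reranker scoring query-chunk
    # pairs) is what actually filters the junk out.
    THRESHOLD = 0.3  # arbitrary here; would be tuned in practice
    kept = [t for t, v in ranked if cosine(q, v) >= THRESHOLD]
    print("kept after threshold:", len(kept))

In practice the thresholding step is usually replaced by a cross-encoder reranker that scores each (query, chunk) pair, which is the extra machinery the comment is pointing at: without it, low-similarity chunks flow straight into the prompt.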
