The “Hacker News - Complete Archive” dataset on Hugging Face [1] recently popped up here. “The data is stored as monthly Parquet files sorted by item ID, making it straightforward to query with DuckDB, load with the datasets library, or process with any tool that reads Parquet.”
Out of curiosity, I tinkered with it using Claude to look for trends and patterns (I did find a few embarrassing things about myself!).
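For example, a minimal DuckDB sketch of the kind of query I mean (assuming the monthly files are downloaded into a local hn/ directory and that rows carry the HN API's usual type and time fields; the column names are my assumption, not checked against the dataset's schema):

    import duckdb

    # Count comments per month across all monthly Parquet files.
    con = duckdb.connect()
    rows = con.execute("""
        SELECT strftime(to_timestamp("time"), '%Y-%m') AS month,
               count(*) AS comments
        FROM read_parquet('hn/*.parquet')
        WHERE "type" = 'comment'
        GROUP BY month
        ORDER BY month
    """).fetchall()

    for month, n in rows:
        print(month, n)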
I don't quite understand how Modolap differs from just asking an AI to use any other OLAP engine. Both your website and the GitHub README just emphasise that it's idiosyncratic and your personal approach, without explaining what that is or why anyone should care.
I'm kind of surprised that postgres was dominated by mongodb quite that badly back in the day. I remember the mongo fever, but I always thought postgres held a reasonable market share. I guess its competition back then was the other SQL DBs; MySQL was still viable.
Nobody who actually codes in that language ever calls it 'Golang'.
That last chart, the one with average comment length, shows a clear downward trend, especially in recent months. I wonder why that is.
When searching for references to Go, what does it actually look for? "Go" is a relatively common word, and I hardly see anyone referring to it as Golang.
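One plausible way to do that matching (purely a guess at an approach, not what the analysis actually used) is a case-sensitive word-boundary regex, which still can't tell the language apart from the plain verb:

    import re

    # Case-sensitive word boundary: catches "written in Go",
    # but also the imperative verb in "Go read the docs".
    go = re.compile(r"\bGo\b")
    golang = re.compile(r"\bgolang\b", re.IGNORECASE)

    print(bool(go.search("I rewrote the service in Go")))   # True
    print(bool(go.search("go fast and break things")))      # False (lowercase)
    print(bool(go.search("Go read the docs")))              # True: a false positive
    print(bool(golang.search("Golang is fine, actually")))  # True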
5% of all comments mention Claude Code?
Am I reading that right?
I really love Codex. The price-to-value ratio compared to Claude Code is, at least in my opinion, much better.
Do not estimate/plot DAUs/MAUs; it's not a pretty picture :'(
HN data is open? Under what conditions is it distributed?
I've done this kind of thing many times with codex and sqlite, and it works very well. It's one prompt that looks something like this:
- inspect and understand the downloaded data in directory /path/..., then come up with an sqlite data model for doing detailed analytics and ingest everything into an sqlite db in data.sqlite, and document the model in model.md.
Then you can query the database ad hoc pretty easily with codex prompts (and also generate PDF graphs as needed).
I typically use the highest reasoning level for the initial prompt, and as I get deeper into the data, continuously improve on the model, indexes, etc., and just have codex handle any data migration.
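For flavor, a hypothetical sketch of the kind of ingest script codex tends to produce for me (the table name, column names, and file path here are made up, and it assumes the source data is Parquet):

    import sqlite3
    import pandas as pd

    # Load one monthly Parquet file and append it to the items table.
    # (Nested columns, e.g. lists of child IDs, would need flattening first.)
    df = pd.read_parquet("hn/2024-01.parquet")  # hypothetical filename
    con = sqlite3.connect("data.sqlite")
    df.to_sql("items", con, if_exists="append", index=False)

    # Indexes are what keep the later ad hoc analytic prompts cheap.
    con.execute("CREATE INDEX IF NOT EXISTS idx_items_type ON items(type)")
    con.execute("CREATE INDEX IF NOT EXISTS idx_items_time ON items(time)")
    con.commit()
    con.close()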