> it's hard to believe that HN users would be tired of LLM-related news.
Not hard to believe at all. While I don't flag any posts, I have no interest in LLM-related content.
I also actively use AI tools, btw. It's just tiring seeing everything with an AI suffix, including monitors.
I wish Hacker News had filters: if a post is about LLMs, AI, or other hyped tech, hide it.
Whoever finds the next interesting trend to talk about after AI is gonna get a lot of upvotes.
> tiring seeing everything with AI suffix
Reminds me of when everything was e-something. Then i-something. Then net-something. Then my-something. Then cyber-something.
You can tell the age of a tech product by which naming trend it attached to itself.
AI, politics, and discussing how HN isn't what it used to be. That's all that's here now. HN isn't what it used to be.
Looks like the majority of it is all politics and LLMs. I think we're all collectively tired of both and want something 'interesting' to post for once.
It’s tiring for me because it seems like everyone is just spitting mad about AI, and at every opportunity they breathlessly make sure to let us all know how useless AI is, and how they are indeed the one true programmer with no need for such base and depraved additions to their workflow. There they are, standing (or maybe hunching over?) bold and proud, on the shores of Algorithmia where no LLM could despoil that one true paragon of software engineering, as if the Platonic forms themselves had deigned to come down from the realm of legend merely to demonstrate to us mortals how software ought truly to be written.
Anyway, I think AI is pretty neat and use it every day.
I like significant LLM news and even well-justified opinions.
What I don’t like is seeing essentially the same LLM opinion and justification again and again. This happens with both pro-AI and anti-AI opinions, and some of the justifications (on both sides) are poor. For example, I don’t want to read “LLMs have improved my productivity so much!” without evidence; show me a mostly AI-generated program and its code, and explain the (AI-augmented) development process.

On the other side, I’ve seen the “LLM inevitablism” argument multiple times, and I really don’t agree with any of it. It ignores that LLMs are useful (to some extent), so they’ll probably be part of the future no matter what an average reader does; and if LLMs aren’t useful enough to replace everyone and everything (currently they aren’t), they won’t be all of the future, which even the people claiming inevitability concede. And to those who do claim that future LLMs will do everything, you can point to the limits of current LLMs, and to the CEOs of AI companies who, even in their position, are lowering expectations.