Hacker News

somewhatgoated · today at 11:53 AM

Rules and consequences seem to apply to humans in a similar way that prompts and harnesses govern LLMs. The more power a human possesses, the less they are governed by these restraints. This doesn't apply to LLMs, so in that respect at least they are an improvement. But yeah, we can't really punish or inflict pain on them - this seems like a problem.


Replies

steveBK123 · today at 12:28 PM

I think a simpler model is variety.

There are billions of people, you can interview/hire/fire until you get the right match.

There are maybe 2 frontier LLM providers. 5 if you're more generous, or okay with trailing edge.

Everyone thought OpenAI was great, until Claude got better in Q1 and they switched to Anthropic, and then Codex got better and a good chunk moved back to OpenAI. Seems kind of binary currently.

handoflixue · today at 11:54 AM

Why does it matter if you can inflict pain on them? Is that normal and acceptable in your line of work?
