Hacker News

squigz · yesterday at 4:30 AM

Why are we expecting an LLM to make moral choices?


Replies

orbital-decay · yesterday at 4:43 AM

The biases and the resulting choices are determined by the developers and by the uncontrolled part of the dataset (you can't curate everything), not by the model. "Alignment" is a feel-good strawman invented by AI ethicists, as are "harm" and many other such terms. There are no spherical human values in a vacuum to align the model with; they're simply projecting their own values onto everyone else. Which is fine as long as you agree with all of them.

dalemhurley · yesterday at 7:33 AM

Why are the labs making choices about what adults can read? LLMs still refuse to swear at times.

lynx97 · yesterday at 12:00 PM

They don't, or they wouldn't. Their owners make these choices for us, which is patronising at the very least. Blind users can't even have mildly sexy photos described, let alone use published photos to pick a sex worker in a country where that is legal. That's just one example; there are a lot more.
