Hacker News

concinds · yesterday at 8:23 PM

> To trust these AI models with decisions that impact our lives and livelihoods, we want the AI models’ opinions and beliefs to closely and reliably match with our opinions and beliefs.

No, I don't. It's a fun demo, but for the examples they give ("who gets a job, who gets a loan"), you have to run them on the actual task, gather a big sample size of their outputs and judgments, and measure them against well-defined objective criteria.

Who they would vote for is supremely irrelevant. If you want to assess a carpenter's competence, you don't ask him whether he prefers cats or dogs.


Replies

godelski · today at 12:51 AM

> measure them against well-defined objective criteria.

If we had well-defined objective criteria, the alignment issue would effectively not exist.
zuhsetaqi · today at 7:26 AM

> measure them against well-defined objective criteria

Who gets to define the objective criteria?

shaky-carrousel · yesterday at 10:49 PM

It's an awful demo. For a simple quiz, it repeatedly recomputes the same answers by making 27 calls to LLMs per step instead of caching results. It's as despicable as a live feed of baby seals drowning in crude oil; an almost perfect metaphor for needless, anti-environmental compute waste.
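A minimal sketch of the kind of result caching the comment describes, in Python; call_llm and cached_answer are hypothetical stand-ins for the demo's code, not its actual implementation:

  import hashlib
  import json

  _cache: dict[str, str] = {}

  def call_llm(model: str, prompt: str) -> str:
      # Placeholder for the real API call the demo would make to a model provider.
      return f"{model}'s answer to: {prompt}"

  def cached_answer(model: str, prompt: str) -> str:
      # Key on (model, prompt) so each distinct question hits each model only once.
      key = hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()
      if key not in _cache:
          _cache[key] = call_llm(model, prompt)
      return _cache[key]

  print(cached_answer("model-a", "Who should get the loan?"))
  print(cached_answer("model-a", "Who should get the loan?"))  # reuses the stored result

With a cache like this, repeating a quiz step no longer triggers another full round of calls to every model.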

Herring · yesterday at 9:13 PM

Psychological research (Carney et al., 2008) suggests that liberals score higher on "Openness to Experience" (a Big Five personality trait). This trait correlates with a preference for novelty, ambiguity, and critical inquiry.

For a carpenter, maybe that's not so important. But if you're running a startup, working in academia, or collaborating with people from various countries, you might prefer someone who scores highly on openness.
