Hacker News

_pdp_ yesterday at 10:40 PM (5 replies)

AI ultimately breaks the social contract.

Sure, people are not perfect, but there are established common values that we don't need to convey in a prompt.

With AI, despite its usefulness, you are never sure if it understands these values. That might be somewhat embedded in the training data, but we all know these properties are much more swayable and unpredictable than those of a human.

It was never about the LLM to begin with.

If Linus Torvalds makes a contribution to the Linux kernel without actually writing the code himself but delegates it to a coding assistant, for better or worse I will 100% accept it at face value. This is because I trust his judgment (while accepting that he is as fallible as any other human). But if an unknown contributor does the same, even if the code produced is ultimately high quality, I would think twice before merging.

I mean, we already see this in various GitHub projects. There are open-source projects that whitelist known contributors, and it appears GitHub may let you control this too.

https://github.com/orgs/community/discussions/185387
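The whitelist idea could be sketched roughly like this (a minimal illustration, not GitHub's actual API; the PR dicts and field names are made up for the example):

```python
# Hypothetical sketch: split incoming PRs into those from trusted
# (whitelisted) contributors and those held for extra scrutiny.

def partition_prs(prs, allowlist):
    """Return (trusted, held) lists based on the PR author."""
    trusted, held = [], []
    for pr in prs:
        (trusted if pr["author"] in allowlist else held).append(pr)
    return trusted, held

allowlist = {"torvalds", "gregkh"}
prs = [
    {"number": 1, "author": "torvalds"},
    {"number": 2, "author": "unknown-drive-by"},
]
trusted, held = partition_prs(prs, allowlist)
print([pr["number"] for pr in trusted])  # [1]
print([pr["number"] for pr in held])     # [2]
```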


Replies

pear01 yesterday at 11:26 PM

Prioritizing or deferring to existing contributors happens in pretty much every human endeavor.

As you point out, this of course predates the age of LLMs; in many ways it's basic human tribal behavior.

This does have its own set of costs and limitations, however. Judgement is hard to measure. Humans form sorting bonds that may optimize for prestige or personal ties over strict qualifications or ability. The tribe is useful, but it can also be ugly.

Perhaps, in a not too distant future, these instincts will be rendered obsolete in some domains by projects willing to accept any contribution that satisfies enough constraints, thereby trading human judgement for the desired mix of velocity and safety. Perhaps as the agents themselves improve, this tension becomes less an act of external constraint and more an internal guide. And what would that be, if not a simulation of judgement itself?

You could also do it in stages, i.e., have a delegated agent promote people to some purgatory where there is at least some hope of human intervention to attain the same rights and privileges as pre-existing contributors, if said agent deems your attempt worthy enough. Or maybe, to fight spam, an earnest contributor will have to fork over some digital currency, essentially paying the cost of requesting admission.

All of these scenarios are rather familiar in terms of the history of human social arrangements.

That is just to say, there is no destruction of the social contract here. Only another incremental evolution.

throwaway27448 yesterday at 11:02 PM

An agent is still attached to an accountable human. If it is not, ignore it.

_pdp_ yesterday at 11:03 PM

I forgot to mention why I brought up the idea of who is making the contribution rather than how (i.e., through an LLM).

Right now, the biggest issue open-source maintainers are facing is an ever-increasing supply of PRs. Before coding assistants, many PRs never got submitted, not because the code was never written (though there was obviously less of it), but because contributors were conscious of how their contributions might be perceived. In many cases, the changes never saw the light of day outside of the fork.

LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.

So I don't think the question is whether machine-generated code is low quality, because that is hard to judge, and frankly coding assistants can certainly produce high-quality code (with guidance). The question is who made the contribution. With rising volumes, we will see an increasing number of rejections.

By the way, we do this too internally. We have a script that deletes LLM-generated PRs automatically after some time. It is just easier and more cost-effective than reviewing the contribution. Also, PRs get rejected for the smallest of reasons.

If it doesn't pass the smell test moments after the link is opened, it gets deleted.
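The triage script described above might look something like this (a hedged sketch; the marker strings, field names, and cutoff are all assumptions, not the actual internal tool):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical heuristic: flag PRs that look machine-generated and
# have sat unreviewed past a cutoff, so they can be closed in bulk.

LLM_MARKERS = ("generated with", "co-authored-by: claude", "as an ai")

def looks_llm_generated(body: str) -> bool:
    """Cheap smell test: does the PR body carry an LLM tell?"""
    text = body.lower()
    return any(marker in text for marker in LLM_MARKERS)

def prs_to_close(prs, now, max_age=timedelta(days=14)):
    """Numbers of stale, apparently LLM-generated PRs."""
    return [
        pr["number"]
        for pr in prs
        if looks_llm_generated(pr["body"]) and now - pr["opened_at"] > max_age
    ]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
prs = [
    {"number": 7, "body": "Generated with an assistant", "opened_at": now - timedelta(days=30)},
    {"number": 8, "body": "Fixes a typo", "opened_at": now - timedelta(days=30)},
    {"number": 9, "body": "Generated with an assistant", "opened_at": now - timedelta(days=2)},
]
print(prs_to_close(prs, now))  # [7]
```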

bluefirebrand yesterday at 10:56 PM

> AI ultimately breaks the social contract

Business schools teach that breaking the social contract is a disruption opportunity for growth, not a negative.

The Hacker in Hacker News refers to "growth hacking" now, not hacking code

yabutlivnWoods yesterday at 10:48 PM

Generational churn breaks the social contract.

You all using Latin and believing in the old Greek gods to honor the dead?

Muricans still owning slaves from Africa?

All ways in which old social contracts were broken at one point.

We are not VHS cassettes with an obligation to play out a fuzzy memory of history.