Hacker News

dns_snek · today at 2:13 PM

For the millionth time, the reason we have copyright in the first place is to encourage creation of original creative works. This is clearly stated in the US constitution (and similar phrasing is found in the relevant legal texts of other jurisdictions).

You can apply obsolete legal tests that have been used to enforce this principle all day long, but the central question remains: Does generative AI encourage creation of original creative works?

If the answer is "no", which it clearly is, then whatever laws and legal tests exist to enforce IP rights need to be amended - or the constitution does.


Replies

Saline9515 · today at 2:49 PM

The problem with your framing is that pre-copyright history is a good example of why we don't need copyright. The US enforced copyright very late, particularly for foreign works (much as China did until recently), which helped make it the nation whose citizens read the most in the 19th century. That, in turn, fed the cultural explosion that followed.

Free reproduction of "original creative works" fuels original creation, too, while tight monopolies over intellectual works and fictional universes have led to decreased creativity around them.

See the dire state of the US film industry, for example. Or the steady stream of bizarre lawsuits, such as the one over "Bitter Sweet Symphony".

fc417fc802 · today at 3:09 PM

I agree, but I think there's a second relevant question as well: does generative AI produce original creative works of its own? Obviously the goal here should be maximizing societal benefit: specifically, encouraging the creation of works that we (as a group) find directly useful or otherwise desirable. At least to my mind, human exceptionalism is an explicit non-goal.

I'm already finding the ability of LLMs to synthesize useful descriptions across disparate sources of raw data immensely valuable. If that puts (for example) scientific textbook authors out of a job, I'm not at all sure that would prove to be a detriment to society on the whole. I'm fairly certain that LLMs are already doing better at meeting the needs of the reader than most of the predatory electronic textbook models I was exposed to in university.

> If the answer is "no", which it clearly is,

Why are you so certain of this? Generative AI clearly breaks many (most?) of the existing revenue models to at least some extent. But we don't care about the existing revenue models per se. What we care about is long-term sustainable creation across society as a whole. So: are consumer needs being met, and in a sustainable manner? Clearly generative AI is increasingly capable of the former; it's the latter that requires examination.