
kennywinker today at 3:57 AM | 4 replies

They took a base model, so something trained on stolen work, and then added a veneer of non-stolen work. I too would be skeptical of their legal position.


Replies

iso-logi today at 5:22 AM

I believe a service like this could succeed if the initial base model wasn't Stable Diffusion and wasn't trained on internet scrapes gathered without copyright permission.

Their solution basically just amounts to "ethically sourced styles," which still carries all the red tape of a normal text2image model, because the majority of the data is still unapproved for use in an AI model.

Businesses didn't want to get wrapped up in a pseudolegal model that really has no better legal standing than base SD.

protocolture today at 5:25 AM

They took a base model, one trained on but not reproducing others' work, so entirely fair use with no theft, and then tried to tweak it so it could make money for an artist.

Kim_Bruning today at 5:05 AM

Cite one legal case where an AI company trained on a particular work and the judge ruled that they, quote, "stole it," unquote.

ocdtrekkie today at 4:57 AM

If anything, the legal position is probably the opposite: the law is leaning toward AI training being transformative/fair use and AI-generated content getting no copyright protection at all. So a service that pays artists for style rips was probably a net positive for artists, because it may well end up outright legal for gen AI to rip off artists' styles wholesale.
