Honestly this article sounds like someone is unhappy that AI isn’t being deployed/developed “the way I feel it should be done”.
Talent changing companies is bad. Companies making money to pay for the next training run is bad. Consumers getting products they want is bad.
In the author’s view, AI should be advanced in a research lab by altruistic researchers and given directly to other altruistic researchers to advance humanity. It definitely shouldn’t be used by us common folk for fun and personal productivity.
This. The whining about VEO 3 and "AI being used to create addictive products" really shows that. It's a text-to-video technology. The company can't do anything about people using it to generate "low quality content", the same way internet companies aren't at fault that large amounts of the web are scams or similar junk.
I feel I could argue the counterpoint. Hijacking the pathways of the human brain that lead to addictive behaviour has the potential to utterly ruin people's lives. So talking about it, if you have good intentions, seems like a thing anyone with their heart in the right place would do.
Take VEO3 and YouTube integration as an example:
Google made VEO3, and YouTube has Shorts. Google is aware of the data showing addictive behaviour (e.g. a person sitting down at 11pm, scrolling Shorts for 3 hours, getting 5 hours of sleep, then doing Shorts on the bus on the way to work). I am sure there are other negative patterns, but this is one I can confirm from a friend.
If you have data showing that your distribution platform is already being used to an excessive degree, and you then create a powerful new AI content generator to feed it, is that good for the users?