In a recent filing with the UK Parliament, OpenAI stated: “It would be impossible to train today’s leading AI models without using copyrighted materials. Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.” Then, more recently, came the claim that “if you were to do private negotiations for every piece of content that you need to train one of these models,” the models wouldn’t exist.

Inherent in these statements is the assumption that the world is demanding AI-created works, that they need to exist, and that somehow artists are standing in the way. There are 100 million music tracks on Spotify; do we really need more? There are 3,600 movies and 1,800 television shows on Netflix; do we really need more? Nova Southeastern University’s Copyright Officer, Stephen Carlisle, J.D., takes a look at the big picture and discovers that AI is just another bad business model proposed by big tech companies, who expect artists to subsidize their bad judgement.
I agree that the “fruits” of neural networks come from proletarian artists, because without them no content of this magnitude could exist. The stunning images created by AI rely inherently on the labor of artists, so their commodity-value, their visual appeal, is identical to the labor-value that was scraped into the training dataset. In layman’s terms: the artists made the art, and AI imagery looks good because it was taken from that art.