The article centers on the ethical implications of using artists' work to train AI models without their consent. It argues that this practice inflicts 'moral injury' on artists, the psychological harm that results when individuals are forced to act against their deeply held values. This framing goes beyond the legal arguments of copyright infringement and fair use.
The core argument is that forcing artists to contribute to AI development against their will causes moral injury, a harm comparable to that of war veterans traumatized by actions taken against their conscience, or medical professionals forced into ethical compromises. The article suggests that this psychological harm is a more compelling argument against the current practice than copyright concerns alone.
The piece challenges the claims that using copyrighted material for AI training is 'fair use' or a form of acceptable remixing. It argues that using an artist's work for AI, even if legally permissible, can still cause moral injury by undermining the artist's sense of creative agency and contributing to the broader 'enshittification' of culture.
The article situates the debate within a larger discussion of the ethics of AI development, questioning whether the benefits of unchecked progress outweigh the harms. It invokes the Luddites to make the point that opposition to AI isn't inherently anti-progress, but rather a concern over how a technology is deployed and who benefits from it.
The article concludes that artists should frame their opposition to AI art in terms of moral injury. This, it argues, is a more powerful and less easily dismissed argument than legal or economic considerations alone.