Generative AI tools like Midjourney are also using images of real people who never agreed to be used for such purposes. But they’re doing so at massive scale, and they’re profiting off of the use.
So just don’t use Midjourney or other for-profit AI tools? It’s not difficult to roll your own Stable Diffusion (which is open source) and train your own models if you want to be absolutely sure no one is profiting. It’s mostly Python anyway.
Even if you want to argue that someone is harmed by using a picture from a movie to represent your character, and even if you accept that argument as true, it is still far less harmful than systems that take these materials for profit and let users put an actor’s likeness into visual poses and scenarios the actor never performed.
I disagree. For one thing, if you’re using AI art for custom character purposes, then you are not utilizing the entirety of someone’s likeness (unless you’re using something specifically trained on that one person, I guess – but in that case, just don’t use that model). Otherwise, there would be no point in utilizing AI at all; just use a photo. No one will be able to tell whose images went into the model. Unlike with a photo PB, where it’s absolutely certain which person is involved.
Secondly, if the model is made for a for-profit system like Midjourney, then they already have the requisite rights and permissions. That’s part of what you’re paying for when you buy a license for Midjourney.
So utilizing a person’s picture wholesale without permission is a hell of a lot worse than using either licensed graphics or minute tokens of an aggregate that no one will recognize anyway.