Afrodite-Ohki posted at 7:28 AM Fri, 27 September 2024 - #4489787

pierremeu posted at 11:18 AM Thu, 26 September 2024 - #4489767
Because they borrow the model from someone else. Here you will create it yourself. I just want AI to give real texture to scenes. Nothing else. You don't seem to understand what I envision. I would like to give my Poser man or woman a real photographic look, and the same for nature. Turn your Poser scene into a real picture. But you have created the scene.
To give your Poser man or woman a photographic look, the AI you want would have to be trained on thousands of photographic images. Currently, all models that do this steal those images without their owners' consent.
This part has been repeated so often that nearly everyone believes it. First off, that is like saying you have stolen every bit of art you have ever seen. Even if your memory were perfect and you could reproduce every image you have ever seen with perfect fidelity, would you say that merely looking at something constitutes theft?
But go back to the AI models. There was a big deal about this last year when, while attempting to stoke fear over generative AI, a group was able to generate an image that matched an image in the training data almost exactly. It would be nearly impossible for this to happen by random chance. It is most likely that the group knew which image sets the AI had been trained on, knew that a specific image appeared in more than one of those sets, and knew how that image's content had been tagged. So, by inputting the tags that this specific image carried and then describing its contents and composition precisely, they were in fact able to recreate a strikingly similar image. But they were aided by knowing the image had been used in training, that it had been used more than once, and which tags it carried when constructing their prompt. For some, those factors do not matter, since the group did prove they could retrieve an image by constructing the right prompt.
But it's simply not possible for even a 10GB model to contain every detail of every image it has been trained on when the training set was hundreds of gigabytes or more.
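The storage argument can be made concrete with back-of-envelope arithmetic. The numbers below are illustrative assumptions, not figures from this thread: a 10GB model as mentioned above, a 500GB training set (within the "hundreds of gigabytes" range), and roughly 400 million images (the approximate scale of public datasets like LAION-400M).

```python
# Assumed, illustrative figures: 10 GB of model weights vs. a 500 GB
# training set of roughly 400 million images.
model_gb = 10
dataset_gb = 500
num_images = 400_000_000

# To "contain" the data, the weights would have to compress it this much:
ratio = dataset_gb / model_gb
print(f"Required lossless compression ratio: {ratio:.0f}:1")

# Equivalently, the average weight capacity available per training image:
bytes_per_image = (model_gb * 1024**3) / num_images
print(f"~{bytes_per_image:.0f} bytes of weight capacity per image")
```

Since photographic images are already compressed (JPEG), squeezing them a further 50:1, or storing a photo in a few dozen bytes, is not possible; whatever the model retains, it cannot be a pixel-level copy of every training image.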
Also, not all unlicensed use is theft. Fair use law, for example, allows the use of copyrighted characters and even the likenesses of real people for the purpose of parody. Mad Magazine can put E.T. or Captain America on their cover without infringing, because no reasonable person would confuse the parody with the original work. There is also the possibility that an artist is creating works for their own enjoyment and is not trying to profit from their creation in any way.
So while I share the concerns others have about how AI might be used, whether to compete with artists trying to make a living or to create works that could be mistaken for those of a specific artist, known or unknown, I don't think that should be a reason to limit the training data.
There is also the issue of companies such as Getty now owning (via buying other companies' libraries) images that were produced before these models even existed. We think of protecting artists' rights as noble, but so much of what we think we are protecting no longer belongs to those artists. Artists who allowed Marvel to own their work, for example, could do nothing to prevent Disney from acquiring the rights to that work by buying Marvel. It is companies like Disney, which now exist primarily to collect and monetize IP, that are most threatened by generative AI, and they want you to fear it too. It threatens their efforts to monopolize content creation.