
Can robot brains break laws human brains can't?


I think it is safe to say that almost everyone has heard about ChatGPT, DALL-E, and the handful of other new AI-driven services that generate content from human prompts. For those who haven't, one of the leaders in image generation, Stability.ai, is being sued by multiple groups for copyright infringement over its use of millions of online images in the process of teaching its AI to draw.


Getty Images sues Stability.ai


Crash course on how AI drawing works


The principle this specific AI system works on is quite interesting. An image is chosen and provided to the AI along with text describing what it displays. Over multiple iterations a small amount of noise is added to the image, and the AI reviews the noisy image, continually asserting that it is still an image of the original subject. This continues until the entire image is pure noise. The process is then reversed, with the AI slowly replacing noise with what it believes the image looked like. Since the starting noise is never the same, the AI does not generate quite the same result, many times generating something drastically different.


Think about a picture on an old analog TV. As the static increases you gradually lose the picture, but in your head you can still imagine what you're looking at. If you were to trace around the static-filled image and slowly fill in the pieces, you'd get a similar image, but inevitably you'd get some parts wrong.
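The forward half of that process can be sketched in a few lines. This is a toy model only: a real diffusion system works on large image tensors with a learned noise schedule, whereas here a "picture" is just a short list of brightness values and the schedule is a simple linear blend (both are assumptions made for illustration).

```python
import random

def add_noise(pixels, step, total_steps, rng):
    """Blend the image toward pure noise; t runs from 0 (clean) to 1 (all noise)."""
    t = step / total_steps
    return [(1 - t) * p + t * rng.gauss(0.0, 1.0) for p in pixels]

image = [0.1, 0.5, 0.9, 0.3]          # a tiny stand-in "image"
steps = 10
rng = random.Random(42)

# One snapshot per step: trajectory[0] is the clean image,
# trajectory[-1] is indistinguishable from pure static.
trajectory = [add_noise(image, s, steps, rng) for s in range(steps + 1)]
```

By the final step nothing of the original survives: rerunning the loop with a completely different starting image but the same random seed produces an identical final frame, which is why the reverse process has to invent a picture rather than recover one.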


To further complicate the process, if multiple prompts are given for the image, the AI still slowly replaces the noise, but does so in a way that it believes satisfies both requests at once. The system knows how to draw an ice cream cone, and it knows how to draw a dog licking a person's face. So when asked to draw a dog licking an ice cream cone, the noise is slowly replaced by both concepts until it arrives at a solution combining both images.
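A crude way to see how two prompts can steer the same denoising walk is to let each concept nudge the pixels toward its own target and then average the nudges. The "targets" below stand in for what a trained model has internalized; everything here (the names, the values, the equal-weight averaging rule) is an illustrative assumption, not Stability.ai's actual method.

```python
def nudge_toward(pixels, target, strength=0.1):
    """One denoising step for a single concept: move a little toward it."""
    return [p + strength * (t - p) for p, t in zip(pixels, target)]

def combined_step(pixels, targets, strength=0.1):
    """Blend the updates from several prompts equally."""
    nudged = [nudge_toward(pixels, t, strength) for t in targets]
    return [sum(vals) / len(vals) for vals in zip(*nudged)]

target_dog = [1.0, 0.0, 1.0]          # stand-in for "a dog"
target_cone = [0.0, 1.0, 1.0]         # stand-in for "an ice cream cone"

x = [0.5, 0.2, 0.8]                   # starting noise
for _ in range(200):
    x = combined_step(x, [target_dog, target_cone])
```

The walk settles on a compromise between the two concepts rather than on either one alone, which mirrors how a dog and an ice cream cone end up fused into a single scene.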


Rather than keeping a massive database of full images and directly processing them all on each request (see the LAION link below), the "learning" portion of the AI creates latent images which the system later uses when trying to create a concept out of noise. An easy way to think about this is that rather than keeping a picture of a face, the system keeps a few eyes, a few noses, mouths, etc. You know that they go together to make a face, so construction is just a matter of putting the pieces together.
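A hand-rolled analogy for that "latent" idea: store only a compressed summary of an image and rebuild an approximation on demand. Real latent diffusion uses a learned autoencoder over image tensors; the block-averaging below is only a stand-in chosen to make the idea concrete.

```python
def encode(pixels, factor=2):
    """Compress: keep only the average of each block of pixels."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

def decode(latent, factor=2):
    """Reconstruct: stretch each stored average back out to full size."""
    return [v for v in latent for _ in range(factor)]

image = [0.0, 0.2, 0.9, 1.0]
latent = encode(image)                # half the data
rebuilt = decode(latent)              # close to the original, but not it
```

The latent form is enough to redraw something resembling the source, yet the source itself is gone — which is exactly the tension the lawsuit turns on.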


So what is the issue?


Stability.ai fully acknowledges that some of the data used to create its latent images was copyrighted material. They used the LAION data set, which contains hundreds of millions of images from across the internet. The data set was created specifically for AI systems to learn from, allowing computers to say "yes, that is a picture of a cat." But with the introduction of AI drawing, the legality of using this data in some situations has come into question.


The copyright holders are suing on the claim that the AI contains copies of their material and that any work created from those copies should be considered derivative. Under copyright law, derivative works must be approved by the original copyright holder. If I write a story and you want to use my character in another story, you must first ask my permission, whether granted freely or under license. Since none of us who use the AI system are asking for their permission, what it generates would violate their copyright.


There is one situation, however, where works can be copied without falling under the rules of derivative work. If a work is transformative, that is, if the new work adds to the original in a way that creates a new expression, then the copyright holder's authorization is no longer required. Common examples are parodies (adding humor to poke fun at and comment on the original work) or response videos to others' videos on YouTube. Just last year a case came before the Supreme Court about Andy Warhol's use of a Prince image in one of his iconic repetitive silk screen works. Such works do not further the original artist's work, but rather use it as a prop in a new creative venture.


Can AI make something transformative?


This is an important question. If you look at the images the AI system used to learn to draw and compare them to the results, in most cases they look nothing alike. The training pictures of dogs look nothing like the generated dog eating ice cream. The AI took multiple concepts of what a dog looks like and created its own new version. If this AI were to recreate Warhol's Campbell's Soup Cans, it would generate what it computed to be "cans of soup" in "multiple stylistic renditions," but the result wouldn't look as if it had simply clipped some coupons and pasted the images together.


At the moment we do not, legally or technologically speaking, consider Artificial Intelligence to be equivalent to Natural Intelligence. But will the eventual ruling set a precedent for what is to come? If you as a human looked at a hundred images of dogs on Getty Images and then drew a picture of a dog, no one would consider it derivative unless you tried to make it look extremely similar to a specific image. In your brain you have memories of parts of all those images. Can you fully reconstruct one of the originals? Definitely not. But they exist in some form in your grey matter.


Should we consider the latent images stored in Stability.ai to be different from our own memories? As with most AI, the learning material is transformed into a set of data that the AI uses but that is in no way reminiscent of the source. Are the billions of bits calculated to represent a picture of a cat really a copy of those millions of training pictures? Do your fragmentary memories of seeing cat pictures make it illegal for you to ever draw a cat?


LAION: Large-scale Artificial Intelligence Open Network


$ published: 2023-01-25 14:47 $

$ tags: #legal, #tech $


-- CC-BY-4.0 jecxjo 2023-01-25


