AI Faces a Barrage of Legal Challenges

Jason Lomberg, North American Editor, PSD

Non-consensual images of Taylor Swift and a one-hour comedy special from the very deceased George Carlin are bringing the dark side of artificial intelligence into the public consciousness.

And this isn’t late-20th-century robo-apocalypse agita – for the moment, the biggest AI threat is a plethora of copyright and privacy violations that might be impossible to stop.

The Harvard Business Review summed it up rather succinctly in a piece entitled “Generative AI Has an Intellectual Property Problem.”

The generative AI platforms that are surprisingly adept at mimicking artistic styles (and photorealism) have billions of parameters and are trained by software poring over figurative mountains of images and text – some of which are copyrighted.

In a Northern District of California case, Andersen v. Stability AI et al., three artists alleged that an AI platform, Stable Diffusion, “downloaded or otherwise acquired copies of billions of copyrighted images without permission to create Stable Diffusion” in order to “train” the software.

And while Stability’s founder and CEO promised to go through proper legal channels for future versions of his software, he fully admitted that the version in question lacked fully licensed training images.

Despite all that, the court largely ruled in favor of the defendants, but the issue clearly isn’t going away.

Last year, a plaintiff with a bit more clout – stock photo provider Getty Images – also sued Stability, claiming it misused more than 12 million Getty photos to train its Stable Diffusion AI image-generation system.

More recently, explicit AI-generated Taylor Swift images prompted the White House Press Secretary to say that Congress "should take legislative action."

One of the most astonishing – and possibly super illegal – AI creations yet was an hour-long comedy special entitled “George Carlin: I’m Glad I’m Dead.” Released on YouTube (and quickly taken down), its creators promoted it as 60 minutes of “new” material, which of course it isn’t.

Of course, even if AI tools can legally train on copyrighted works, the bigger question is who owns these generative creations. To wit, are these creations sufficiently “transformative” to count as authorized “derivative works”?

Businesses found to be profiting from creations like this often don’t qualify under the existing “fair use doctrine.”

On the other hand, “criticism, commentary, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” often qualifies.

So which category does material like the George Carlin special fall under? What about media sources using AI tools that both train on copyrighted material and spit out creations almost indistinguishable from the original?

For better or worse, we’re about to witness a deluge of legal challenges that could determine the market value of artificial intelligence.