AI in Arts and Entertainment: A Double-Edged Sword

Legal expert Susan Abramovitch and Professor Juan Noguera discuss the impacts of AI on media, arts, and entertainment.
Image design by Connor Peck for DWF. All rights reserved.

Debating the Risks, Rewards, and Considerations of Using Generative Artificial Intelligence

By Susan Abramovitch, Partner and Head, Entertainment & Sports Law, Gowling WLG, Matt Hervey, Head of Artificial Intelligence Law, Gowling WLG, Juan Noguera, Assistant Professor, Rochester Institute of Technology, and Kentaro Toyama, Professor, University of Michigan


The Legal Complexities of AI

By Susan Abramovitch and Matt Hervey, Gowling WLG

From the AI-generated “deepfake” collaboration between Drake and The Weeknd to AI-enhanced virtual realities, AI's power is undeniable. In nearly every entertainment sector, developers have harnessed AI to enhance production efficiency and user experiences in awe-inducing ways. We are optimistic about the possibilities AI offers the arts and entertainment, while acknowledging the inevitable (and already occurring) disruption, which must be addressed from a legal perspective by governments and/or the courts. We need to identify (1) what is and is not known from a legal perspective and (2) the social and ethical implications of AI in the entertainment industries that require political consensus-making.

Groundbreaking technological developments inevitably create legal disruption; this has been proven again and again in the entertainment industries. Since these developments spur economic growth, enhance efficiencies, disrupt markets, and displace jobs, significant legal and ethical questions follow. The Writers Guild of America (WGA) strike illustrates how AI is shaking up the film and television industries. Both sides agree that the use of artificial intelligence is both beneficial and inevitable. However, they do not yet agree on the exact parameters of its use. Wherever the parties land, the outcome will help inform best practices in the entertainment industry and perhaps settle some debates over labor and employment more generally.

Ambiguities Around AI and Copyrights  

Questions surrounding generative AI and copyright are center stage in the intellectual property arena. Current generative AI models are trained on billions of copyrighted works. The question before many courts is whether the unauthorized use of such works to train generative AI models—and the works the models produce—amounts to copyright infringement. We anticipate these cases will focus on whether there is a statutory defense (in the US, the “fair use” defense) that allows AI to be trained on copyrighted works without permission. However, even if a decision is reached in one jurisdiction, uncertainty will persist because copyright law, and especially its exceptions to infringement, varies greatly between countries.

Generative AI can also mimic real people, raising questions of rights in their distinctive features such as a name, likeness, voice, or unique pose. In the United States, artists have claimed that the creators of a generative AI have profited off their distinct artistic style, infringing the artists’ publicity rights by advertising the power to produce images “in the style of” named artists. The extent and form of such rights are unharmonized between jurisdictions, relying variously on specific “personality” rights, constitutional and human rights, unfair competition, or passing off.

These uncertainties present potentially far-reaching liability risks for the entertainment industry. Consider a production company using AI-generated music in a television show. Unknown to the company, the music may infringe copyright works used to train the AI and, in most jurisdictions, AI outputs, such as the music for the show, are not protected by copyright. The company may be liable under agreements with third parties, such as distributors and broadcasters, if it has represented and warranted that the production and its underlying elements do not infringe third-party rights. If the company has misrepresented the chain of title in the music, its errors and omissions insurance policy could become vulnerable.

Who Is Responsible for Copyright Infringements?  

If AI-generated content infringes third-party rights, who should be held accountable? The individual who used the AI to generate the creative work? Or the company that created and trained the AI model? What if an AI-generated song promotes violence and hate crimes through memorable lyrics and a catchy tune? Should the user and/or the artificial intelligence company be held liable? Should lawmakers impose regulations that limit specific AI outputs? If so, where do we draw the line on those restrictions? How should the entertainment industry, which has always promoted freedom of expression, balance this core value with regulating harmful AI-generated content?

These are only a few of the questions that require immediate legal and political attention. While we remain incredibly optimistic about the potential use of AI in the entertainment industry, we believe that our ability to capitalize on this technology is contingent on our capacity to manage the risks. This will require education, stakeholder engagement, transparency, economic analysis, ethical debate, and cross-party political will. Unfortunately, until we are able to reach legal, political, and social consensus on these issues, the use of generative AI in entertainment content will continue to expose those playing in the AI sandbox to legal unknowns and risks.


We’ve Been Here Before. Or Have We?

By Juan Noguera, Assistant Professor, Rochester Institute of Technology

Throughout history, every major technological shift, from the invention of the printing press to the rise of the internet, has been met with a healthy dose of skepticism and trepidation. For example, the invention of photography was perceived by traditional painters as a threat to their livelihood, and the invention of the sewing machine stirred great anxiety among tailors.

Today’s rapidly evolving AI landscape evokes this same familiar spectrum of dread and excitement. As others in this debate rightly point out, the technology raises thorny legal and ethical questions, especially in the world of arts and entertainment. But as an educator and practitioner in the world of Industrial Design, I firmly believe this is not our first rodeo, and it won’t be our last. Embracing AI’s transformative potential and taking its challenges head-on is the only way forward.

Reframing Our View of AI

We may find some comfort in the historical context. When photography emerged, many feared the demise of painting, but the effect proved to be quite the opposite. Instead of making painters obsolete, photography freed them from the shackles of naturalistic documentation, birthing movements of high cultural impact like Impressionism and Cubism. In the same way, the sewing machine didn’t make tailors obsolete. It revolutionized the clothing industry, increasing efficiency and throughput, lowering the cost of quality clothing, and making it accessible to more people. More recent advancements in the world of design, such as CAD (computer-aided design), 3D printing, and the internet, were most definitely disruptive, but ultimately they were integrated, optimized, and adopted as tools that complement and enhance human endeavors rather than replace them. So, I pose the question: Why should artificial intelligence be any different?

AI tools like large language models and image generators are increasingly ubiquitous and will inevitably become part of the software that artists, designers, and creators already use. Companies like Adobe are already integrating AI capabilities into their popular Photoshop and Illustrator software in a streamlined, almost understated way. These tools allow designers to quickly perform tasks that were unthinkable just a year ago. As this process of refinement and adoption continues, AI will segue from being a novel, shocking development to just another tool in the design toolkit. The new-car smell will fade away, and its practical utility will come to the fore.

In my own professional work, AI tools have been a fantastic companion and collaborator. For instance, when designing a new line of cookware, AI image generators helped me clarify abstract design objectives, guiding me toward an evocative design inspired by the water ripples of Lake Atitlán in my home country of Guatemala. Most importantly, AI didn’t replace my insights or judgment; it amplified my creative vision and streamlined my work.

We must acknowledge, however, that the AI-driven future holds legal ambiguities and great potential pitfalls, especially for the arts and design. There are many unanswered questions about intellectual property, the rights of artists and creators, and how we should handle accountability for AI-generated content in the future. It is my firm belief that while these questions are being ironed out, we have an immediate and pressing task at hand: to educate the next generation.

Using AI Responsibly

The more artificial intelligence becomes a fixture in arts and design, the more important it becomes to nurture an ethos of responsible and transparent use. Just like engineers and architects are trained to understand the strengths and limitations of the materials they build with, young designers must be equipped to harness the capabilities of AI tools ethically and responsibly. They must see AI not as a threat or panacea, but as a potent ally in their creative process.

Universities and other institutions must prioritize incorporating AI into their curricula, not just as a discussion topic, but as a hands-on tool. Students should come to see AI as a collaborative entity, one that can make their creative journey more efficient and richer, while also being fraught with its own challenges and pitfalls.

A common-sense, ethical approach to AI education will ensure these students enter their industries armed with the ability to leverage AI’s strengths and navigate its issues. AI, with all its complexity and grandeur, is just the latest chapter in our ongoing tale of innovation. And if history is any guide, approaching it with foresight, responsibility, and an unwavering commitment to transparency and fair use should help us truly reap the benefits of this technology.

AI Amplifies the Good and the Bad in Society

By Kentaro Toyama, Professor, University of Michigan

Years of research—first in technical artificial intelligence and more recently in understanding how digital technology affects society—have taught me one thing about the effects of technology. I call it the “Law of Amplification.” For the most part, technology’s impact is to amplify underlying human forces. Where human forces are positive and capable, technology improves outcomes, but where human forces are negative, indifferent, or dysfunctional, even the best technology doesn’t lead to good results.

Artificial intelligence is no exception. As we’re already seeing, ChatGPT helps honest writers brainstorm and helps bad students cheat. Deepfake technology can boost special effects in movies and it can generate misleading political content. Automated face recognition helps responsible police departments catch crime suspects and it misleads sloppy officials into arresting innocent people.

Predicting the Future of AI

The Law of Amplification also enables some predictions about the future impact of AI, as recent history has revealed the human forces underlying other technological advances. For example, we know from the U.S. manufacturing sector that those owning the means of production will work to replace human labor with state-of-the-art machines. We know from the golden age of digital technology that whatever the voiced intentions of tech proponents, innovations aren’t ultimately put toward decreasing inequality or “raising all boats.” We know from the polarized non-discourse on social media that more and more open channels for communication lead to hate and balkanization, not unity and empathy.

AI, with its immense power, will be an even greater amplifier of these underlying forces. Just for example, consider the plight of graphic illustrators. Technologies like DALL-E and Midjourney are already being used to generate a range of creative artwork for publications, both in print and online. Why should a company pay a five- or six-figure salary for work it can get done for $20 a month? The future is clear for illustrators—short of powerful pushback on the existing human forces that enable machines to replace people, AI will replace them. So, too, with a range of information, knowledge, and creative jobs, including actors, musicians, programmers, scientists, and writers.

Hope for the Best, Plan for the Worst

Skeptics of a mass job-loss scenario hang onto all kinds of misconceptions. Some, for example, smugly proclaim that generative AI output remains “derivative” or otherwise subpar. Just as the proverbial monkey at a typewriter could, in theory, punch out a Shakespearean sonnet, nothing fundamentally prevents a machine from being “truly” creative—and today’s AI is already far beyond monkeys. Others point out aspects of certain jobs that really need the human touch, or prattle on about AI augmenting people, not replacing them. But augmentation leads directly to replacement: if one person with an AI assist can do the work of ten people, that’s nine people a company won’t need. Still others parrot classical economics’ claim that new technology creates new jobs. Well, it seems even economists are rethinking that claim in light of a technology that will increasingly do everything a person can.

What can be done in response? A corollary to the Law of Amplification is not to seek solutions to problematic technology in technology itself. The problem is less the amplifying technology than the underlying human forces. Those forces must be changed through law, culture, and social norms. The striking members of the Writers Guild of America have it right, at least in the short term—the problem is not the existence of the technology, per se, but whether those in power choose to replace human workers with technology. Without a human agreement, through law, contract, or outright technology bans, replacement will happen.

Longer term, it’s worth recognizing that never before have we, Homo sapiens, had to deal with an entity that can rival our intellect. AI is a singularity. And, it offers civilization a chance to rethink the prevailing social contract. If we can have machines do productive work for us, why does anyone need to work for a living?



Susan Abramovitch
Partner and Head, Entertainment & Sports Law, Gowling WLG

Susan Abramovitch is one of the world’s leading entertainment lawyers and a Toronto-based Gowling WLG partner. As head of the Entertainment & Sports Law Group, her practice covers transactions and disputes in the music, film, television, videogame, branded entertainment, sports, e-sports, live theatre, fine arts, book publishing, metaverse/NFT, and AI industries. Susan is the program director for Osgoode Hall Law School's Continuing Legal Education Certificate in Entertainment Law and a frequent lecturer.

Matt Hervey
Head of Artificial Intelligence Law, Gowling WLG

Matt Hervey is a partner in the Intellectual Property team at Gowling WLG. He is an expert on emerging technology including artificial intelligence, digitalization, NFTs, and the "metaverse." Matt heads up Gowling WLG's Artificial Intelligence Law team, is General Editor of The Law of Artificial Intelligence (Sweet & Maxwell), and is a fellow of the RSA for his work on AI and the law.

Juan Noguera
Assistant Professor, Rochester Institute of Technology

Juan Noguera is an industrial designer and educator. He holds a Master of Industrial Design (MID) from the Rhode Island School of Design (RISD) and a Bachelor of Industrial Design (BID) from Universidad Rafael Landívar in Guatemala City. As lead designer for Voxel8, he helped create the world’s first 3D electronics printer. He is currently Assistant Professor of Industrial Design at the Rochester Institute of Technology.

Kentaro Toyama
Professor, School of Information at University of Michigan

Kentaro Toyama is W.K. Kellogg Professor of Community Information at the University of Michigan School of Information and a fellow of the Dalai Lama Center for Ethics and Transformative Values at MIT. He is the author of Geek Heresy: Rescuing Social Change from the Cult of Technology. Previously, he was a researcher at UC Berkeley and assistant managing director of Microsoft Research India, which he co-founded in 2005. He is also co-editor-in-chief of the journal Information Technologies and International Development.
