For film and TV studios, the pursuit of AI workflows in content production requires an intricate risk analysis.
Legal and rights clearance advisers are now front and center in defining production best practices to prevent or limit liability, including conducting due diligence on off-the-shelf AI models or tools and approving or restricting specific ways in which they can be used.
Case in point: In September, Netflix published its guidance on using AI in content production, breaking use cases down by whether they’re “low risk” or need to be escalated for rights clearance approval.
“Media and entertainment companies want to know how they can use it and what are the risks of different approaches. There’s a spectrum of risk of adopting generative AI as part of the creative process, so it’s a question of risk tolerance and well-informed deployment,” said Josh Weigensberg, IP litigation partner at Pryor Cashman.

Legal uncertainty surrounding AI training data has made this analysis largely speculative. Uncertainty means risk, and producers are looking for ways to minimize it.
Even AI-hopeful major studios still restrict generative AI for most film and TV production use cases. The complexity of securing training licenses and rights clearances for AI derivatives intended for new commercial contexts has confined production use largely to two low-risk categories:
- “Temporary” material confined to previsualization stages, where the use case is often described as “ideation,” because studios don’t need rights to content that won’t be copyrighted or distributed
- Post-production “assistive” AI, where an AI system is only modifying existing footage, such as de-aging, AI-enabled lip sync for content localization or “reshoots” with performer consent
Among the criteria in this analysis is understanding the data used to train any generative model that then produces material in a new work.
Studios are reportedly considering safer training approaches in an effort to limit the copyright-infringement liability risks that stem from models trained on scraped data. These include using models trained on licensed data and fine-tuning models on owned IP.
A handful of developers — Adobe, Getty Images, Shutterstock, Moonvalley and Bria among them — have built “ethical,” “clean” or “commercially safe” AI models, exclusively trained on licensed or purchased data with contributor consent and some form of remuneration. Meanwhile, model fine-tuning refers to training a base (often foundation) model on owned or original creative assets, such as franchise- or project-specific material.
Yet even a licensed or fine-tuned model isn’t necessarily a cure-all for the potential third-party exposure that could arise from AI material appearing in a work intended for commercial distribution.
In short, that’s because studios need gen AI outputs to be explicitly licensed and consented to before they can clear them for use.
“Studios all hit the same limitation. It’s not that gen AI can’t create output but that they can’t sell that to anyone because they don’t have a clear chain of title with all required licenses,” said Scott Mann, co-CEO at Flawless. “No producer would put out a TV show with a piece of music in it they didn’t clear for use. It’s the same with gen AI. If you don’t have the license and consent for a piece of footage, it becomes very hard to do. Producers typically wouldn’t take that risk because they have to warrant they have the license to sell it.”
For generative AI to be maximally safe for creating production assets in a finished work, a model developer would need to have explicitly licensed specific use cases from anyone who contributed training data before it could pass those use cases on to the user, along with a warranty that outputs are cleared for use. But model licenses seldom disclose which specific use cases were or weren’t permitted in the underlying training rights licenses.
Similarly, even if a studio has the right to use movies or shows it owns to train a model, it doesn’t have the right to use that model to create derivatives of likenesses or other creative elements contained in those works, which actors, directors, artists or other third parties cleared for use only in association with those works.
In effect, this means AI developers and studios alike would theoretically need to secure new licenses from those participants before they could use a model trained on their works to create AI derivatives and subsequently use those derivatives in any new work.
“It really comes down to if you’re going to distribute or license something, you need the consent from the contributor that says you can use it for this use case,” said Mann. “That’s why there are very few companies that are actually in the space of delivering actual final outputs that can be utilized in final things.”