OpenAI Moves Closer to Hollywood’s Gatekeepers
Early March conversations with agencies signal a shift from technology rollout to rights negotiations and talent representation
OpenAI’s next move in Hollywood is not about releasing a new model. It is about who controls the people those models depend on.
In early March, discussions between OpenAI and major talent agencies—including CAA and WME—began to take shape around licensing frameworks tied to actors, writers, and creators, according to reporting in the Financial Times. While no formal deals have been announced, the conversations mark a notable shift away from product launches and toward infrastructure: how AI companies secure access to human identity, performance, and intellectual property in a way that can scale.
The distinction matters. Over the past year, generative AI companies have largely operated without formal agreements with talent, relying instead on broad datasets and legal ambiguity. That approach is becoming harder to sustain as Hollywood’s agencies position themselves as intermediaries, not observers. If AI models are going to incorporate recognizable likenesses, voices, or stylistic signatures, agencies want to be involved at the point of negotiation.
For OpenAI, this is less about appeasement than inevitability. The company’s push into video through Sora—and its broader ambitions in multimodal media—means it is moving closer to the core assets of Hollywood: faces, performances, and storytelling formats. Unlike text, those assets are already tightly controlled through contracts, guild agreements, and representation structures.
Agencies see an opportunity to formalize that control.
Rather than opposing AI outright, they are exploring how to build licensing systems that treat talent as data providers with enforceable rights. In practice, that could mean negotiated access to likeness, revenue participation tied to AI-generated outputs, and contractual limits on how digital replicas are used. It is a model that mirrors existing entertainment economics, adapted for a different technological context.
The timing is not accidental. The backlash to unlicensed AI training—highlighted by artist coalitions and ongoing litigation—has created pressure on technology companies to demonstrate that future systems will operate differently. Engaging with agencies offers a path toward legitimacy, even if it complicates development.
Studios are watching closely.
While agencies represent talent, studios ultimately control production and distribution. Any licensing framework that emerges will need to integrate with existing deal structures, from above-the-line contracts to backend participation. The result is likely to be layered rather than simple: another set of negotiations stacked onto an already complex system.
There is also a strategic dimension. By engaging early, agencies can shape how AI is deployed rather than reacting after the fact. If they succeed, they position themselves not just as representatives of talent, but as gatekeepers of the data that fuels generative systems.
For OpenAI and its competitors, that introduces friction. But it may also provide stability.
Hollywood has never resisted technology entirely. It has absorbed it, regulated it, and monetized it. What is happening now looks less like a standoff and more like the early stages of integration—one negotiation at a time.