Emma Stone Collaborates with AI Startup to Reimagine Animation Voice Work

Emma Stone joins forces with AI-driven animation studio to pioneer hybrid voice performances, combining human nuance with machine precision for next-gen animated storytelling.

Bryan Berlin, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

Emma Stone is taking a bold step into artificial intelligence territory, partnering with a cutting-edge animation startup called AnimSense to explore hybrid voice performance techniques for animated films. This collaboration aims to blend human emotion and timing with AI-assisted voice modulation—opening new possibilities for character creation and voice acting in Hollywood.

AnimSense, founded three years ago by former Pixar engineers, has developed a platform that uses neural networks to analyze voice recordings frame by frame. The system can then generate subtle variations in tone, pitch, and phrasing, layering enhancements such as added expression, emotional inflection, or language translation onto the actor's core performance while preserving the authenticity of the original voice.

Stone recorded dialogue for her first character in standard recording sessions, after which AnimSense’s AI model processed the audio to create alternate “flavors”: a whisper-soft version for intimate scenes, an amplified heroic tone for action sequences, and even foreign-language dubs that closely mimic her vocal presence. In one demo, her English line “I believe in you” was re-rendered in French with a convincing natural rhythm, rather than sounding like a synthetic translation.

In discussing the project, Stone praised its collaborative nature: “It’s not about replacing actors—it's about adding versatility. I still deliver the heart of the performance, but AnimSense helps take parts of it further.” According to company executives, the approach can cut recording-session time by 30–40% while delivering multiple output versions from a single take.

The timing is significant: the voice acting and dubbing sector has remained relatively shielded from AI’s full impact, compared to VFX or scriptwriting. But AnimSense—and now Stone—are signaling that voice work is next in line. The technology could support smaller studios that can't afford multiple voice sessions, expand language reach without losing performance fidelity, and aid accessibility through customized audio tracks for viewers with hearing needs.

Industry response has been mixed. Voice actors welcome the efficiency gains but express concern over job dilution. Performers Guild leaders stress that any AI-assisted voice work must be consented to and compensated fairly, with transparent labeling. AnimSense assures that actors retain rights to both the original and AI-generated versions, and Stone has signed a deal that guarantees her approval over each generated variant.

Stone’s involvement helps legitimize the technology and draw public attention. That’s a far cry from earlier celebrity AI endorsements that raised eyebrows—here, Stone actively shapes the process, working closely with the tech team to ensure the tool aligns with her vision. A studio executive commented: “Having Emma on board reminds us the goal is augmentation, not automation.”

This move follows recent studio-actor tensions over likeness rights, including SAG-AFTRA’s success in negotiating AI protections. Voice work now enters the spotlight: will AI become a creative assistant or an economic threat?

Emma Stone's AnimSense collaboration offers a middle path—a future where human performance remains central, enhanced with machine consistency across different contexts and languages. Whether this hybrid model will become standard remains to be seen—but with Stone lending her voice (literally) to the effort, the industry is listening.
