Liza Minnelli and the Consent Case for AI

Why one artist’s embrace of AI complicates Hollywood’s moral line

[Image credit: Creative Management Associates (CMA), public domain, via Wikimedia Commons]

When Liza Minnelli released her first new song in more than a decade, the music itself wasn’t the story. The process was.

Minnelli publicly acknowledged that artificial intelligence tools were used in the song’s arrangement and production, and she didn’t hedge about it. The AI did not clone her voice. It did not generate lyrics. It did not perform in her place. Instead, it assisted with orchestration, refinement, and structural polish — tools she described as collaborators rather than substitutes.

That clarity is rare in Hollywood’s AI debate, and that’s precisely why it matters.

At a moment when much of the industry conversation is dominated by warnings about exploitation, theft, and unauthorized use, Minnelli offered a different framing: consensual, bounded AI use as creative assistance. Her position was not ideological. It was practical. She chose the tools. She approved their role. The authorship remained hers.

Hollywood has struggled to articulate that distinction. Too often, discussions about AI collapse into absolutes: innovation versus theft, progress versus erasure, machine versus human. Minnelli’s case suggests a narrower — and more uncomfortable — question: not whether AI should exist in creative work, but who gets to decide how it’s used, and under what conditions.

For studios and labels, this is challenging terrain. Consent-based AI undermines blanket opposition while still demanding strong guardrails. It implies contracts instead of bans, disclosure instead of denial, governance instead of moral panic. In other words, it requires work.

Minnelli’s stance also exposes an uneven reality inside the industry. Established artists with control over their catalogs, identities, and legal representation can experiment safely. They can say yes on their own terms. Emerging artists, by contrast, often lack the leverage to make either refusal or consent meaningful.

That imbalance is where the ethical fault line actually runs. AI is not inherently exploitative, but power asymmetry is. Minnelli’s decision doesn’t negate the concerns raised by actors, writers, and musicians who fear being replaced or absorbed without permission. It sharpens them.

Studios are watching closely. If high-profile artists publicly embrace certain AI applications, it becomes harder to argue that all uses are unethical by default. The fight shifts from technology itself to governance: what’s allowed, what’s disclosed, and who benefits.

Minnelli’s choice doesn’t settle the debate. It complicates it — in a way Hollywood badly needs.

The future of AI in entertainment may not hinge on whether artists say no. It may hinge on whether the industry is willing to build systems where saying yes is genuinely voluntary, clearly bounded, and fairly compensated.

That’s a harder problem than banning a tool. And it’s one Hollywood has barely begun to solve.
