Best Artificial Intelligence Applications in Media
Artificial intelligence (AI) has generated enormous hype in recent months, but opinions about the real impact of technologies like machine translation, autonomous vehicles, natural language processing, and computer vision vary widely. Optimists envision a world in which “driver” becomes a superfluous word; pessimists worry about the millions of truckers at risk of unemployment.
But while often described with the futuristic fancy of Skynet in the “Terminator” movies, AI has very real near-term applications for the entertainment industry. Here are some of the most exciting examples.
AI is poised to touch every facet of entertainment, even that most time-honored of parental traditions: reading stories to children before bed. Companies such as Novel Effect produce interactive voice content that summons a menagerie of sound effects to accompany a parent’s narration of “Where the Wild Things Are,” for example.
Looking forward, interactive hologram technology provided by the likes of 8i could see those same children playing with interactive Seussian characters (full disclosure: The author’s company is an investor in 8i). After the kids are in bed, Mom can relax with an interactive audio story she plays with her voice thanks to Earplay, which is building hands-free and eyes-free choose-your-own-adventure experiences for mobile devices. Think voice-interactive audio books.
AI could also transform the place that historically had a monopoly on blurring the lines between reality and childhood fantasy: Disneyland. When it comes to theme parks, Disney has always been ahead of the curve. Its Disney Research division works closely with the theme parks’ Imagineering teams to produce the rides and attractions we know and love. What about using technology from Zippy.ai to put a semi-autonomous R2-D2 droid in Disneyland’s upcoming Star Wars Land? With sidewalk-safe autonomous navigation, it would essentially be a self-driving robot that can roam safely among kids and families. Or how about bringing Disney characters to life with deeper interactivity than silent “cast members” in character suits can offer? Why shouldn’t Mickey be able to talk and respond to guests’ questions in his recognizable, chuckling voice?
AI won’t just respond to the voices of the entertained; it will also dazzle with voices it borrows from celebrities, courtesy of Lyrebird. The company’s technology mimics a voice’s pitch, tone, and modulation: type out a sentence, and it can be read aloud in a convincing imitation of, say, Taylor Swift. It’s not hard to imagine the same technology taking root in smart assistants; who wouldn’t love having their queries about the weather answered in the smooth timbre of Morgan Freeman, or of their favorite podcast host?
Instead of reading The New York Times, you could listen to Anderson Cooper narrate the news and ask follow-up questions that would be answered by an underlying Siri or Alexa brain, but spoken by virtual Anderson. Distribution of Alexa-enabled Echo devices, Google Home, and Apple’s upcoming HomePod — with close to 20 million devices in North America this year and estimates of 2020 penetration reaching as high as 140 million — means that enough people already have access to voice-interactive AI. Better still, the more people use these devices, the smarter the underlying AI becomes: every question asked or command given helps train the algorithms.
Celebrity mimicry won’t be limited to voice. A digital Barack Obama produced by the University of Washington’s Paul G. Allen School of Computer Science & Engineering shows that completely convincing fake video of real people is now possible. Type out some dialogue, and the system layers in voice, facial animation, and body language to produce a realistic, believable delivery; nobody I’ve shown the digital Obama to could tell it wasn’t a real speech actually given by President Obama himself. There’s a darker element to this innovation, though: a politically polarized public already hooked on fake news and self-affirming media could be plied with false videos of their ideological nemeses. Consider an election season in which fabricated video of Hillary Clinton espousing support for disgraced producer Harvey Weinstein emerged.
If that idea makes you want to escape into another universe, AI will also be able to help you do that: applying the technology to virtual reality could make almost every aspect of it more immersive and believable. Assuming you didn’t want to escape into a post-apocalyptic world free of people, the virtual characters — friend or foe — inhabiting your VR world will be multidimensional and unscripted. Rival Theory has already made strides in this space, with its Rain AI engine being used by more than 100,000 game developers around the world. Limitless wowed attendees at the 2016 Game Developers Conference with a short film featuring an interactive VR character — Gary the Gull — built using its platform. The SIGGRAPH conference, too, has recently become a magnet for these kinds of advances.
Storylines themselves also have the potential to become significantly more complex thanks to AI advances; neural nets trained on VR participants’ experiences with existing storylines could generate many more that are tailored to each player’s tastes. Massive Software, which originally cut its teeth populating Peter Jackson’s Middle Earth with armies of snarling orcs, has already added AI to its crowd simulation capabilities. The digital effects company’s Ready to Run Agents — prefabricated AI agents that can be dropped into scenes and tailored to the story’s details by visual artists — cut down the time required to generate CGI characters.
Animation hasn’t escaped AI’s crosshairs, either, as Midas Touch Interactive has created a tool that automates the process for animating 2D characters. It should be taken seriously, as the wizard behind the curtain is a Pixar veteran who previously brought AI to life through his work on WALL-E. Pixar itself hasn’t missed the AI boom altogether, leveraging deep-learning techniques trained on “Finding Dory” to detect and iron out grainy frames in its productions. Outside the studio, last year Google leaned on Pixar’s storytelling bona fides to punch up its AI’s sense of humor.
AI advances won’t just expand visual and interactive horizons, though. They will also allow a much greater range of content-targeting and understanding of each customer’s tastes or even moods. Beyond more sophistication in the kind of recommendation algorithms that Netflix uses to keep you hooked, emotion- and facial-recognition technology will allow content providers to select what you see based on how you’re feeling. The iPhone X’s front-facing camera uses computer vision to unlock the phone through its ability to “see” the owner’s face. Just think of the level of ad targeting Apple could enable by tracking your pupils across the screen as you’re viewing content on your phone. It could know which part of the screen you’re looking at and your emotional state/engagement as you watch. Nielsen, the media ratings agency, could be made obsolete in a few years by innovations such as this.
There are companies already spearheading these advances outside Apple: TVision Insights already measures the attention audiences pay to TV content by analyzing “actual eyes on screen.” Affectiva is marrying computer vision and deep learning to determine emotions from nonverbal cues and facial expressions. The videogame company Flying Mollusk Studio has already used Affectiva’s software to produce a psychological thriller game whose difficulty changes with the player’s level of fear. Taken a step further, smart assistants could learn what music you like when you’re feeling sad, happy, or energetic, and a content request could become as simple as, “Alexa, play me something soothing.”
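To make the emotion-adaptive idea concrete, here is a minimal sketch of how a game loop might map a detected fear score to a difficulty setting. Everything here is hypothetical: the `fear_score` input stands in for whatever an emotion-recognition SDK like Affectiva’s would report, and the thresholds and difficulty labels are illustrative choices, not the actual design of any shipped game.

```python
def choose_difficulty(fear_score: float) -> str:
    """Map a fear score in [0.0, 1.0] to a difficulty setting.

    This mapping eases off when the player is frightened and ramps up
    when they are calm; a designer could just as easily invert it.
    """
    if not 0.0 <= fear_score <= 1.0:
        raise ValueError("fear score must be between 0 and 1")
    if fear_score > 0.7:       # very scared: back off
        return "gentle"
    if fear_score > 0.4:       # moderately tense: hold steady
        return "normal"
    return "intense"           # calm: turn up the pressure


# Each frame (or every few seconds), the game would poll the emotion
# model and re-tune itself:
print(choose_difficulty(0.9))  # gentle
print(choose_difficulty(0.2))  # intense
```

The same pattern — sensor score in, content parameter out — generalizes to the music example above, where the "difficulty" knob becomes a playlist mood.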
So there you have it: The story of AI in entertainment will be one of better interaction, imitation, and insight. Every time you play with, talk to, are observed by, or make a decision through an AI-enhanced device, it will get better at understanding how to amuse you with its actions or content selections. Are you not entertained?
Sunny Dhillon is a founding partner at Signia Venture Partners, an early-stage venture capital fund based in San Francisco, where he invests in tech startups. He was previously the first business development employee at a venture-backed spinoff of New Line Cinema and a founder of one of the App Store’s first mobile location-based apps. He was also an investment banker at Rothschild in London and part of the corporate strategy team at Warner Bros. before starting Signia Ventures in 2012. He invests in AI, virtual/augmented reality, blockchain, media, and enterprise software startups.
Special thanks to Alex Lloyd George, Signia Ventures summer intern, for his help on this article.