The Neural Net That Recreated ‘Blade Runner’ Has the Movie Stuck in Its Memory
By Marissa Clifford | August 5, 2017
Artist and machine learning engineer Terence Broad’s Auto-Encoding Blade Runner is the project Philip K. Dick would have made if he were a scientist.
In his presentation at SIGGRAPH 2017, a computer graphics and animation conference, Broad detailed how he trained a convolutional autoencoder, a type of neural network, to recognize patterns in Blade Runner's frames and then reconstruct the film, scene by scene. The result is an eerily accurate full-length film, convincing enough that Warner Bros. issued a DMCA takedown notice to Vimeo when Broad first uploaded the footage in 2016.
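In broad strokes, an autoencoder squeezes each input frame through a narrow bottleneck into a compressed code, then tries to redraw the original frame from that code alone; the gap between input and reconstruction is what drives training. The sketch below is a minimal convolutional autoencoder in PyTorch, for illustration only. It is not Broad's actual architecture, and the layer sizes, the 64x64 frame resolution, and the random stand-in data are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress each 3x64x64 frame into a small latent map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), # 16x16 -> 8x8
            nn.ReLU(),
        )
        # Decoder: reconstruct the frame from the latent map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Random tensors as a stand-in for real, preprocessed film frames.
frames = torch.utils.data.DataLoader(torch.rand(256, 3, 64, 64), batch_size=32)

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for batch in frames:
    recon = model(batch)
    loss = loss_fn(recon, batch)  # how far is the reconstruction from the input?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After enough passes over the film, decoding the latent codes frame by frame yields the network's "remembered" version of the movie.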
Since then, many have speculated that Broad found a loophole around copyright (he hadn't). The real question behind Broad's Blade Runner parallels the themes of Philip K. Dick's legendary source novel, Do Androids Dream of Electric Sheep?: where does one draw the line between human and machine, between the real and the seemingly real?
“I think the thing to understand about neural networks is that we don’t really know how they work,” Ruth West, SIGGRAPH’s chair of art papers, told me in an interview. “They’re black boxes, and they make these leaps that are kind of like the leaps we make internally. That’s what I think is powerfully evocative about Auto-encoding Blade Runner—that synthetic leap that the neural network makes.”
In the case of Blade Runner: Auto-encoded, some of the leaps made it into the final version of the film. Broad's network, for example, couldn't recognize black screens; instead, it output an amalgam of greenish frames, a sort of blown-out palimpsest of images, or memories. Other quirks could be smoothed out through further rounds of training.
As Chrissie Iles, curator of "Dreamlands," an exhibition at New York City's Whitney Museum of American Art in which Blade Runner: Auto-encoded appeared last fall, wrote, Broad's work relates to the "disembodied, post-humanized gaze, outsourced to machines."
For Broad, that’s right on the money.
“For me personally,” Broad said during his presentation, “it’s an internal representation of a machine…. We’re peeking into the gaze of an auto-encoder.”
Going forward, Broad is dedicated to making work that exposes and critiques social bias, using technology not to perfectly mimic human behavior but to call it out. For him, it's not so much about replicating humanity as about helping humanity understand the machine.
This theme runs through the emerging art of generative cinema. By remixing existing cultural products, artists and scientists like Broad are able to critique both the medium and the message.
But what happens when the Blade Runner auto-encoder watches other films? Broad tried it out. When shown another Philip K. Dick adaptation, A Scanner Darkly, and a Soviet classic, Man with a Movie Camera, the network could still recognize the composition of the frames, but it essentially transposed the aesthetic of Blade Runner: Auto-encoded onto them. The reconstructions were dimly lit, plagued by visual noise, and dreamy. They clearly came from the same memory.
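Mechanically, "showing" the network another film just means pushing new frames through the same frozen encoder and decoder; since the trained weights only know how to draw Blade Runner, the output inherits its look. Continuing the hypothetical sketch above, with random tensors again standing in for real frames:

```python
# Run unseen footage through the already-trained autoencoder.
# The new frames are squeezed through the same learned bottleneck,
# so the reconstructions come out looking like the training film.
model.eval()
with torch.no_grad():
    new_frames = torch.rand(8, 3, 64, 64)  # stand-in for, e.g., A Scanner Darkly
    reconstruction = model(new_frames)     # rendered in Blade Runner's "memory"
```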
Blade Runner: Auto-encoded is part of the Barbican's exhibition Into the Unknown: A Journey Through Science Fiction, on view in London through September before touring internationally.