Weta Digital’s Joe Letteri talks about whether Andy Serkis deserves acting awards, and why the VFX industry is in trouble in the US
In 1968, the original “Planet of the Apes” encased its actors in barely movable latex prosthetics and still managed to become something of a classic. But things are very different 46 years later in the new “Dawn of the Planet of the Apes,” the sequel to 2011’s “Rise of the Planet of the Apes” and the first big-budget movie from “Let Me In” and “Cloverfield” director Matt Reeves.
“Dawn” is a showcase for Reeves’ sensibility, a summer blockbuster-to-be that is darker and more thoughtful than most of its multiplex competition. But more than anything else, it’s a state-of-the-art display of performance-capture acting from Andy Serkis, who plays the ape leader Caesar, and for Peter Jackson‘s Weta Digital, which turned Serkis and a host of other actors into entirely convincing chimpanzees, gorillas and orangutans.
Beginning with a closeup of Caesar’s eyes before it pulls back to reveal a troubled world in which apes are in the ascendant and humans are struggling for survival, “Dawn” takes the remarkable effects from “Rise” and super-sizes them. At times, watching it, you have to wonder: Is there anything that visual effects artists can’t do these days?
Joe Letteri, senior visual effects supervisor at the New Zealand-based Weta Digital, answered that question, and a number of others, in a recent conversation with TheWrap. Among the topics addressed by the eight-time Oscar nominee and four-time winner (for “The Lord of the Rings: The Two Towers” and “The Return of the King,” “Avatar” and “King Kong”): how Weta turned Serkis into a convincing super chimpanzee, whether there should be a new award for performance-capture acting and why the U.S. visual effects industry is in trouble at a time when the field is so vibrant creatively.
This time around, the scale is different — it really is a planet of the apes.
Yeah, it really is. That was the transition that had to be made: Matt wanted to open this movie in the ape world, and establish that 10 years from the end of the last film, humans have kind of disappeared from the scene, and the apes are evolving. They’ve got this almost kind of Stone Age community, and it meant establishing the first 20 minutes of the film in that environment.
How did that change things for your team?
To get that realism, Matt wanted to shoot it on location. On the previous “Apes,” we had figured out how to take all the performance-capture gear and use it on a soundstage. And we managed to bring it outdoors a little bit as well, but in a pretty controlled setting.
On this one, it was way out on location, and it was wet and rainy. The gear is pretty sensitive, so we had to kind of reinvent it all: new microprocessors and new wireless systems, things to make it more robust. And for every shot, we had to hide some 30 mo-cap cameras around the set, because we were capturing anywhere from one to 10 actors and we wanted the performances to come through in the final shots.
You mentioned that it was wet and rainy. There’s a lot of wet fur in this movie.
Yeah. Once you get into the actual creation of the apes, there were a lot of new techniques that we had to come up with. You get that with that first, brilliant shot. We’re close on Caesar’s eyes, in there with his thoughts, and then you pull away and reveal the ape world for the first time.
We had to build all this detail in the eyes to handle that extreme closeup. There are layers of fibers in the iris, for example. It’s all hand-modeled now, and hand-painted. And then we worked out everything else that we needed for the wetness of the eyes, and the micro movements of the skin, and all the skin-texture detail.
And then the fur. Because you see the fur so close up being pelted with rain, we needed new fur software that would do the right thing when the rain hits it, or when they’re rolling around in the mud or fighting with each other, or even just touching each other or touching the humans.
There was a lot of new technology that went into making the film.
To an outsider, it looks like we’ve reached a point where anything is possible. Are there ever times when you want to do something and you just don’t have the capability to accomplish it?
No. We always have ways to get the job done, but we’re always looking for ways to do it better and more accurately and faster.
For one thing, with so many outdoor environments, we have to use a lighting technique called global illumination. Rather than just hitting all the apes with individual light sources, like CG has traditionally done, we need to light from the entire environment – from the sky, from whatever’s bouncing around. We developed a technique for “Avatar” called spherical harmonic lighting to do that. We won a Sci-Tech award for it this year, which was great.
But it had one specific weakness, which is that when you tried to use it with fur it was extremely complicated. So we have been working on a new renderer that uses a different technique called path tracing to actually trace rays through each individual hair. It will hit the hair, go through to the next one, the next one, and it does this hundreds of millions of times for each frame.
The scene where the doors open on the human colony and Caesar’s out there with his army, we had about 1,200 apes. We used the new renderer for that, because we needed that kind of compute power to show all those apes.
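The idea Letteri describes, light losing energy as it passes through fiber after fiber and a pixel's value being estimated from a huge number of random ray samples, can be sketched in a toy form. This is purely illustrative code, not Weta's renderer; the absorption constant and fiber counts are invented for the example:

```python
import random

def transmittance_through_fur(num_fibers_hit, absorb_per_fiber=0.3):
    """Fraction of light surviving a stack of hair fibers.

    Each fiber the ray passes through scatters or absorbs some of the
    energy; the remainder continues on to the next fiber behind it.
    """
    energy = 1.0
    for _ in range(num_fibers_hit):
        energy *= (1.0 - absorb_per_fiber)
    return energy

def estimate_pixel(samples=1000, mean_fibers=5):
    """Monte Carlo estimate of a pixel's brightness through fur.

    Averaging many random ray samples like this is why the article
    mentions hundreds of millions of traces per frame.
    """
    total = 0.0
    for _ in range(samples):
        hits = random.randint(0, 2 * mean_fibers)  # fibers along this ray
        total += transmittance_through_fur(hits)
    return total / samples
```

A production path tracer does this in 3D with physically measured hair scattering, but the structure is the same: follow the ray from hair to hair, attenuate, and average.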
I saw a heated debate the other day on Hollywood Elsewhere about whether performance-capture acting is truly awards-worthy in the traditional acting categories. There was disagreement between people who said that what VFX artists do is like what a makeup artist would have done in the past, and others who said the visual effects team makes creative decisions that can change the performance.
Sure. I know that Andy has used that metaphor of digital makeup before, but I think he was just trying to explain it to an audience that was not technically very savvy. The difference is that makeup is passive, and the more makeup you put on, the more it actually deadens the performance. Whereas we sometimes need to enhance the performance.
So yes, we do make those sorts of translations all the time. For example, chimps have really deep-set eyes, and deep brows. So when you see a facial expression on an actor, you can clearly read what the eyes are doing and what the brows are doing. But on a chimp, it’s all set back a little bit more, it’s a little bit harder to read. So you can’t do an exact one-to-one. Sometimes we have to exaggerate it so it reads in camera.
Body language is another obvious one, because chimps have different body proportions. So you might have an actor getting down on all fours with arm extensions, but their hips are going to be in a different place. Humans still have longer legs relative to their torsos than chimps do, and you can’t completely get away from that.
So what you have to do as animators is rebalance everything and get the chimp’s legs underneath in the right place so the center of gravity is correct. That changes the hip position, which changes the shoulders, which changes the neck, which means now the head and eyes are looking in a different place. Well, you need to get the eyes looking in the same place, so now you have to back everything out again to get the body looking like it’s still feeling the same.
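The chain of adjustments Letteri describes can be sketched as a toy 2D example. This is hypothetical code, not production rigging: the joint names and coordinates are invented, and a real solver works on full 3D skeletons with rotations rather than flat offsets:

```python
def retarget(actor_pose, hip_shift):
    """Shift the hips to fix the center of gravity.

    Because the spine chain rides on the hips, the same offset
    propagates up through the shoulders, neck, and head.
    """
    pose = dict(actor_pose)
    for joint in ("hips", "shoulders", "neck", "head"):
        x, y = pose[joint]
        pose[joint] = (x + hip_shift, y)
    return pose

def re_aim_eyes(pose, target):
    """Re-solve the gaze after the head has moved.

    The eyes must end up looking at the same world-space target the
    actor was looking at, so we recompute the look vector from the
    head's new position back to that target.
    """
    hx, hy = pose["head"]
    tx, ty = target
    pose["eye_dir"] = (tx - hx, ty - hy)
    return pose
```

Even in this toy version the dependency Letteri points out is visible: fixing the hips moves the head, which invalidates the gaze, which then has to be solved again against the original target.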
There are a lot of creative decisions that are made, but most of the time you’re not trying to author a new performance. What you’re trying to do is bring the performance of the actor to life on the screen — so that if you see it side-by-side, you think you’re reading the same thing. It very much is a collaborative effort in that regard.
If it’s a collaboration, should there be a new category of award for motion-capture performances? One that would go jointly to the performer and the effects team?
I think that’s a hard one to dissect. First off, there are no rules that say a performer like Andy cannot be nominated in any acting category. And if you look at what someone like Andy is doing, it is acting. The difference is that the performance you see is created at a different time than when the performance was recorded.
In traditional acting you perform it, it’s photographed, and it all happens at the same time. This is a new thing. This is messing with human perception in a way that could never have been done before we created this technology. And that’s causing some people to wonder about which part of it is the performance and which part is the visual effects team.
But how do you separate that? There are so many parts to the visual effects team. Which part is the performance? Is it the animation, or the way the muscle dynamics are doing the right thing, or the fur, or the software that makes the eyes look the way they’re supposed to look? It’s hard to say, “Here’s the performer and here’s the team who did it.” I’m kind of favoring, keep it simple: If it’s about the acting, it goes into the acting category.
Effects companies are going bankrupt or moving out of the United States, and VFX artists protested at the last Oscars. This should be a boom time for the field creatively, so why are we hearing about so many U.S. companies in trouble?
Well, there are two issues. One is tax incentives. It is a very high-profile industry, and very high-tech, so a lot of governments have decided that they want to draw this kind of work to them.
There is actually a fairly vibrant industry in the United States, but a lot of it is being pulled elsewhere. I mean, look at London — London has been built up incredibly over the last number of years with the concentration of visual-effects houses and films that have been shooting there. It is driven, again, by a conscious decision by the government to say, “We want this business here.” And they chose to incubate it.
The other issue, though, is that this work requires a lot of R&D, which is expensive. Like the physically-based renderer that I was talking about — we’ve been working on that for three years. We knew that visual effects were going to evolve to the point that we were going to need it, but we didn’t know specifically that it was going to be ready in time for “Planet of the Apes.”
You have to be able to make those long-term guesses and sustain the innovation through a number of years, so that it’s ready when a show like “Apes” comes along. So that you have the right software and the right technology and the right mindset and trained artists to pull that together. It’s not an easy balance to maintain.