Meta Digital
Meta Digital Pakistan is the entertainment industry's only organization representing the full breadth of visual effects practitioners including artists, technologists, model makers, educators, studio leaders, supervisors, PR/marketing specialists and producers in all areas of entertainment from film, television and commercials to music videos and games.
Saturday, July 20, 2013
RenderMan, Happy Birthday! By Mike Seymour
Pixar Animation Studios today announced a celebration of 25 years of Pixar’s RenderMan and its pioneering contributions to computer graphics, which completely revolutionized the feature film industry and continues to do so. Used in 19 of the last 21 Academy Award winners for Visual Effects, RenderMan is the industry standard and has been honored with multiple awards for technological innovation by the Academy of Motion Picture Arts and Sciences.
Originally based on the REYES rendering architecture and the RenderMan Shading Language (RSL), RenderMan has developed into a hybrid renderer possessing the most diverse and flexible toolset currently available. This year, Pixar’s “Monsters University” and “The Blue Umbrella” are showcasing new levels of photorealism, including major advancements in lighting that are directly attributable to technological breakthroughs in RenderMan’s system for creating physically-based global illumination. The upcoming release of RenderMan Pro Server 18.0 builds on this rich new feature set to deliver path tracing, accelerated re-rendering, and geometric area lights, as once again RenderMan evolves to meet the ever-increasing challenges of visual effects and animation production.
Pixar is initiating a number of celebratory events and programs to commemorate RenderMan’s 25th anniversary, commencing at the Annecy International Animation Festival and Market (Mifa) in June 2013 with an event including a behind-the-scenes presentation highlighting the state of the art rendering techniques used in Disney Pixar’s “Monsters University” and “The Blue Umbrella.” In particular this presentation will demonstrate RenderMan’s exciting new cinematographic tools for lighting and other key features such as RenderMan’s Physically Plausible Shading system. Event attendees will also qualify for special promotional access to RenderMan software and RenderMan On Demand, the world’s largest render farm. Throughout 2013, a series of additional special events will explore how RenderMan is continuing to change the way in which movies are being made and how Pixar’s RenderMan is now accessible to everyone.
Pixar will also be at Siggraph 2013 in California, where fxguide will provide even more coverage of this great technical landmark.
Source: http://www.fxguide.com/quicktakes/renderman-happy-birthday/
Friday, July 19, 2013
Man of Steel VFX milestones By Ian Failes
“Zack Snyder wanted Man of Steel to appear very natural because there’s some very fantastical things in there and he wanted people to suspend their disbelief, and we the visual effects team had to make it as easy as possible for them to do so.” So recounts overall visual effects supervisor John ‘DJ’ Desjardin on the philosophy behind Man of Steel’s visual style.
Desjardin notes that the intent was to shoot a more handheld (the DOP was Amir Mokri) and documentary-style film than previous outings in this comic book character’s ‘verse. “We had to think about what that would mean since we also had to photograph some crazy action,” says Desjardin. “So for a lot of the previs we did, we’d start to think where our cameras were and where our cameraman was. A lot of the rules are the Battlestar Galactica rules for the space cams that Gary Hutzel developed for that mini-series where we want to make sure if we’re translating the camera at all it makes sense. Unless the action is so over the top, like in the end where Superman is beating up Zod – we had to break it a bit.”
fxguide talks to the major players responsible for bringing to life the visual effects of Man of Steel: overall supervisor John ‘DJ’ Desjardin, and Weta Digital, MPC and Double Negative. With so much work in the film, we delve into just three of the many tech accomplishments:
1. The tech of Krypton
2. Live action and CG takeovers: the Smallville confrontation
3. Destroying a city: the invasion of Metropolis
And we also take a look at PLF’s previs work, Scanline’s tornado and oil rig effects and Look Effects’ work on the bus crash.
The tech of Krypton
Act I of the film takes place on Krypton, which faces destruction from an unstable core. Weta Digital created alien planet environments, creatures and also the key means of display – a technology the filmmakers came to call ‘liquid geo’, meaning liquid geometry. “Basically,” explains Weta Digital visual effects supervisor Dan Lemmon, “it’s a bunch of silver beads that are suspended through a magnetic field, and the machine is able to control that magnetic field so that the collection of beads behave almost like three-dimensional pixels, and they can create a surface that floats in the air and describes whatever the thing is you’re supposed to be seeing.”
Listen to our fxpodcast interview with Weta Digital visual effects supervisor Dan Lemmon.
The liquid geo devices appear in the planet Krypton scenes, as well as later sequences on the Kryptonian ship the Black Zero. Similar technology making up a panel display resembling a Greco-Roman bas-relief – but achieved via a different method – is present in a scene in which a hologram of Superman’s father, Jor-El, explains the history of Krypton to his son.

In creating the liquid geo, which took the form of anything from wide planet views and x-rays to displays on floating robots, and which was even used to depict Jor-El communicating with his wife Lara, Weta Digital took these steps:
1. The look – The beads, which up close would appear to be pyramids with a slight bevel, were designed to create a surface of the object they were depicting inside some kind of console. “Essentially we would have the normals of the objects that we were targeting provide a simulation with an orientation that one of the most dominant sides of the pyramid would align with,” explains Weta Digital lead FX TD Brian Goodwin.
- See how Weta Digital created the liquid geo and History of Krypton sequences, thanks to our media partners at WIRED.
2. Modeling and animation – The models used for animation ranged from purpose-built (Lara’s face) to ones appearing in grander scenes (such as approaching scout ships). Says Goodwin: “We had to develop a pipeline to bring in assets, so instead of going through the route of reducing the polygon count to something usable what we would then do – you would take the model in whatever way it was made and just scatter discrete points onto it, and extract the matrix onto the animation and copy these points onto the matrix and have these sparse points behaving in a way that the model would.”
“We had animation provide us with geometry that we would then track beads on,” adds Goodwin. “Those beads would then be turned active in front of the actual console and the console would decide what beads it needed to provide to actually draw particles from the actual earth we described or this invisible bowl.”
3. Simulation – After animation, artists ‘copied’ little beads onto the animated geometry for a pre-sim’d lighting version to get approval on how the object would read. Sims were then run “on all the targets which would be discrete beads floating around on top of the surface which would have its own set of parameters,” says Goodwin. “The bead size or the turbulence that would crawl along the surface constantly updating the orientation was based on the normal provided by the surface. That was then saved to disk and we would use that sim as the final target for the simulation.”
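To make the modeling and simulation steps above a little more concrete, here is a minimal sketch (not Weta's actual pipeline, and with hypothetical names throughout) of the two ideas Goodwin describes: scattering sparse points across a mesh so they inherit its animation, and building an orientation for each pyramid-shaped bead so its dominant face lines up with the local surface normal. Plain numpy, assuming a simple triangle-mesh representation.

```python
import numpy as np

def scatter_points(verts, tris, count, rng=np.random.default_rng(0)):
    """Scatter `count` points on a triangle mesh; remember which triangle and
    which barycentric weights each point uses, so the same sparse points can be
    re-evaluated on any animated frame of that mesh."""
    edges1 = verts[tris[:, 1]] - verts[tris[:, 0]]
    edges2 = verts[tris[:, 2]] - verts[tris[:, 0]]
    areas = np.linalg.norm(np.cross(edges1, edges2), axis=1)
    tri_ids = rng.choice(len(tris), size=count, p=areas / areas.sum())
    u, v = rng.random(count), rng.random(count)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    bary = np.stack([1.0 - u - v, u, v], axis=1)
    return tri_ids, bary

def evaluate(verts, tris, tri_ids, bary):
    """Positions and normals of the scattered points on the current frame."""
    corners = verts[tris[tri_ids]]                       # (count, 3 corners, xyz)
    pos = (bary[:, :, None] * corners).sum(axis=1)
    n = np.cross(corners[:, 1] - corners[:, 0], corners[:, 2] - corners[:, 0])
    return pos, n / np.linalg.norm(n, axis=1, keepdims=True)

def bead_frame(normal, up=np.array([0.0, 1.0, 0.0])):
    """Orientation matrix whose Z axis (standing in for the bead's dominant
    pyramid face) follows the surface normal, as described above."""
    z = normal / np.linalg.norm(normal)
    x = np.cross(up, z)
    if np.linalg.norm(x) < 1e-6:        # normal is parallel to the up vector
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    return np.stack([x, np.cross(z, x), z], axis=1)
```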
Something different
Weta Digital’s senior visual effects supervisor says the liquid geo look “stemmed from an idea that Zack Snyder had. He wanted to do something that was interesting and different in the way that you saw information presented. He didn’t want to do the typical screen. So one of the ideas Zack had was to make it a little bit more tactile – we looked at different things – the idea of that pinboard that you put your hand in and you see the shapes sort of form was what we had in our minds, but the more we looked at that the more we realized you can’t do just something like that, it’s too simple and limiting – the shapes really need to transform – you need something that has that look but be more liquid.”
The sims were based on a fluid sim. “We used Houdini’s internal FLIP solver that gave us the pressure, the sense of volume maintenance,” notes Goodwin. “We’d have the console sim inside an invisible membrane, and there would be currents that we would describe with what we would find aesthetically pleasing within the shot.”
“The console was like a cup turned to the side,” adds Goodwin, “so whereas gravity would be Y pulling ‘down’, in this case the Y is facing into the back of the ‘cup’ console,” which means essentially gravity is pulling from the back of the cup towards the actors watching the display. The beads then fall (after they pass some threshold) towards the inside of the geometry. It is as if one poured the beads into a glass bowl, all filmed from below the bowl looking up – except everything is turned on its side, so the beads fall towards the viewer, and the geometry of, say, a planet Earth is a hollow glass bowl between the sea of original beads and the viewer. “We sort of reversed it by having gravity faced towards the front of the membrane which would be the meniscus, and the surface of the actual water was the front of the console, so we would have a constant force pushing towards the front which would give us a sense of a flat surface with water traveling around on it, but this surface was never rendered.”
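A toy version of that reversed-gravity setup might look like the following: instead of a world-space downward pull, every free bead is pushed along the console's forward axis until it comes within a small threshold of the target surface, at which point it is handed over to the target. The constants, names and the `target_distance` callable are all illustrative assumptions, not Weta's code.

```python
import numpy as np

GRAVITY = 9.8          # magnitude only; the direction comes from the console
SETTLE_DIST = 0.02     # how close a bead must get before it 'joins' the target

def step_beads(pos, vel, console_forward, target_distance, dt=1.0 / 24.0):
    """Advance free beads one frame. `console_forward` points from the back of
    the 'cup' console toward the viewer; `target_distance(p)` returns each
    point's distance to the surface currently being displayed."""
    fwd = console_forward / np.linalg.norm(console_forward)
    vel = vel + GRAVITY * fwd * dt            # 'gravity' pulls toward the viewer
    pos = pos + vel * dt
    joined = target_distance(pos) < SETTLE_DIST   # these beads lock to the target
    vel[joined] = 0.0
    return pos, vel, joined
```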
4. Noise – After simulation, Weta Digital ran every bead through a temporal filter to remove jitter. “Even with the highest RenderMan settings we would still face a lot of noise, and that led us to taking out all the noise within the simulation,” continues Goodwin. “Even the most subtle twist of the bead half a degree, 2 degrees, would, because of being mostly specular, result in seeing a completely different point within the IBL (Image Based Lighting) and that would create a tremendous amount of variation between the slightest bit of movement. By filtering it, it softened the whole piece out, to the point of sometimes we needed to get a little bit of grittiness back in because smoothing out the beads too far would look too boring.”
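The temporal filter Goodwin mentions can be pictured as something as simple as an exponential moving average run over each bead's orientation across frames, so that sub-degree twitches no longer swing the specular lookup into a different part of the IBL. A rough, hypothetical sketch (a production filter would more likely operate on quaternions, but the idea is the same):

```python
import numpy as np

def smooth_orientations(angles_per_frame, alpha=0.3):
    """Exponentially smooth per-bead orientation angles over time.
    `angles_per_frame` is shaped (frames, beads, 3), e.g. Euler angles in
    degrees; a smaller `alpha` removes more jitter but also more 'grit'."""
    out = np.array(angles_per_frame, dtype=float)
    for f in range(1, out.shape[0]):
        out[f] = alpha * out[f] + (1.0 - alpha) * out[f - 1]
    return out
```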
The team could control the flow from back to front and back again. “We allow the simulation to go through a series of noise fields to make it a little bit more interesting and then join the target,” says Goodwin. “Then once it’s joined the target it would essentially no longer be registered in the simulation. Once the target appears we release it and it finds its way back into the simulation, by using the opposite force – we essentially use level sets to create some sort of pressure and it would know when it was inside or outside this world and a bunch of rules that would dictate whether it was allowed to be outside.”
5. Lighting and rendering – Lighting solutions were taken from the set. For the consoles, Weta moved to the next level of RenderMan to take advantage of improved raytracing and instancing of objects. Motion blur was also a particular challenge. “We had the traditional motion blur – in that our particles do technically move,” says Goodwin. “We did a test where we rendered the objects and we would compare the motion blur represented from the object’s motion blur literally straight from animation and we would line that up with the render we would get out of the beads. In some cases we would have the vectors that were provided to the renderer shortened ever so slightly, because as the beads form a target across two frames you can get spikes within the motion. You’d have a bead that travels across the length of the frame and you’d have a long streaky specular highlight – ultimately it’s shading at one end and smudging it, and in that case we’d shorten the motion blur so it wouldn’t create bright little spikes.”
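The motion-blur trick described there, shortening the per-bead velocity vectors handed to the renderer so a bead streaking across frame doesn't smear a hot specular highlight into a bright spike, comes down to scaling (and optionally clamping) the vectors before export. An illustrative helper, not the studio's code:

```python
import numpy as np

def shorten_motion_vectors(velocities, scale=0.85, max_len=None):
    """Scale per-bead motion vectors before export to the renderer; optionally
    clamp their length so fast beads can't streak a highlight across the frame."""
    v = np.asarray(velocities, dtype=float) * scale
    if max_len is not None:
        lengths = np.linalg.norm(v, axis=-1, keepdims=True)
        v = np.where(lengths > max_len, v * (max_len / np.maximum(lengths, 1e-9)), v)
    return v
```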
The history lesson bead shots were created slightly differently and without an underlying sim. “In addition to working out all the technical aspects, just figuring out aesthetically and creatively what actual images we would use to tell the story of Krypton was really important,” says Lemmon. “We did quite a lot of concept art based on various sculptures. We looked at bas-relief from the Rockefeller Center, we looked at Greco-Roman references and explored those kind of aesthetic looks but applied to spaceships and alien planets and alien technology – if you were depicting a sci-fi world through the medium of stone sculpture, what that might look like.”
Weta Digital had originally planned to do these shots with the liquid geo simulation engine as well, but ultimately the look required was a different one. “It’s more of a relief style,” says Goodwin. “The space that each object exists within looks like it exists within a world that’s flattened. The idea was that it went from being a simulation forming things to being a relief. If you look closely the background is flowing with the text and graphics – the beads travel along it – but outside of that, things weren’t actually moving to and fro.”
The action for the history lesson was animated based on greenscreen performances by Russell Crowe and Henry Cavill in what became, according to Goodwin, ‘humongous spaces’. “It was traveling hundreds of thousands of meters in a Maya scene, then we sent that through a projection,” says Goodwin. “It was all animated in world space and would then send that through a transformation which we would then project onto a back wall and relief it. We needed to represent all the information in a confined space.”
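The 'relief' transform Goodwin describes, taking animation laid out across enormous world-space distances and squashing it against a back wall, amounts to projecting points into the wall's coordinate frame and compressing their depth. A hypothetical sketch of that flattening step:

```python
import numpy as np

def relief_project(points, wall_origin, wall_normal, depth_scale=0.01, max_depth=0.5):
    """Flatten world-space points against a wall: keep their position along the
    wall plane, but compress distance from the wall into a shallow relief."""
    n = wall_normal / np.linalg.norm(wall_normal)
    rel = points - wall_origin
    depth = rel @ n                              # signed distance from the wall
    lateral = rel - np.outer(depth, n)           # component lying in the wall plane
    squashed = np.clip(depth * depth_scale, -max_depth, max_depth)
    return wall_origin + lateral + np.outer(squashed, n)
```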
The liquid geo shots on Krypton occur while the planet is under siege from Zod’s crew and as it becomes unstable and, ultimately, implodes. Before that happens, wide views of Krypton depict an alien atmosphere, made up mostly of Weta Digital environments, spacecraft and creatures.
One shot of Jor-El riding a winged creature made use of a buck and gimbal setup to replicate the move fashioned in previs. “We shot elements of Russell Crowe in his flying costume on that buck,” explains Lemmon, “and using previs as a guide tried to match both camera and the movement of the gimbal to the previs. Then we put those elements onto a digital creature and a digital world. But of course some of the stuff moved in such a way that it wasn’t possible to get those movements. There were shots where we transitioned in and out – we go all digital – then Russell Crowe for just five frames or so, and then back to a digital character.”
For some dramatic shots of Zod’s ships approaching the House of El, Weta Digital referenced scenes from Apocalypse Now. “Zod flies in these attack ships and his descent on the House of El was modeled on the Ride of the Valkyries sequence,” says Lemmon. “There’s actually a shot that everybody thinks is in Apocalypse Now, but isn’t – it’s in the Apocalypse Now poster – the sun shot – the ships flying out of the sun.”
Suits of armour worn by characters on Krypton were mostly CG additions to actors wearing gray suits with tracking markers (although female characters wore practical armor on set). The tracking of these shots was therefore particularly complex, having to match all the movements of actors sometimes engaged in hand-to-hand combat, such as Zod’s attack after Superman’s pod is launched from Krypton.
A phaser battle contained a specific look for blasts with plasma residue. “Those were mostly Houdini simulations,” notes Lemmon. “We wanted to avoid a straight laser beam and do something that had a little bit more interest in it. The idea was the beam moves through the air and charges and ionizes particles in the air. In Krypton there’s particles that float in the air the same way that dust does here, but we treated them as if they were heavier and got more excited by the beam. As the beam moves through the air it glows and starts to swim a little bit and leave that residue, particularly when it hits somebody.”
Aerial battle shots employed Krypton’s hazy environment and shafts of light through rock pillars to add depth. “In busy sequences like that it’s important to compose things so that you can actually see what’s going on and see who’s good and bad and who’s winning,” states Lemmon. “One thing that drives me nuts in big action sequences is when you can’t actually – it’s just noise – and you can’t see what’s going on.”
Later, as the planet begins to destroy itself, the studio worked to show various angles of the destruction including a ‘from space’ view. Lemmon says he enjoyed “figuring out how it would look – playing it out as a geo-thermal event that’s influenced by the planet’s magnetic field, and maybe have it collapse along the equator rather than blow out spherically and implode first then explode afterwards. Playing around with those ideas was a lot of fun.”
Live action and CG takeovers: the Smallville confrontation
A major challenge faced by the filmmakers and visual effects crew on Man of Steel was to realize elaborate close-combat fight scenes between Superman and his Kryptonian foes. They wanted to take advantage of digital effects to portray superhuman strength and powers, but without what had been perceived previously as ‘cutting’ from live action to an obvious digi-double and environment. Instead, the filmmakers wanted these shots to be executed as seamless takeovers.

Desjardin explains: “When we do these fights and these hyper-real things, we don’t want to do the traditional, ‘OK I’m a cameraman, I’m shooting a clean plate, I’m going to pan over here to follow the action that’s not really there yet but we’ll put the action in later.’ Because that’s us animating the characters to the camera. So we would do that animation with the characters – grappling, punching or flying away – and we would take the real guys up until the point they were supposed to do that and we’d cut. Then we’d put an environment camera there and take the environment. And then a camera for reference of the actors and get each moment. So then we had a set of hi-res stills for the environment and the characters. Then in post we take the digi-doubles and animate them according to the speeds we want them to move in our digital environment.”
This approach was pioneered for the Smallville encounter in which Superman confronts Zod and his crew after they have threatened Martha Kent. They fight on the streets of the town and are further attacked by the military via A-10s and ground assault troops. MPC handled visual effects for this sequence (in addition to many other shots in the film, ranging from Arctic scenes to shots in the upper atmosphere when Lois and Superman are taken to the Black Zero).
In order for the seamless takeovers to occur – and for the shot to continue with a pan, tilt or other move – a new capture and post-production process was proposed by MPC visual effects supervisor Guillaume Rocheron, in conjunction with Desjardin. Here’s how it broke down for a typical Smallville shot:
1. The shot would be previs’d and the fight choreography established by stunt coordinator Damon Caro.
2. Knowing from the previs the shot that was required, live action portions of the scene would be filmed in little pieces. “If say Superman was being punched and would land 50 meters away, we would shoot our start position and end position, and then bridge that gap with the CG takeovers,” says Rocheron.
3. A camera rig dubbed the ‘Shandy-cam’ (named after on-set VFX coordinator Shandy Lashley) obtained keyframes of the actor. “It’s a six still camera rig that’s built on a pipe rig so that you can run it in at the end of a setup and get stills of keyframes of a performance or an expression,” says Desjardin, “and then we could use those hi-res stills to project onto the CG double and get really accurate transition lighting and color – right from the set.”
4. On set, another camera rig was also used to capture the environment. “We ended up calling it Enviro-cam,” notes Rocheron. “It was a rig where we mount a Canon 5D and a motorized nodal head, and that allows us to capture full 360 environments at 55K resolution for every single shot. The capture time is very quick – we were taking between 2 and 4 minutes for every shot, so it was really easy – the same way we capture HDRIs.”
“The sets are there, so why not capture them?” says Rocheron. “It basically allows you to film what is not filmable. Here there’s no cuts, no interruption. We also did a lot of entirely digital shots which had no live action. So we had our Enviro-cam, so we used that to capture the environment rather than a plate, and we could put our CG characters in there.”
The set capture resulted in lighting and textures that could be re-projected onto geometry (the sets were also LIDAR’d to aid in reconstruction). “We wrote a little pipeline in Nuke that allowed us to stitch all the photos together and then very simply calibrate them with the Smallville geometry,” says Rocheron. “We would calibrate just one angle, because for the full dome, all the photos would get automatically calibrated on the geometry. For us it was a very good process – it wasn’t just a sphere. Everything was re-projected in two and a half D on the geometry to get parallax and the camera would travel technically in all directions.”
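As a rough illustration of that 2.5D reprojection idea (not MPC's Nuke setup, and with hypothetical names), each vertex of the LIDAR/set geometry can be looked up in the stitched lat-long panorama by converting its direction from the nodal capture position into spherical coordinates; the geometry then carries the photographed color, so a moving camera sees correct parallax.

```python
import numpy as np

def latlong_uv(points, capture_pos):
    """UVs into a lat-long (equirectangular) panorama for world-space points,
    as seen from the nodal capture position."""
    d = points - capture_pos
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)
    u = (np.arctan2(d[..., 0], d[..., 2]) / (2.0 * np.pi)) + 0.5   # azimuth  -> [0,1]
    v = 0.5 - (np.arcsin(np.clip(d[..., 1], -1.0, 1.0)) / np.pi)   # elevation -> [0,1]
    return np.stack([u, v], axis=-1)

def sample_pano(pano, uv):
    """Nearest-neighbour lookup of panorama colors for the given UVs."""
    h, w = pano.shape[:2]
    x = np.clip((uv[..., 0] * (w - 1)).astype(int), 0, w - 1)
    y = np.clip((uv[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return pano[y, x]
```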
Superman is also seen in several sequences, of course, flying through clouds. “We used volumetric clouds,” says Rocheron, “using an internal tool we have for clouds to mobilize geometry and transform it into volumes and refine with layers of advection and noises for the fine details.” In terms of environments Superman flies through, such as over the Arctic circle, over canyons in Utah, over Africa, and over the Dover cliffs, MPC developed these first in Terragen and then took them through to matte paintings and geometry.
5. Full-screen digi-doubles were of course a major component. MPC led the creation of the digi-doubles for Superman, Zod, Faora and other Kryptonians, which were shared with other vendors. Digital armour was also added, along with the energy-based Kryptonian helmets. Cyberscan and FACS sessions were conducted with the actors, and polarized and non-polarized reference photos were taken. Superman’s cape and costume were scanned in high detail – the cape in particular became a direct extension of Superman’s actions. “Our main reference for the cape were illustrations from Alex Ross,” states Rocheron. “We had the cape here at MPC so we could really study its thickness and the velvetness. The light is very soft on it. We did a cloth solve in nCloth and we wrote a number of tools in animation to be able to animate the cape and see it in real time. Once the animation was approved, we had a basic representation of the cape and we would then use that to drive the nCloth simulation.”
Superman’s cape
“Zack was very amenable to shooting a lot of stuff with Henry Cavill without using a real cape at all,” says Desjardin, noting that as the cape was envisaged almost as its own character, it would need significant visual effects art direction.
But the cape’s VFX also had to remain within the illusion of the filmmaking. “There were a couple of shots where someone might say, ‘I don’t quite like the way the cape moves there because it looks like it’s a real cape with a wire attached to the end of it to pull it,’ and Zack would say, ‘That’s fine! Don’t change that animation – I want people to think that maybe we did that, even though it’s a CG cape.’”
MPC used the latest version of RenderMan and its raytracing capabilities to help with the chainmaille look of Superman’s suit and the Kryptonian suits and armour. “They were all painted as displacements but we did hi-res displacement,” says Rocheron. “Raytracing allows you to capture that very subtle detail between the reflective pattern of the chainmaille and the light absorption of the blue part of the suit, since the underlying layer is bleeding through. We have infinite area lights which are the dome and the finite direct area lights that are the direct light sources you want to position in space. And that physically based setup gave us a terrific look for the reflections and the fall-off of the light – really key to get the details of the suit and armour, which in reality was mostly black.”
In one shot, Superman fist fights with an eight foot Kryptonian. “We shot a live action piece and just replaced the performance capture stunt guy and added the cape onto Superman,” says Rocheron. “Then we just thought about it and said it would be much cooler – since the Kryptonian is very tall – for Superman to fight against him while he’s hovering. We did those shots as entirely digital shots. It has that very cool feeling of flying around and punching him from all different directions.”
6. For each shot, it then became a matter of choosing the right transition point. “There’s a little transition zone that’s maybe only one or two frames,” says Desjardin. “We knew that we wanted to keep Superman real in certain places because it was say super-sharp and we want to use that to anchor the shot, even if just for a couple of frames, and then we’re going to go into digital because it’s crazy right after this.”
“We layered a couple of other things on top of that,” adds Desjardin. “One, if there’s a punch being thrown, you can lose the arm real fast if it’s too fast. So a lot of times the arm of a CG character may be going just slightly faster than a human’s – we put a sonic boom-type signature around the forearm and we might put a little heat luminance on the leading edge surfaces of the fist. It puts an idea in your brain that it’s moving really, really fast even if it isn’t.”
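That 'it reads as fast even if it isn't' cue can be thought of as a comp-stage gain keyed off how quickly the limb moves in screen space: the faster the tracked forearm travels per frame, the stronger the boom ring and leading-edge glow mixed over it. A simplified, hypothetical version of that keying logic:

```python
import numpy as np

def speed_cue_gain(positions_px, full_effect_speed=400.0):
    """Per-frame 0..1 gain for a speed cue (shock ring, leading-edge glow),
    based on how many pixels the tracked fist/forearm moves per frame."""
    p = np.asarray(positions_px, dtype=float)
    speed = np.linalg.norm(np.diff(p, axis=0), axis=-1)     # pixels per frame
    gain = np.clip(speed / full_effect_speed, 0.0, 1.0)
    return np.concatenate([[0.0], gain])                    # align to frame count
```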
7. Not only were fights depicted with digi-doubles and environments, they also traversed cornfields, buildings, glass walls, roads and flying A-10s. That necessitated incredible destruction, and for this MPC looked to its finite element analysis tool Kali, which had first been developed for the wooden pagoda destruction in Snyder’s Sucker Punch. This time around, following a few years of development, Kali was able to handle many more kinds of materials. “So we could take a tarmac and break it differently,” explains Rocheron. “It’s more resistant so it has a crater but cracks at certain places near the surface. And Superman crashes into a bank vault and crashes through a glass door with a metal frame and finally into the vault which is made of super strong steel, so we made that bend and wrap around him.” Particle sims and Flowline were also incorporated into the destruction pipeline for Smallville.
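Kali itself is proprietary finite-element code, but the material-dependent behaviour Rocheron describes, tarmac cratering and cracking near the surface while vault steel bends rather than shatters, can be caricatured as per-material strength and ductility parameters consulted when an impact exceeds a threshold. Purely a toy illustration, not MPC's solver:

```python
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    yield_strength: float   # impact energy where the material starts to fail
    ductility: float        # 0 = shatters outright, 1 = bends without breaking

MATERIALS = {
    "tarmac":      Material("tarmac", 5.0e4, 0.1),
    "glass":       Material("glass", 2.0e3, 0.0),
    "vault_steel": Material("vault_steel", 5.0e5, 0.9),
}

def response(material: Material, impact_energy: float) -> str:
    """Very rough classification of how an element responds to an impact."""
    if impact_energy < material.yield_strength:
        return "intact"
    if material.ductility > 0.5:
        return "bend"                      # e.g. the vault steel wrapping around Superman
    overshoot = impact_energy / material.yield_strength
    return "crack" if overshoot < 3.0 else "shatter"   # surface cracks vs crater
```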
Destroying a city: the invasion of Metropolis
Determined to conquer Earth and transform it into a new Krypton with ‘world engine’ machines, Zod launches northern and southern hemisphere strikes in Metropolis and the Indian Ocean, respectively. The result, until Superman battles and then defeats the Indian Ocean world engine, is that significant parts of Metropolis and its soaring skyscrapers are destroyed – a task given to Double Negative. The studio also realized further full-scale destruction as Superman and Zod wreak havoc on remaining buildings and each other.

“Down in Metropolis there was a very clear design edict that came from Zack about how the evolution of the battle was going to be,” says Desjardin of the lighting design for the film’s third act. “The sun had to be not quite setting when the Black Zero comes down and then very quickly it’s in its setting position, and by the time Superman and Zod go to fight it’s down below the horizon and there’s a Hawaiian cloud and colorful clouds and it’s getting dark with a twilight sky – more an ambient look. Then once Supe and Zod jump up into that sky then you have some other lighting options with the sky and certain lit billboards. It’s a way to make the city come alive, to make it even more dramatic, to keep characters backlit.”
To create a convincing Metropolis, Dneg looked to Esri’s CityEngine to help procedurally deliver the city, a tool it had first employed for the sprawling future world of Total Recall. “That was a much more sci-fi based role,” notes Dneg visual effects supervisor Ged Wright, “so we took what they had done and extended it a great deal. The work we were doing was based around the Downtowns for New York, LA and Chicago and that gave us the building volumes for heights. We’d skin those volumes with kit parts but most of it then had to fall down! So we had to rig it for destruction and use it for other aspects of the work as well.”
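CityEngine's grammar is its own system, but the workflow Wright describes, footprints plus height data extruded into building volumes that are then skinned with kit parts, can be illustrated with a small, generic Python sketch. All names and values below are assumptions, not Esri's API or Dneg's pipeline.

import random

# Toy procedural massing in the spirit of the CityEngine workflow described
# above: footprints plus a height give a building volume, which is split
# into floors that could later be skinned with modular "kit parts".
FLOOR_HEIGHT = 4.0   # metres, assumed

def extrude_building(footprint, height):
    """Return a simple massing description for one building volume."""
    floors = max(1, int(height // FLOOR_HEIGHT))
    return {
        "footprint": footprint,          # list of (x, y) vertices
        "height": height,
        "floors": floors,
        "kit_parts": ["facade_panel"] * floors + ["roof_cap"],
    }

def build_block(footprints, height_range=(40.0, 250.0), seed=1):
    """Extrude every footprint on a block with a plausible downtown height."""
    rng = random.Random(seed)
    return [extrude_building(fp, rng.uniform(*height_range)) for fp in footprints]

# Two hypothetical rectangular footprints on one city block.
block = build_block([
    [(0, 0), (30, 0), (30, 20), (0, 20)],
    [(40, 0), (70, 0), (70, 25), (40, 25)],
])
for b in block:
    print(f"{b['floors']} floors, {b['height']:.0f} m, {len(b['kit_parts'])} kit parts")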
The previs effort
Director Zack Snyder worked closely with previs and postvis artists at Pixel Liberation Front to orchestrate some of Man of Steel’s most dynamic scenes.
The studio worked on more than 15 sequences, with PLF supervisor Kyle Robinson suggesting that some of the most challenging previs creations were the oil rig rescue and the Metropolis invasion. “They shot that oil rig rescue in a parking lot in Vancouver and they didn’t know how big the greenscreen needed to be, so we modeled it for them to specify what was necessary.”
Superman’s cape was also a major challenge. “We didn’t run a cloth sim for the cape,” says Robinson. “If we had run a cloth sim it would only appear to react to the forces applied to it and not be a character in its own right. We rigged the cape to have a deformation in it and it had a bone in it so we could move and pose it into the right position for comic book poses.”
Using Maya for main animation and its proprietary character and camera rig, with extra work done in After Effects, PLF followed Snyder’s boards and art department concepts to flesh out scenes, and later helped editorial with postvis.
For building destruction, in particular, the studio re-wrote its own asset system to be geared towards dynamic events. An implementation of the Bullet engine inside Houdini – dubbed Bang – became Dneg’s main destruction solver, with a core philosophy of allowing for quick iterations with heavy control. “We wanted to be able to run an RBD event and trigger all these secondary events, whether it was glass or dust simulations – all of those things needed to be chained up and handled in a procedural way,” says Wright. “One of the advantages of this was that, because it was all based around a limited number of input components, you can make sure they’re modeled in a way they’re useable in effects – you can model something but there’ll be another stage to rig it for destruction.”
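Bang lives inside Houdini and is proprietary, but the chaining idea Wright describes, an RBD event that procedurally queues secondary glass and dust events, can be sketched with the open-source Bullet bindings. The following Python example requires the pybullet package and is purely illustrative; the event names and values are made up.

import pybullet as p

# A generic sketch of chaining secondary events off an RBD sim: detect
# impacts in a Bullet simulation and queue hypothetical follow-up events
# (dust, glass) at the impact locations, rather than placing them by hand.
p.connect(p.DIRECT)
p.setGravity(0, 0, -9.81)

ground = p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))
box_shape = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.5, 0.5, 0.5])
debris = [p.createMultiBody(baseMass=1.0, baseCollisionShapeIndex=box_shape,
                            basePosition=[i * 1.5, 0, 4.0 + i]) for i in range(3)]

secondary_events = []   # (step, kind, position) tuples to hand to other solvers
landed = set()

for step in range(480):                        # two simulated seconds at Bullet's default 240 Hz
    p.stepSimulation()
    for body in debris:
        if body in landed:
            continue
        if p.getContactPoints(bodyA=body, bodyB=ground):
            pos, _ = p.getBasePositionAndOrientation(body)
            landed.add(body)
            # Chain the secondary events procedurally off the impact.
            secondary_events.append((step, "dust_burst", pos))
            secondary_events.append((step, "glass_shards", pos))

p.disconnect()
print(f"{len(secondary_events)} secondary events queued, e.g. {secondary_events[0]}")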
In addition, fire, smoke and water simulation tools were further developed at Dneg. The studio moved from its existing proprietary volume renderer DNB to working in Houdini and rendering in Mantra for elements such as fireball sims. Dneg’s in-house fluids tool Squirt also benefited from new development to handle larger scale sims and interaction for more tightly coupled volumes and particles. Overall, the studio’s rendering pipeline has moved to a more physically based approach in RenderMan.
Within the Metropolis sequence there were numerous other requirements, including attacking and destroying aircraft and, of course, digital representations of Superman and Zod when they fight. One particular element Dneg contributed and also shared with other facilities was Zod’s armour. “There was no practical armour for Zod,” states Wright, “he only wore a mocap suit. We took concept art, came up with a ZBrush sculpt of the armour and could show them turntables of what it would look like during filming.”
Dneg took MPC’s Superman and Zod models and adjusted them for their own pipeline in order to rig, groom hair and adjust shaders. “We also have more of a photogrammetry approach to facial,” says Wright, “so we made the actors sit there again with an eight camera rig – similar to Light Stage but portable and gives us polarized photography to reconstruct the facial expressions.”
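The specifics of Dneg's portable polarized rig aren't public, but the reconstruction step any such photogrammetry setup relies on is triangulating 3D points from calibrated views. Below is a standard two-view linear triangulation in Python with invented camera matrices, a sketch of the principle rather than Dneg's tool.

import numpy as np

# Two-view linear (DLT) triangulation: recover a 3D point from its 2D
# positions in two calibrated cameras. Camera matrices and the test point
# are invented for illustration.
def triangulate(P1, P2, x1, x2):
    """Return the 3D point seen at pixel x1 in camera P1 and x2 in camera P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: one at the origin, one translated along x.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])

point = np.array([0.02, 0.05, 1.5, 1.0])            # a point on the "face"
x1 = P1 @ point; x1 = x1[:2] / x1[2]
x2 = P2 @ point; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                  # recovers ~[0.02, 0.05, 1.5]

A production rig like the one described uses eight views and dense matching, but every reconstructed facial point ultimately comes from this kind of multi-view triangulation.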
Zod and Superman battle in amongst the buildings, and when they hit each other they tend to generate enormous shockwaves that rip skyscrapers in half. Although much of this was completely digital (some live action was shot in Chicago and then on Vancouver greenscreen soundstages), Wright says Dneg implemented real photography onto its digital doubles wherever possible. “Because you have their performances you engage with it – and your eyes go straight to their faces. If they’re big enough in frame and doing something, you want to use a photograph of them. As soon as you buy that and get what’s going on, you’re more willing to take on board what’s going on with the rest of the frame.”
Adds Wright: “There’s one shot where Zod hits Superman up the side of the building. Superman is hovering above. Zod starts running up the side of the building. This is just before he rips his armour off and is taking in more of the sun’s energy. Superman flies down to hit him and the two of them collide causing that shockwave. DJ and Zack were both really keen to make it feel like two Gods were fighting, and they were at the height of their powers right then.”
Rounding out Man of Steel’s effects
Helping to round out the effects work on the film were companies like Scanline and Look Effects. Here’s how they added crucial shots to the film.
Scanline – tornado and oil rig
Scanline delivered shots of the tornado sequence in which Smallville residents shelter in an underpass from an approaching twister, while Clark Kent’s father Jonathan returns to a vehicle to rescue a pet dog. “For the tornado itself,” explains Scanline visual effects supervisor Chad Wiebe, “we actually came up with a unique methodology by combining a number of individual fluid sims which would be wrapped around the funnel. This allowed us to create a bigger and denser funnel without some of the overhead that would have been generated by trying to do a single sim for the entire funnel. This also allowed us to pick and choose from a library of different sims which gave us greater control over the look and speed of the funnel, the variation of different parts of the funnel, as well as the technical aspects such as density and resolution for some of the more close up shots.”
Along with the tornado, artists added ground dust and debris, farm buildings and uprooted trees. “We also had to create digital versions of all vehicles that were shot on set as well as a number of additional vehicles to suggest a longer line up of traffic stopped on the freeway,” says Wiebe. “As the sequence progresses most of the vehicles end up getting damaged or destroyed to some degree so in addition to a typical vehicle rig for the basic motions and wind buffeting, we also created a system where we could dynamically damage the vehicles based on collisions with one another or based on forces as was the case with the Kent truck which gets destroyed at the end of the sequence.”
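Scanline's setup was built in Flowline, but the wrapping idea Wiebe describes can be pictured with a small numpy sketch: one cached sim, stood in for here by random points, is copied, rotated and offset around the funnel axis to build a bigger, denser funnel. Everything below is an illustration, not Scanline's code.

import numpy as np

# Wrap rotated copies of a single cached sim around a funnel axis so the
# combined funnel is bigger and denser than any one sim on its own.
rng = np.random.default_rng(0)
base_sim = rng.normal(scale=[3.0, 3.0, 8.0], size=(5000, 3))   # one cached wedge

def wrap_around_funnel(sim_points, n_copies=6, radius=15.0, height_step=10.0):
    copies = []
    for k in range(n_copies):
        angle = 2.0 * np.pi * k / n_copies
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        offset = np.array([radius * c, radius * s, k * height_step])
        copies.append(sim_points @ rot.T + offset)
    return np.vstack(copies)

funnel = wrap_around_funnel(base_sim)
print(funnel.shape)          # six wedges merged into one dense funnel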
An earlier sequence of Clark saving workers from a burning oil rig made use of reference of the BP Deepwater Horizon explosion and the Toronto Sunrise Propane factory explosion. “We tried to make sure we were as accurate as possible regarding the look of the fire and smoke plumes that are generated by oil fires, which have a very unique and identifiable quality,” notes Wiebe. “The exterior plates were shot with the actors on a set built helipad, with a real helicopter, and green screen surrounding 3/4 of the set. From there we created an entirely digital oil rig, and we would composite the actors and helipad onto our digital rig, or at times we would replace the helipad as well. Many of the hero helicopter shots also utilized a digital version of the helicopter in order to get the interactive lighting and reflections matching.”
“The oil rig collapse was a series of rigid body simulations created using Thinking Particles,” says Wiebe. “From there we would also add fire, smoke and dust trailing off the rig using Flowline, which was also used for the fluid sim when the rig came crashing down into the ocean below. There were also a series of explosions happening throughout the sequence also using Thinking Particles for the RBD’s and Flowline for the fire and smoke.”
Look Effects – the bus crash
In the film, Clark remembers key moments from his youth, including those that gave hints to himself – and others – of his tremendous powers. One is the crash of a school bus full of children. After it blows a tire and launches off a bridge into a river, Clark dives out of the bus and pushes it to the bank, and then rescues another child from under the water. Much of the crash was filmed practically on a bridge and quarry location, and then on a tank stage. Look Effects helped piece the scene together.
Some of the work included rig and camera removal and also clean-up of the bridge railing. “There was a POV camera angle from the bridge looking down at the bus as it was sinking into the water,” notes Look visual effects supervisor Max Ivins. “The bridge part was shot separately from the sinking shot and the exterior sinking was shot in a rock quarry so there was no moving water like the river. We did some CG replacements of the vents on top of the bus and we had to make the remnants of the splash when someone runs up to look at the bus over the edge. We added the foam ring and the bubbles coming up. Used stock footage and CG elements to make a post-splash surface of the water.”
For interior shots of the bus with the children, Look altered water levels to make the danger appear more prominent. “They had a surface outside of the bus that was basically the same as the inside of the bus – they couldn’t really sink it because there are kids involved,” explains Ivins. “So they had to make it look like the outside water was 2 feet taller than the inside water that is rushing in to give it that sinking feeling. So we did whole simulations and cleaned up some of the lighting. We made bubbles coming up and making it turbulent.”
Look’s other contributions to the film included several monitor comps, including ones at NORAD, and some artefact clean up for a flashback signature shot that had been time-ramped of Clark wearing a cape and with his hands on his hips in front of some blowing dandelion heads.
All images and clips copyright 2013 Warner Bros. Pictures.
Making 'Man of Steel': a conversation with visual effects supervisor John 'DJ' DesJardin "We wanted to see Superman punch things." By Bryan Bishop
Zack Snyder’s Man of Steel opened to record-breaking box office numbers, thanks in no small part to its arresting visual sequences.
One of the individuals behind those scenes is the film’s visual effects
supervisor, John “DJ” DesJardin. First drawn to effects work after
seeing the likes of 2001: A Space Odyssey and Star Wars, DesJardin has since carved out a distinguished career, with films like Watchmen, Sucker Punch, and the two Matrix
sequels to his name. I spoke with him about what it took to reimagine
the Kryptonian superhero for a modern audience, starting with Superman’s
most iconic action: flying.
Warning: some very mild story spoilers are included below.
"I think a lot of the direction
for the flying came from Zack’s style for the movie," DesJardin says.
"Where he said to us, ‘I want this to be really cinéma vérité,
documentary-style. I think it needs to be handheld. It’s going to be 24
frames, no slow motion. It's going to be anamorphic, so if you look up
at the sun there's going to be a big flare going across the screen.’ And
once we got into that mindset, a lot of camera-style questions got
answered really quickly about how the flying should be done."
Snyder storyboarded the first
flying sequence, in which Clark Kent jumps massive distances before
coming to grips with his aerial ability. The filmmakers shot footage for
the scene, but after it was cut together with the pre-visualized
mock-ups ("previz") they realized their approach didn’t quite work.
Initially, DesJardin says, the plan was that "We’re gonna be kinda going
with him [Superman] into space, and then we’re going to do this slow
push-in to his face, and he’s going to be smiling knowingly like he’s
got control of his flight power." The result, however — a sequence where
the camera magically moved around Superman without much sense of
physical reality — didn’t fit with Snyder’s stylistic directive.
"We ended up affectionately calling them the ‘Donner-cam’ shots, after the Richard Donner version of Superman,"
he says. In the 1978 original, Christopher Reeve was hung on a
controllable wire rig in front of a series of projected backgrounds; the
camera would then move around the actor to sell the illusion of flight.
"But if you look back at that movie, most of the time it looks like
you’re flying with him somehow," DesJardin says. While
effective at the time, it was the opposite of the documentary-style
realism Snyder was aiming for.
The solution was to impose the
same limitations a physical camera would bring to the table. "So that
particular shot I told you about out in space became, ‘We’re actually on
the International Space Station,’ or ‘We have a camera on a satellite,
and Superman happens to be coming towards us.’ And we can pan and tilt
to try to follow him, but that’s all we can do. It’s just the luck of
the draw that he happens to fly by us and we can find him. It’s a
searching camera like that."
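The constraint DesJardin describes is easy to state in code: the camera position is locked, and only pan and tilt are solved each frame to chase the subject. A minimal sketch, with invented positions:

import math

# A "searching camera": the camera position is fixed (on the ISS or a
# satellite) and the only freedom is to pan and tilt after the subject.
# Positions are invented for illustration.
CAMERA_POS = (0.0, 0.0, 0.0)

def pan_tilt_to(target, cam=CAMERA_POS):
    dx, dy, dz = (t - c for t, c in zip(target, cam))
    pan = math.degrees(math.atan2(dx, dz))                       # around the up axis
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))      # up/down
    return pan, tilt

# The subject streaking past the fixed camera over a few frames:
for frame, pos in enumerate([(-200.0, -50.0, 400.0), (0.0, 0.0, 350.0), (250.0, 80.0, 320.0)]):
    pan, tilt = pan_tilt_to(pos)
    print(f"frame {frame}: pan {pan:+.1f} deg, tilt {tilt:+.1f} deg")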
Fortunately, the team was able
to rethink the camera positioning thanks to the large amount of
additional data and imagery they captured while filming the live
elements. "Whether it’s the character or the environment that we’re
shooting in," DesJardin says, "we have ways of capturing that stuff
pretty down and dirty. So we can either recreate the location wholesale
on a frame, just like switch from the real to the CG because we know
we’re going to override the [camera] move, or do something else."
Radical changes could require re-shooting Cavill, he says, but "as long
as we’re not trying to change his expression too much or anything like
that, as long as it’s just about body or speed or flight or camera
trajectory, we can change all those things and maintain a performance."
"As long as it’s just about body or speed or flight or camera trajectory, we can change all those things."
That ability to bridge live-action and digital was never more important than in the film’s massive fight sequences. Man of Steel
lays waste to vast portions of both Smallville and Metropolis, and
DesJardin and his team used an assortment of techniques and tools to
adhere to the film’s aesthetic.
"Smallville was a location
that was picked out early on — Plano, Illinois — and we knew that we
were going to have to try to capture that location the way that we
capture our characters to be able to turn them into CG," he says. He
worked with Guillaume Rocheron at the effects company MPC to create a
system that would allow them to capture a location, and then go back in
after the fact and create camera moves once the digital elements had
been added to the shot.
Director Zack Snyder
Often camera moves are faked
on set, DesJardin explains. "You just go, ‘Okay, this guy just got
punched, and now he’s going to go 100 feet that way, so we’re just going
to pan off that way.’ And some day nine months from now we’ll put
something in there." On Man of Steel they wanted the
CG-animated action to drive the camera movement, so DesJardin actually
had the film’s camera operator stop rolling when it was time for such a
move to occur.
Using something they dubbed
the "enviro-cam" — a Nikon camera on a motorized tripod head — they
would then capture 72 high-resolution stills spanning all directions at
that camera position. The result was a high-resolution "environment
sphere" with the same location and lighting conditions as the original
footage. The effects artists could then navigate within that sphere with
their virtual camera when completing the shot.
The 'enviro-cam' allowed them to pick up where they left off
"That’s how we kind of did
everything with Smallville," he says, "and we were able to add layer
upon layer of destruction and fire and things like that. That’s how you
get all these complicated handheld moves during that scene." When
characters get knocked over after an A-10 strafes the street, "that’s
all within the environment sphere of those digital stills … Then we can
go up in the sky, and if we have to zoom in, since it’s such a high-res
sphere and we have so much information, we can. We can change the focal
length and go in really tight, which we do when we’re trying to look up
at the planes in the sky after we get done with looking at the street."
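The enviro-cam sphere amounts to a lat-long environment image that a virtual camera can pan, tilt and zoom inside after the fact. A rough numpy sketch of that lookup is below; the image here is just noise standing in for the stitched stills, and the camera model is a simplifying assumption.

import numpy as np

# Sample a lat-long "environment sphere" with a virtual pan/tilt camera
# whose focal length can be changed after the fact. In production the image
# would be stitched from the enviro-cam stills; here it is just noise.
env = np.random.rand(2048, 4096, 3)                  # lat-long environment map

def render_view(env, pan_deg, tilt_deg, focal_mm, width=640, height=360, sensor_mm=36.0):
    h_env, w_env, _ = env.shape
    # Build per-pixel ray directions for a pinhole camera looking down +Z.
    xs = (np.arange(width) - width / 2 + 0.5) / width * sensor_mm
    ys = (np.arange(height) - height / 2 + 0.5) / height * (sensor_mm * height / width)
    u, v = np.meshgrid(xs, ys)
    dirs = np.stack([u, -v, np.full_like(u, focal_mm)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Pan/tilt head: tilt about the local X axis, then pan about the up (Y) axis.
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    ry = np.array([[np.cos(pan), 0, np.sin(pan)], [0, 1, 0], [-np.sin(pan), 0, np.cos(pan)]])
    rx = np.array([[1, 0, 0], [0, np.cos(tilt), -np.sin(tilt)], [0, np.sin(tilt), np.cos(tilt)]])
    dirs = dirs @ (ry @ rx).T
    # Convert ray directions to lat-long pixel coordinates and sample.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # -pi..pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))         # -pi/2..pi/2
    px = ((lon + np.pi) / (2 * np.pi) * (w_env - 1)).astype(int)
    py = ((np.pi / 2 - lat) / np.pi * (h_env - 1)).astype(int)
    return env[py, px]

wide = render_view(env, pan_deg=30, tilt_deg=45, focal_mm=24)    # look up at the sky
tight = render_view(env, pan_deg=30, tilt_deg=45, focal_mm=200)  # zoom in, same sphere
print(wide.shape, tight.shape)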
Metropolis was a slightly
different situation. Chicago subbed in during shooting, and DesJardin
and his team surveyed the city with stills, helicopter shots, and
environment cameras. "It was just a lot of data mining to get reference
and reference [camera] moves that we would copy later. But we knew early
on that we would have to build a CG city." Effects house Double
Negative helped create the virtual Metropolis from the ground up — and
when it came time to shoot the climactic battle between Cavill’s
Superman and Michael Shannon’s General Zod, the filmmakers decided to
forego location shooting altogether.
"Metropolis just became this big animal of destruction."
"There’s a point where they
[locations] are really great and give you a lot, and there’s a point
where they’re cost-prohibitive and expensive to shoot in," DesJardin
says, explaining that as scenes and fights become more chaotic, it can
take as much time to replace the filmed location as it can to create new
elements. "Metropolis just became this big animal of destruction.
Especially when you get to ground zero with Zod and Supes, it’s like
okay, this is green screen now, because there’s no way we’re going to
find a location for this."
Despite having taken on such
iconic characters — and the heightened expectations that come with them —
DesJardin takes it all in stride. "I think it’s just kinda cool,
actually," he says. "I think in terms of Watchmen, there was a
thing — even visual effects-wise — where we really wanted to honor the
source material. And Zack directing it really pushed us in that
direction, and we all had our copies of the graphic novel, and we were
trying to visualize those iconic panels from the graphic novel and get
that right."
"Superman, I think, is
different," he says. "Yeah there are some iconic things in terms of
poses when he flies and things like that, but we wanted to make a really
good story about that because there hasn’t been one in a long time."
Following the lead of David Goyer and Christopher Nolan’s darker story
gave the filmmakers the opportunity to create a more grounded,
documentary-style Superman film with its own visual language — adding a
new take to the character’s storied legacy.
A sequel to Man of Steel is already being developed, and while DesJardin is currently working on 300: Rise of an Empire,
it’s clear he has plenty of ideas for a second round with the
character. "We were itching to do things in this movie and I think that
for the most part we did them," he says. "We wanted to make the flight
stuff cool the way that we always wanted to see it as fans, and we
wanted Superman to punch and fight and scrap and be mad, and show what
the collateral damage is from gods that beat the hell out of each other
in a human world. And that was what was fun for us, again, as fans of
that kind of material."
"Is there stuff that we were itching to do beyond that? Oh, hell yeah," he laughs. "But we’ll just have to wait and see."
Source: http://www.theverge.com/2013/6/29/4474512/man-of-steel-visual-effects-supervisor-john-dj-desjardin-interview
The visual effects secrets of The Hobbit: An Unexpected Journey We talk to VFX supervisor Joe Letteri about bringing Gollum, Smaug and goblins to life for Peter Jackson's return to Middle-Earth
The Hobbit: An Unexpected Journey sees director Peter Jackson return to Middle-Earth, a decade on from shooting the Oscar-winning Lord of the Rings trilogy. This time around, he's brought with him an arsenal of new film-making technology to realise the dwarves, goblins and dragons that populate JRR Tolkien's fantasy land – 48fps High Frame Rate cameras, new performance capture technology and motion-controlled 3D camera rigs.
Using all this tech to conjure up the world of Middle-Earth is VFX supervisor Joe Letteri, whose resume spans everything from Jurassic Park to Avatar. Stuff sat down with him to discuss his return to Tolkien's world, how shooting in 3D presented its own Hobbit-specific challenges – and how he's bringing Smaug the dragon to life for the second instalment in the trilogy.
Ten years on from Lord of the Rings, there have been huge advances in visual effects technology. What was the highlight of returning to Middle-Earth armed with all this new kit?
We really wanted to cut loose and do things better than we could have done before – with Gollum, especially, we were looking forward to being able to do the motion capture on-set with Andy Serkis. That was our goal ever since we did The Lord of the Rings. Even before we knew that we were going to do motion capture as a technique, we thought, "Isn't that the ideal way to do it?" So it was great to finally be able to do that on The Hobbit with the new rig – and with the 48fps. That's the one area that creatively really helped us, because we can get much finer animation detail.
The Lord of the Rings was famous for its use of forced perspective shots – but that doesn't work when you're shooting in 3D. How did you get around that problem?
We actually had to shoot with two cameras instead of one – and we had to keep both cameras moving in sync. So one camera was the master and one was the slave; the slave camera would be scaled according to the perspective change required.
For example, when we're following Gandalf through Bag End, he's actually on a greenscreen stage that's separate from the Bag End stage where the dwarves are; everything is scaled by 30 per cent. Peter and the camera operators see a real-time composite in their monitors, so that they can follow the action and put the two together. They've got earpieces in so that Ian McKellen can hear all the dwarves and vice versa, and they can all hear Peter's direction.
Through Peter's direction and the dialogue, everyone's working off each other's cues and rehearsing it so that it all weaves together. We were actually able to shoot that in four takes – the rehearsals went really well – but it still took us a year to put that shot together afterwards!
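One simple way to think about the master/slave relationship Letteri describes: the slave camera copies the master's rotation but scales its translation, relative to a calibrated origin, by the ratio between the two set scales. The sketch below uses the 30 per cent figure from the interview; everything else is an assumption for illustration.

import numpy as np

# Minimal sketch of a scaled slave camera for dual-stage forced perspective:
# copy the master's rotation, scale its translation about a calibrated
# origin so the forced perspective holds up while both cameras move.
SET_SCALE = 0.3

def slave_camera(master_pos, master_rot, origin, scale=SET_SCALE):
    """Return the slave camera's position and rotation for one frame."""
    master_pos = np.asarray(master_pos, dtype=float)
    origin = np.asarray(origin, dtype=float)
    slave_pos = origin + (master_pos - origin) * scale   # scaled translation
    return slave_pos, master_rot                         # rotation is copied 1:1

# Master camera dollying 2 m to the right of its calibrated origin:
pos, rot = slave_camera(master_pos=[2.0, 1.6, 0.0],
                        master_rot=np.eye(3),
                        origin=[0.0, 1.6, 0.0])
print(pos)   # the slave moves 60 cm for the master's 2 m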
What about shooting in 48fps HFR – what challenges did that present?
It means twice as much work overall in terms of rendering! But creatively it didn't offer challenges so much as it offered opportunities, especially with animation. I think you really see that with Gollum, where you've got fast dialogue and fast, fleeting facial expressions – you can capture a lot more of the subtlety at 48fps than you can at 24fps.
Most performance capture work has been used to realise humanoid characters like Gollum and King Kong – but for the next film you'll be using performance capture to create the decidedly non-humanoid Smaug the dragon. How does that work, exactly?
What we're doing with a character like Smaug is to really interpret the gestures as opposed to looking for the literal performance. So it's halfway between an animated character and a performance capture.
Really, we could've done Smaug in the traditional way – just ask Benedict Cumberbatch to come into a voice booth and record his dialogue, and do everything entirely with keyframe animation. But when we record what Benedict's body is doing, it frees him up to give us some idea of the physicality and intimate the poses, so that what he's got on his mind can come through in how he's performing it – and we'll take that and extend it into what we do with the dragon.
Having returned to the world of Middle-Earth for The Hobbit, are you tempted to revisit the earlier films and tweak the stuff that doesn't match up, like Gollum in The Fellowship of the Ring?
I don't think so – it's not something that Peter's ever indicated that he was interested in doing, and for my mind I'm happy to just have the film be finished as a film. We were lucky to be able to come back and do Gollum now with what we know ten years later – we have the best of both worlds, we can do that in new scenes. But even if that hadn't been the case, I'm inclined to say a film exists in and of its time – and if you want to see something new, go and make a new film.
Source: http://www.stuff.tv/visual-effects-secrets-hobbit-unexpected-journey/news
Monster mayhem: Pacific Rim
It’s not hard to see why the prospect of ILM creating giant monsters versus giant robots for Guillermo del Toro’s Pacific Rim
was something many filmgoers were hungry to see. Perhaps aware of that
anticipation, and knowing the challenges that lay ahead in realizing the
enormous ‘Kaiju’ versus ‘Jaegers’ fights on screen, the studio retooled
many elements of its effects pipeline for the show, including
simulation, lighting and rendering.
“The battle scenes that we tackled are all quite complex in just the recipe that is required to make a shot,” notes senior ILM visual effects supervisor, and now chief creative officer, John Knoll, “so when we’re in the ocean, there’s the robot and the monster and it’s raining, with rain striking the creature, being flung off the creature and cascades of water. The character’s disturbing the ocean, the foam, the sub-surface bubbles and mist and there’s a cocktail of many, many items that all have to be balanced just right to get a shot done.”
We explore ILM’s work for Pacific Rim. The studio was aided in its efforts by Ghost FX, Hybride, Rodeo FX and Base FX. Plus we highlight practical and model work by Legacy Effects and 32TEN Studios, the main title and main on end title design by Imaginary Forces, and the stereo conversion by Stereo D. In addition, Mirada produced the film’s two-minute prologue made up of documentary-style footage and VFX.
Above: watch a featurette on the VFX work.
Giant freakin’ robots and killer Kaiju
Twenty-five storey tall robots brought with them, of course, several challenges. Animation, scale, weight, interaction, rendering – ILM had to solve all of these in order to convince audiences that the Jaegers could move in the dynamic way that they do in battle. “Some of the bigger challenges with these battles is that the accurate physics would dictate that everything would move slower,” says Knoll, adding that del Toro did not want the action to feel too slow. “He wanted to hypothesize that the mode of force behind these robots was strong enough that they could push through all that air drag, and that it’s OK to push it through a little faster.”
“There’s also then figuring out the scale issues,” adds ILM animation supervisor Hal Hickel, “figuring out how fast can we move. We didn’t want everything to feel like it was underwater or in super slow motion – typically you go very slow trying to sell that scale, but then all the fights are happening this fast. We couldn’t have that. We had to figure out how fast to go and still have it feel big.”
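Knoll's point about accurate physics has a standard back-of-the-envelope form: for motion dominated by gravity, dynamically similar action scales in time with the square root of the length scale. A quick check with rough numbers (the Jaeger height is assumed):

import math

# For motion dominated by gravity, dynamically similar action takes
# sqrt(length scale) longer. Rough numbers only; the Jaeger height is an
# assumption standing in for "25 storeys tall".
human_height = 1.8           # metres
jaeger_height = 80.0         # metres (assumed)

time_factor = math.sqrt(jaeger_height / human_height)
print(f"time scale factor: about {time_factor:.1f}x")
print(f"a 1 second human punch would 'really' take about {time_factor:.1f} seconds")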
“And then on top of all that was how to invest in a sense of it being a machine,” continues Hickel. “Because we didn’t want to use motion capture – we talked about it at the beginning of the project and motion capture’s a great tool for certain things, but for this project, particularly with the Jaegers, we just really wanted to make sure it wasn’t completely fluid or organic, that it felt robotic.”
Hickel had time to complete a few test animations for del Toro to establish the look of the colossal robots. “I just had an early version of Gipsy walking down the street with some attitude,” he says, “partly to play with attitude but also play with speed and what are cool looking shots to help sell size. So I had her walking over camera and then with a camera tracking perpendicular to the line of action and tracking along with buildings in the foreground.”
Pacific Rim at SIGGRAPH
One of the main production sessions at SIGGRAPH 2013 will certainly be Cancel the Apocalypse – Industrial Light & Magic Presents: The Visual Effects of Pacific Rim. It’s on Tuesday, July 23, 2013 from 9:00am to 10:30am. Presenters John Knoll and Hal Hickel are sure to show some impressive behind the scenes material from the film.
The animators worked in Maya and not only had to take note of the hugely complex moving parts of the Jaegers, but also had to be mindful of the ocean they might be standing waist-deep in, or the buildings they were ‘crunching’. Interestingly, occasionally their work would be completely masked by a huge splash of whitewater during the fight – something that was only apparent once simulations had been completed. Another issue was getting the characters to read at night. “We had this great idea of using the helicopters circling around all the time and you see through the shafts of light cutting through the white water and silhouetting the characters,” says Hickel.
As the battles rage in Pacific Rim, ILM also had to deal with progressive destruction on the Jaegers. “For Gipsy, we had versions A, B, C, D and each of those had a 1, 2, 3, 4,” explains co-visual effects supervisor Lindy DeQuattro. “I think we ended up with about 20 different versions of Gipsy. After each battle she would have new scuffs, dents and scratches, burns. Then she’d go in for repairs and get an arm replaced.”
Like the Jaegers, the Kaiju monsters came in several different forms. That meant that ILM also had to look at various reference for the creatures – from crabs to gorillas, reptiles, bats, dinosaurs and, of course, previous Kaiju and monster movies. The organic nature of the Kaiju and their incredibly large muscles meant that the studio turned to shapes for animation rather than anything automatic or built into the rigging. Flesh sims for wobble and muscle tensing were added, however. “We chose to do that,” says Hickel, “because the Kaiju were so radically different from one another – it didn’t seem economical for us to set up an overall muscle system for the show because it didn’t seem portable from one to the next. I thought, we don’t need a huge amount of muscle detail – we need large masses that do certain things at hero moments, so to me that said shapes.”
Much of these battles occur in water, either waist-deep to the creatures or completely submerged. For fluid simulations, ILM relied on its existing proprietary toolkit, but then also moved into Houdini for elements like cascading water from the characters and for particulates. “We use Pyro for some of that too, and Plume,” says DeQuattro. “We have a variety of tools that we can use depending on what we’re trying to achieve.”
For more on Arnold and the latest in other renderers, check out fxguide’s State of Rendering coverage – Part 1 and Part 2.
To help show the detail of the Jaegers and Kaiju, ILM elected to go with Arnold as its primary renderer for the robots and monsters (V-Ray was also used to help achieve environments and RenderMan remained the main renderer for water). Solid Angle founder and CEO Marcos Fajardo helped ILM with the Arnold transition. “Pacific Rim was an incredibly complex project to work on,” he says. “The amount of geometric detail, the amount of texture detail – some of the shaders have 2,000 nodes and it was really difficult to render those shots. We spent some time with ILM on site optimizing the render, seeing what the hot spots were and improving the renderer. As a result, all of our customers will benefit from this optimization. Things like texture threading – very important for Pacific Rim. It was just crazily complex and we just had to double our efforts to optimize rendering for that movie.”
Katana was another tool ILM more fully embraced after experimenting with it on Mission: Impossible – Ghost Protocol, where Arnold had also been used to some degree. “The greatest thing about Katana,” says DeQuattro, “is that we could really light multiple shots from one scene file, which is not something we were previously doing. It made it easy for say a sequence supervisor to really light just about every shot in the sequence. We had some scenes where the sequence supervisor lit all the shots out of one scene file.”
Destroying cities
If giant robots and monsters were not enough, then recreating cities – and destroying them – became additional challenges. For Hong Kong, which bears the brunt of a fight mid-town and in its docks area, ILM scouted the city and shot moving footage and stills. “We selected a lot of streets that were the basis for where the fight was going to travel along,” explains DeQuattro. “And we had to widen the street because the creatures are so big they don’t actually fit between the high rises!”
There were then two approaches to modeling CG buildings and architecture. If the buildings were not going to be damaged, then ILM called on its digital matte group. But if destruction work was required as, say, Gipsy Danger and a Kaiju rampage through Hong Kong, then the studio entrusted its ‘DMS’ pipeline – model, paint, simulation, destruction.
For the first time, ILM relied on Houdini for destruction effects rather than its traditional internal tools. “In Houdini you want to approach things as procedurally as possible,” says DeQuattro, “so you make definitions of how a pane of glass will break, how a piece of metal will break, how a piece of cement will break, and then name things appropriately so they will be dealt with accordingly inside of Houdini. We had to think about things from the beginning as we were modeling them. How do we want this to break? What material is this made out of? And make sure it was put together correctly.”
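The naming convention DeQuattro describes boils down to a lookup from a material keyword embedded in an asset name to a set of fracture settings. A toy Python version, with invented presets:

# Asset pieces are named for their material, and the destruction setup
# looks those names up to decide how each piece should break. The preset
# values here are invented for illustration.
FRACTURE_PRESETS = {
    "glass":    {"pieces": 400, "cluster": False, "bend": 0.0},
    "metal":    {"pieces": 12,  "cluster": True,  "bend": 0.8},
    "concrete": {"pieces": 150, "cluster": True,  "bend": 0.1},
}

def fracture_settings(piece_name):
    """Pick fracture settings from the material keyword embedded in the name."""
    for material, preset in FRACTURE_PRESETS.items():
        if material in piece_name.lower():
            return material, preset
    raise ValueError(f"no material keyword found in {piece_name!r}")

for name in ["tower_A_glass_curtainwall", "tower_A_metal_mullions", "street_concrete_slab"]:
    print(name, "->", fracture_settings(name))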
Deep compositing in Nuke, another tech adopted by ILM on a larger scale for this show, became crucial in working with the Jaeger/Kaiju fights and the resulting destruction. “It helps solve a problem that’s always existed with volumetrics and motion blur holdouts,” notes Knoll. “I’m really familiar in previous shows of having to do two renders – a hold-out render and a non-hold out render. You always end up with funny edges around your object that you have to split in the non hold out version. There’s a lot of manual work – time is money. The deep compositing gets you past a lot of that. It’s not free. You pay the price in disk space.”
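The benefit Knoll describes comes from keeping per-pixel samples at many depths, so separately rendered elements can be merged by depth later instead of being re-rendered with holdouts. A toy deep merge with invented samples:

# Each element keeps per-pixel samples at many depths, so two renders can
# be merged by depth after the fact. Samples are (depth, (r, g, b), alpha);
# all values are invented.
volumetric = [(4.0, (0.2, 0.2, 0.2), 0.3), (6.0, (0.2, 0.2, 0.2), 0.3)]
jaeger     = [(5.0, (0.8, 0.6, 0.1), 1.0)]

def deep_merge(*sample_lists):
    """Composite deep samples from any number of images, nearest first."""
    samples = sorted(s for lst in sample_lists for s in lst)
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for _, rgb, a in samples:
        weight = (1.0 - alpha) * a                 # standard "over" accumulation
        color = [c + weight * s for c, s in zip(color, rgb)]
        alpha += weight
    return color, alpha

print(deep_merge(volumetric, jaeger))

Here the fog sample behind the opaque Jaeger contributes nothing while the one in front does, which is exactly the hold-out behaviour that would otherwise need two renders.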
Re-creating locations
While many sequences wound up being completely computer generated, a major effort involved the collection of on-set camera data, photo reference and other reference for live-action sequences requiring visual effects and to inform the CG creations. The ILM team oversaw LIDAR set scans, HDRi capture and detailed photo modelling of props and set pieces. “We’d do two different levels of this,” says DeQuattro, “a high angle and low angle and then we’ll go around in a circle and take say eight photos from different angles. It allows us to do a rough reconstruction of a model that we can use for layout purposes.”
“We had these large scale huge soundstages full of greenscreen and the characters and then had to turn it into the Shatterdome,” says co-visual effects supervisor Eddie Pasquarello, who notes the space was akin to where they might build the space shuttle – but this time for multiple 25 storey-tall Jaegers.
Production built partial sets on the greenscreen Toronto stage which were then extended by the visual effects team at Base FX in China with ILM assets. Oftentimes, artists would also replace the existing floor to deal with reflections and make them look more soaked. For shots where actors would have to be looking or interacting with a computer generated effect, ILM opted for a relatively low-tech solution. “We would take a laser pointer and say ‘Look here!’, says DeQuattro. “We knew approximately how tall the Jaegers were so with some quick calculations on set we could figure out the right angle.”
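The "quick calculations" for an eyeline are just trigonometry once you assume a height and a distance for the off-screen Jaeger:

import math

# Eyeline angle for the laser pointer: how far above horizontal the actor
# should look. Numbers are illustrative assumptions.
jaeger_head_height = 75.0    # metres, assumed
actor_eye_height = 1.7       # metres
distance = 120.0             # metres from actor to Jaeger, assumed

angle = math.degrees(math.atan2(jaeger_head_height - actor_eye_height, distance))
print(f"aim the laser pointer about {angle:.0f} degrees above horizontal")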
However, occasionally an augmented reality tool on an iPad was also employed to help nail down shots of wide environments, such as the mammoth Shatterdome. “We used rendered panoramas of what the virtual environment looks like from the center of the stage where we were,” says Knoll. “Then we had a gyro enable panorama viewer that we would just have on my phone or on my iPad.”
ILM also took from the on-set filming camera data from the RED EPICs and all the preferred lenses and shooting style of del Toro and DOP Guillermo Navarro, and then would often replicate these in CG moves. “Guillermo del Toro has certain camera moves he loves such as a PIJU – a push in and jib up – which is his favorite camera move,” relays DeQuattro. “He is a big fan of indicating the camera operator – he really likes the sense that you’re trying to find the subject, you’re changing focus during the shot, that you’re doing something that indicates there’s a man behind the camera. So we do that with the digital camera so they have the same feel.”
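As a rough illustration of building such a move digitally, the sketch below generates a push-in-and-jib-up path with a little smoothed noise layered on so it reads as operated. It is a generic example with invented values, not ILM's camera setup.

import numpy as np

# Generate a "PIJU" (push in and jib up) camera path with low-frequency
# wobble so the move feels operated rather than perfectly mechanical.
def piju_path(frames=120, push=6.0, jib=2.5, noise_scale=0.02, seed=7):
    t = np.linspace(0.0, 1.0, frames)
    ease = 3 * t**2 - 2 * t**3                      # smoothstep ease in/out
    path = np.stack([np.zeros(frames),              # x: no lateral travel
                     jib * ease,                    # y: jib up
                     -push * ease], axis=1)         # z: push in toward subject
    rng = np.random.default_rng(seed)
    wobble = rng.normal(scale=noise_scale, size=(frames, 3))
    # Smooth the noise so it feels like a human operator, not jitter.
    kernel = np.ones(9) / 9.0
    wobble = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, wobble)
    return path + wobble

cam = piju_path()
print(cam[0], cam[-1])    # starts near the origin, ends pushed in and jibbed up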
How to operate a Jaeger
Inside the Jaeger heads – known as Conn-Pods – two pilots control the robot via ‘drifting’ in which their minds are locked in a neural bridge. They can then operate the Jaeger by physically performing the required actions while connected to each other and the robot itself via mechanical ‘stilts’. Production filmed the pilot actors on a gimbal-operated set that was partially fitted out with interiors, cables et cetera but also had interactive water, sparks and movement filmed in live action. Legacy Effects was responsible for the practical Conn-Pod machinery of pull levers, springs and metal parts. The studio also fashioned pilot suits and helmets. Extra moving machinery and holograms would be added in via visual effects.
Rodeo FX, in cooperation with ILM, handled the Conn-Pod interiors that combined the practical work with CG additions. Also in collaboration with ILM, Hybride created the holographic projections for the Jaeger cockpits and hand disk and arm grid holograms around the pilot suits. For the film, Rodeo also delivered an alien brain – the studio’s first organic creature – as well as matte painting and rain sim work for environments of Hong Kong including the helipad and Shatterdome. And Hybride crafted holographic projections inside control rooms (Loccents) and on various monitors throughout the film.
Above: see how Legacy Effects built the Conn-Pod mechanics for use on set.
Practically speaking
32TEN Studios, working with ILM, contributed several practical effects too. For one sequence in which the fist of a Jaeger comes through an office building, 32TEN dressed an area with office cubicles at quarter scale, shooting with RED EPICs on a 3D rig.
The studio also produced shots of soccer stadium seats being ripped apart when a Jaeger crashes into a stadium. For this, quarter scale seats were blown apart using air cannons. 32TEN’s work extended also to the creation of dust clouds, breaking glass and water effects used by ILM to comp into the film’s massive destruction scenes.
Battles in stereo
Stereo D converted over 1900 shots in Pacific Rim to 3D, oftentimes relying on multiple elements from ILM to make up the shots. ILM also completed several sequences in full stereo itself. John Knoll notes that the style of the film allows for some natural stereo moments to occur, such as “debris from an impact or as the characters are fighting each other,” but that the style of the battles with ‘human type’ coverage meant that foreground objects were often needed to show dynamic range. However, he says, when the characters are in the ocean, the artists could engineer some of the water coming off them towards the camera to make better stereo moments.
An epic end
Pacific Rim concludes with a rousing main-on-end titles sequence created by Imaginary Forces featuring macro shots of Jaegers and Kaiju in action as well as iconography from the film. Creative director Miguel Lee oversaw the work of a small team that included designer Ryan Summers. We talked to Summers about the main-on-ends and the main titles also produced by Imaginary Forces.
Above: watch the main-on-end titles by Imaginary Forces.
“The main principle was to focus on the Jaegers and Kaiju in a way that allowed the audience to take a breath and marvel at them in a way we couldn’t during the breathtaking, action-packed fights,” says Summers. “After 2 plus hours, the audience should be exhausted and need a breather.”
IF realized several style frames ranging from schematics to anime inspired designs to full particle-based procedural solutions. “In my opinion, the genius behind Miguel’s approach is that he always designs to the brief in terms of limitations on time or budget, but always finds a way to reach beyond those limitations creatively,” adds Summers.
CINEMA 4D was the tool of choice to create the main-on-ends, outputting 30 title cards at 2K over a production time of six weeks (this was done by two full time artists, an intern and a Flame artist). “We carved out a couple days at the very beginning trying to optimize one of the scenes we used for our style frames,” says Summers. “We made a rule that no frame could take more than 5 minutes on our render farm in order to make sure we could deliver the flat version on time and save ourselves a couple weeks to figure out how to deliver a stereoscopic job. We hadn’t done a stereo delivery with our little team up until now.”
“Thankfully we were able to build out some render presets that let us get really close to that rule, as well as a super fast and dirty preset that let us test our ‘lighting’,” adds Summers. This was almost entirely based on just reflections, as Summers explains. “We ended up turning all of C4D’s whiz-bang features off: no GI or AO, no motion blur or depth of field. We aimed to address as much as possible in post, at both the After Effects stage and with some of the great tools Flame has to offer. We took the rare tactic of folding our in-house Flame artist into the conversation from the pitch stage, and the returns were tenfold. We ended up rendering with slightly lower anti-aliasing settings than I’m normally comfortable with and sent out tons of object buffer passes to drive frame-averaging and various blurs in the Flame to save time on our renders. A well-prepped Flame artist can work small miracles!”
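The arithmetic behind that five-minute rule is simple to check. Card lengths and farm size below are assumptions; the cap and the 30 cards are from the interview:

# Rough budget check for the flat (2D) delivery of the main-on-ends.
cards = 30
seconds_per_card = 8          # assumed average card length
fps = 24
minutes_per_frame = 5         # the self-imposed cap
farm_cores = 40               # assumed concurrent render slots

frames = cards * seconds_per_card * fps
render_hours = frames * minutes_per_frame / 60 / farm_cores
print(f"{frames} frames -> about {render_hours:.0f} wall-clock hours on the farm")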
To create the robots and monsters, Imaginary Forces received models direct from ILM to work with in C4D. “We started with their previs models, heavily leveraging C4D’s Motion Camera and Camera Morph tags to rapidly previs out ideas,” says Summers. “Each artist shot tons of ‘coverage’ for each Kaiju and Jaeger, handing off a bunch of QTs to our in-house editor. Tons of fun was had matching the appropriate title card to the perfect mecha or monster. We were particularly proud to get Hal Hickel and John Knoll’s credits in our San Francisco scene.”
For stereo, IF continued to make tweaks right through the C4D – After Effects – Flame workflow. “We made a ton of late night runs between our interlaced stereo monitor in our Flame suite and our anaglyph setups in the C4D viewport,” recalls Summers. “There were many nights where we were running around with 3 pairs of glasses stacked on our foreheads! We got lucky that we hammered out a solid game plan early on: all Kaiju/Jaegers would take place in aquarium space, and all of our onscreen titles would live slightly past the zero parallax plane into audience space. We ended up making changes to many of the flat shots for stereo, as some of our heavy macroscopic frames were difficult to read with such intense DoF.”
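The "aquarium space" versus "audience space" plan maps onto the sign of screen parallax. A simplified parallel-rig model (illustrative values only, not Stereo D's or ILM's settings):

# Objects beyond the convergence distance sit behind the screen ("aquarium
# space"); objects closer than it come out toward the audience.
def screen_parallax(depth, interaxial=0.06, focal=0.035, convergence=10.0):
    """Signed parallax (in sensor metres): + behind the screen, - in front."""
    return focal * interaxial * (1.0 / convergence - 1.0 / depth)

for label, depth in [("Kaiju in the bay", 60.0),
                     ("screen plane", 10.0),
                     ("title card in audience space", 7.0)]:
    p = screen_parallax(depth)
    side = "behind screen" if p > 0 else ("on screen" if p == 0 else "in front of screen")
    print(f"{label:32s} depth {depth:5.1f} m -> parallax {p*1000:+.2f} mm ({side})")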
Pacific Rim’s ‘war room’ main titles – designed and animated by Miguel Lee – turned out to be a late decision by the director after he had seen IF’s pitches for the main-on-ends. “After he saw the frames that eventually ended up as the main-on-end titles – which he lovingly referred to as the ‘Sexy Robots’ look, by the way – he started flipping through the rest of the book really quickly,” outlines Summers. “All the way up until he saw the war room images.”
Above: watch the main titles.
Working with Guillermo del Toro was probably one of the most exciting parts of the project for the IF team. “He knew precisely what he wanted, but always looked to the room for new ideas,” says Summers. “On our final delivery day for our 2D delivery, he stopped by while we were checking out colors in the booth. After watching the titles through for the first time, he actually asked if he could have permission to make notes. I’ll never forget that. His two notes were spot on, and we happily made the changes for the better. The guy has razor sharp eyes.”
All images and clips copyright 2013 Warner Bros. Pictures.