Life stories: the Glimpse Renderer – part 1

In my previous post, Life stories: From Art to Tech, part 2, you followed me through part of my journey: from small shops and small side-gigs to deploying big transformational projects at large studios. From a technical perspective, the biggest single-handed project was the creation of a production path-tracing renderer: Glimpse. Compared to my current work on RenderMan, which is curated by many hard-core rendering pros, Glimpse came together from two unlikely underdogs who pulled off a seemingly impossible task: creating a new renderer from scratch and swapping it in mid-production… what is also referred to as “rebuilding the rocket engines mid-flight”. This is what makes the project so special to me.

If you have seen any of the movies in “The Lego Movie” saga, or any of the movies Animal Logic has contributed to since, then you have seen pixels produced by the Glimpse renderer (of course I am referring to the shots and VFX the studio produced, since the typical VFX project is assembled from contributions by several shops).

There was a time when people were writing shaders. Yes, they opened a text editor and started typing. They opened a terminal, ran a command to compile the shader, ran another command to render a frame, and saw what the result looked like. Programmable shaders had been introduced by Pixar in the RenderMan Interface specification revision 3.0, published in 1988. Programmable shading in real-time graphics appeared around 2001. Don’t worry, I am not going to let this story begin at the epoch. I just wanted to mention that studios employed highly technical professionals such as “Shader Writers”, whose main job was to write and maintain shaders for production. These folks were very rare in the industry. You may have had a lighting department of 80 lighters and only 1 or 2 shader writers. Production renderers such as RenderMan didn’t provide any production-standard shaders as part of the package. RenderMan was a platform to build a pipeline upon. Studios were expected to write their own shaders.

Animal Logic chose the approach of shader code generators, a process of converting more user-friendly shading networks into RSL shader code. Code generators were a popular alternative to writing custom shaders by hand. There were two competing technologies at Animal Logic when I joined: one was MayaMan, transmogrifying Maya shading graphs into RSL; the other was a scripted shading composition system capable of shader layering, whose name I have unfortunately forgotten. This second system appeared cool at first, but it had lots and lots of stability issues resulting in manual labor, in particular around AOV support. This caused a volume of production inefficiencies in lighting.

Guess what… during production I came up with a better idea! After Happy Feet (circa 2007) I created a prototype of a shader layering system with automatic AOV management (among rendering engineers, who hasn’t?). Back then, at the studio, I was only known as a good lighter with a good eye for what looks right, plus some scripting skills. People didn’t know about my coding sprees. I pitched the project to R&D and, unsurprisingly, it was rejected: the previous system had been a large investment, and in a studio with a lot of artist churn it is inevitable to grow a bit numb at the drumbeat of new proposals from recently hired artists. Everyone comes in with a proposal for how certain things were done better at another studio… it’s normal, it’s part of the game. Not satisfied, I pitched the project to the surfacing artists, who really appreciated its potential. Sometimes you need to find allies who make noise for you.

The dev project got green-lit, and after some struggle with resourcing in R&D I was moved from the Lighting department to R&D to implement the system. About 6 months of work. I implemented most of the shading nodes, some of the math taken from the old system, and a brand new UI (I wish I had screenshots of those, they were pretty cool…). The shading system was named “Impasto”. Don’t look at me! Unfortunately I didn’t get to pick the name, despite it being my concept and my implementation… But hey, if I need to give up on something, let it be the name! The outcome of the project was far from perfect but it was a success. It fixed the problems it was designed to fix. At completion I went back to the lighting floor, to lead a team for the production of “Legend of the Guardians: The Owls of Ga’Hoole”… Gosh, somebody really has to do something about these unnecessarily long movie titles! Let’s call it “Guardians”, shall we?

Guardians was still early in pre-production, so given my newly proven skills as a shader writer, my hat got thrown back in the ring to write specialized shading models for eyes and feathers, along with a pile of other stuff. Be careful when you do a good job at something: certain things will stick to you.

It was a time when writing a shader was much more of a craft than a science. Mix two parts of dot products, one part of trig identities, some drops of color science, shake well, and you have a nice cocktail that may survive production. When something works visually, under a variety of lighting conditions, anything goes, even if the math is not grounded in anything. It was much different from nowadays. If you don’t know what I am talking about, check the SIGGRAPH 2013 PBR course notes: Everything You Always Wanted to Know About mia_material (*but you were too afraid to ask).
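
Just to give a flavor of that era, here is a small sketch (every name and number here is mine, purely illustrative, not from any production shader) of the kind of “anything goes” shading math: an ad-hoc specular and a fake Fresnel built from dot products and magic exponents, with no energy conservation in sight:

```python
def fake_shade(n, v, l, base_color, spec_power=40.0, fresnel_boost=2.0):
    """Old-school ad-hoc shading: dot products and magic exponents,
    no energy conservation, tuned purely by eye."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    ndl = max(0.0, dot(n, l))                      # Lambert term
    # Fake Fresnel: brighten grazing angles with an arbitrary power curve
    facing = max(0.0, dot(n, v))
    fresnel = fresnel_boost * (1.0 - facing) ** 3
    # Ad-hoc Phong-style highlight: reflect L about N, raise to a magic power
    r = tuple(2.0 * ndl * ni - li for ni, li in zip(n, l))
    spec = max(0.0, dot(r, v)) ** spec_power
    return tuple(c * ndl + spec + fresnel * 0.1 for c in base_color)
```

Tweak `spec_power` and `fresnel_boost` until it looks right under a few lighting setups: that was pretty much the whole methodology.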

Leading lighting on Guardians (2008-2010) was very rewarding. Pre-production and production for me continued for a good three years, but something that became apparent in that effort was how much we needed a system to re-apply look changes to shots. With so much manual labor it was tedious and laborious to keep look consistency shot-to-shot. So in my own time I started toying with the concept of a “Session Recorder”. Please stay with me: in this story I only mention the parts that are key contributors to Glimpse.

The Session Recorder was supposed to snapshot different states of the scene and keep re-playable deltas of what an artist did in Maya. At the click of a button they could go back and forth through an entire session of changes, or bounce between several variations of it; they could export the minimum generalized delta and reapply it to a different shot… Look files? Sounds familiar nowadays… The tech was based on live bidirectional communication between an in-memory database and the Maya dependency graph. I spent many months toying with it (at night) and got a decent part of the system to work. I pitched the idea, but it got rejected by the supervisors, deemed too risky. I cannot blame them, the idea was quite out there. Still, I learned a lot about fast live data extraction from the Maya DG, something that would turn out to be fundamental later on.
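
To make the idea concrete, here is a toy sketch of the delta-recording concept (the class and its API are illustrative, nothing to do with the actual system): every edit stores its old and new value, so the timeline can be stepped in either direction and collapsed into a minimal re-applyable delta:

```python
class SessionRecorder:
    """Toy sketch of the Session Recorder idea: capture attribute edits
    as replayable deltas. All names here are illustrative."""

    def __init__(self, scene):
        self.scene = scene          # dict: attribute path -> value
        self.deltas = []            # list of (path, old_value, new_value)
        self.cursor = 0             # position in the delta timeline

    def set(self, path, value):
        # Drop any redo tail, then record old/new so the edit is invertible.
        self.deltas = self.deltas[:self.cursor]
        self.deltas.append((path, self.scene.get(path), value))
        self.scene[path] = value
        self.cursor += 1

    def step_back(self):
        if self.cursor > 0:
            self.cursor -= 1
            path, old, _ = self.deltas[self.cursor]
            self.scene[path] = old

    def step_forward(self):
        if self.cursor < len(self.deltas):
            path, _, new = self.deltas[self.cursor]
            self.scene[path] = new
            self.cursor += 1

    def export_minimal(self):
        """Collapse the timeline into one final value per attribute:
        the 'minimum generalized delta' to reapply on another shot."""
        out = {}
        for path, _, new in self.deltas[:self.cursor]:
            out[path] = new
        return out
```

The real challenge, of course, was not the bookkeeping but keeping this mirror in live two-way sync with the Maya dependency graph.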

Time passed, the movie delivered, and the demand for quality evolved: you couldn’t get away anymore with faking reflections with reflection occlusion. The industry was brewing the idea that physically based shaders and raytracing at a large scale were becoming approachable. The math was not new; it was its supposed non-applicability that was being questioned. The Head of Surfacing and I were big proponents of that trajectory of innovation. I was tasked with supervising lighting for another long-run animated feature, but pre-production for me would start as a solo exploration of physically based shading to be deployed to production. Since I had created the previous shading system, naturally people looked at me to innovate it. This was 2011. So my second complete rewrite of the shading system began: PHX (read Phys-X).

RenderMan in early 2011 didn’t have a physically based shading system. Features for that were introduced later that year with R16. I already had most of my tech up and running by the time the planned new RenderMan features became known to us. After a thorough evaluation we decided that my tech was more complete and versatile, so we stuck with my implementation for production.

Engineering the PHX shading system was a deep dive into statistics and numerical methods, something I had only superficial knowledge of at the time. In a few months I had to go from zero to practitioner-level proficiency. If you think about it, it’s like passing a college exam every time; at least this is how I picture it in my mind, since I never had the opportunity to go to college and I will never know for sure. In my case I don’t even know what the subject of study is: I need to figure that out in order to solve the problem. A new subject, I need to study, and the technology I implement is the proof I passed the exam. There is no failure option, since production is literally counting the days for that tech to come online, and production ramp-up is scheduled: artists are hired months in advance to join at a staggered schedule. They need to be trained and productive by a certain day, they need to meet a shot quota by a certain day to meet strict deadlines, and the tech must be ready. In this journey you are on your own: there is no professor or advisor to guide you. We had meetings with the crew and TDs to decide how to finesse features, but for the most part I had to figure it out on my own.

PHX was controversial, but nevertheless a success. Lookdev and lighting became much simpler. The quality was up, by more than a couple of notches. Some shots looked great just by applying the master lighting. It wasn’t magic: physics and Monte Carlo made a huge difference to productions. Final-frame render time went up, but overall farm usage stayed level. We were rendering longer per frame, but we were nailing the look in many fewer revisions. Also, it was trivial to obtain cheap passes for preview renders: just render with fewer samples, that’s all. Productivity went up, and the overall cost of lighting on shows went down.

The PHX architecture was quite cool: shaders were constructed by layering lobes, while a unified sampling strategy was governed by the “Sampling Engine”. Only later did I learn there was a name for such a thing: the “integrator”. The sampling engine was a co-shader, capable of adaptive rendering by doing variance estimation at the REYES grid level while refining the sampling as needed.
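
As a rough illustration of what variance-driven adaptive sampling means (my reconstruction of the general technique, not PHX code): keep drawing samples until the estimated standard error of the running mean drops below a tolerance, so smooth regions stop early and noisy regions get refined:

```python
import random

def adaptive_estimate(f, tol=0.01, batch=64, max_samples=1 << 16, seed=0):
    """Estimate the integral of f over [0, 1] by Monte Carlo, adding
    batches of samples until the standard error of the running mean
    falls below tol (or the sample budget runs out)."""
    rng = random.Random(seed)
    n, total, total_sq = 0, 0.0, 0.0
    while n < max_samples:
        for _ in range(batch):
            x = f(rng.random())
            total += x
            total_sq += x * x
        n += batch
        mean = total / n
        var = max(0.0, total_sq / n - mean * mean)   # population variance
        if (var / n) ** 0.5 < tol:                   # std error of the mean
            break
    return mean, n
```

A constant `f` converges in one batch; a high-variance `f` keeps sampling. The same estimator with a looser tolerance is, in effect, the cheap preview render mentioned above.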

The new physically based lights had light filter stacks. Shadow calculation was provided in the form of light filters to pile on the stack; shadow maps and raytraced shadows could be easily combined by partitioning the scene across them. Shadow maps had new auto-tracking features, putting an end to the time-consuming manual placement and fitting shot lighters had to do up to that point. Light filters included rods and blockers; filters were additive or subtractive, so you could do basic boolean operations between shadow operators and create weird effects and cheats. This was all new: lighters at the studio had never had this much control over the look, achievable directly in render. Remember, this was 2011, relatively ahead of the curve. Rendering was slow though. And worst of all, the new shading system used an enormous amount of memory, due to the REYES shading architecture and the large arrays of samples to be stored per point in shading grids.
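
The filter stack concept can be sketched like this (the function names and the trivially simple filters are hypothetical, for illustration only): each filter in the ordered stack either scales the light or adds some of it back, so a subtractive blocker and an additive rod compose like a boolean operation:

```python
def apply_filter_stack(intensity, filters, point):
    """Run a light's intensity through an ordered stack of filters.
    Each filter is (mode, weight_fn): 'mult' scales the light down
    (subtractive, e.g. a blocker), 'add' contributes light back
    (additive, e.g. a rod cutting a hole in a blocker)."""
    result = intensity
    for mode, weight_fn in filters:
        w = weight_fn(point)
        if mode == "mult":
            result *= w
        elif mode == "add":
            result += intensity * w
    return max(0.0, result)

# Hypothetical stack: a blocker kills the light inside a slab,
# then a rod adds half of it back along a thin band.
inside_slab = lambda p: 0.0 if abs(p[0]) < 1.0 else 1.0
rod_boost = lambda p: 0.5 if abs(p[1]) < 0.2 else 0.0
stack = [("mult", inside_slab), ("add", rod_boost)]
```

Because the stack is ordered, swapping the two filters gives a different cheat: exactly the kind of direct, in-render control lighters had never had before.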

It was around that time that Intel released Embree as a tech demo, to demonstrate that raytracing could be fast on CPUs by using SSE instructions. You all probably remember the interactive rendering of the Imperial Crown of Austria. Embree was not a ray-tracing library back then; it was a polished tech demo with extreme educational value. I thought: I already had a rather sophisticated physically based shading system, for which I had figured out a lot of the math of a renderer. Imagine if I could combine the artist flexibility and production sophistication of the PHX system with a super fast ray-tracer like the one in Embree! Just imagine…

Embree was mind-blowing to me. It wasn’t the first demo of its kind, but it was the first one to make its source code available. I can learn a lot of things on my own, but coming up with a raytracer that efficient, starting from zero, is not a part-time job; it’s a career choice, like it had been for the amazing researchers behind Embree. Some people have dedicated their whole career to that, and I am just a human.

Once more I lifted the hood to take a peek inside. It wasn’t a Phong shader this time, it was a raytracer. I was supervising lighting, but we were still in pre-production… I had some energy to spare. I dismembered the Embree engine. I didn’t want the entire demo code, I just wanted to learn how to write a raytracer that efficient. If I was going to write an entire render engine, it would be the engine I always wanted to use:

  • It must be fully interactive.
  • I want to be able to modify geometry, shaders, lighting without ever restarting the render.
  • It must work similarly to a viewport. I want to start the renderer on a completely empty scene and build my scene from there, without having to think about the renderer running.
  • The engine must be simple to use. Only a handful of controls to govern its behavior: no more shadow maps, or special settings to shadow hair, or a plethora of cache sizes to tune.
  • The renderer must be FAST, no pixelated preview.
  • The renderer memory footprint must be very small and scale to insane complexity, because what is insane today will be just the production norm tomorrow.
  • The renderer must be fully deterministic.
  • The renderer will be based on a sampling strategy where everything is inexpensive at the individual sample while converging to the correct result.
  • The renderer will implement techniques for dimensionality reduction to replace searches with direct access to data.

This simplified list looks like a wish list, but the worst thing you can do when developing something new is to be unsure about the feature set. By that time I had been a 3D artist and a lighter for almost two decades, a prolific one. I had spent a long time learning render engines from the outside in, making educated guesses about the tech in order to optimize render time. I was pretty confident about what I wanted from a renderer, and I was getting confident I could pull it off.

I took the core of the Session Recorder. It was a scene translator after all, and it had an in-memory scene graph to buffer and mirror the scene data. I pruned the data model for the in-memory multi-snapshots, but I kept the bidirectional communication with the Maya DG. I took the Embree math library as-is; I thought it was excellent and I couldn’t have done any better. I started reading all the papers I could find about ray-tracing acceleration structures, to connect the dots between the theory and the practical implementation I was seeing in Embree. I said this already: to me it is not enough to learn how to do something, I need to learn deeply how that something really works. I need to internalize concepts so that I can re-formulate them the way they need to be, given the requirements and challenges ahead. Again it was from zero to expert level, this time on raytracing.

I started rewriting the raytracer. Embree 1.0 was a single-level BVH based on a “triangle soup”; it had a limit of 24 million triangles for the scene total. It was using a huge amount of memory for that little geometry capacity. But it was really fast! I worked on a multi-level BVH to support instancing and minimal object-level BVH rebuilds. I worked on motion blur, on visibility-pruned traversal, and on other techniques I didn’t read about but could come up with myself. I worked to obtain a much smaller memory footprint, without getting in the way of the speed. In 6 months I implemented a fair bit of it, months of night-time work to get first light. You can see an early video here, running on a 2007 dual-core MacBook Pro (hear that cooling fan spinning!!).
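
The multi-level BVH idea can be sketched as follows (a drastic simplification with made-up names, not Glimpse code): each object builds its acceleration structure once; instances share it through a transform; and a small top level over instance bounds is the only thing rebuilt when an instance moves:

```python
class ObjectBVH:
    """Stand-in for a per-object acceleration structure, built once and
    shared by every instance. Here reduced to an object-space AABB."""
    def __init__(self, points):
        xs, ys, zs = zip(*points)
        self.bounds = ((min(xs), min(ys), min(zs)),
                       (max(xs), max(ys), max(zs)))

class Instance:
    """A shared ObjectBVH placed in the world (translation-only for brevity;
    a real system stores a full transform and inverse-transforms rays)."""
    def __init__(self, obj, offset):
        self.obj = obj
        self.offset = offset

    def world_bounds(self):
        lo, hi = self.obj.bounds
        return (tuple(l + o for l, o in zip(lo, self.offset)),
                tuple(h + o for h, o in zip(hi, self.offset)))

class TwoLevelScene:
    """Top level over instance bounds; object data is never duplicated."""
    def __init__(self, instances):
        self.instances = instances
        self.rebuild_top_level()

    def rebuild_top_level(self):
        # Only this cheap step reruns when an instance moves:
        # the shared per-object structures stay untouched.
        self.top = [inst.world_bounds() for inst in self.instances]

    def move(self, i, offset):
        self.instances[i].offset = offset
        self.rebuild_top_level()
```

This is why instancing both cuts memory (one copy of the geometry, many placements) and makes edits cheap (moving an object never triggers a full scene rebuild).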

Imagine that running on a 24-core workstation. We started using the engine for sketching lighting in my department. It was nothing short of revolutionary at the time. The new Embree-inspired raytracer, underpinning an initial C++ port of the PHX engine, was really getting somewhere. I had feature parity of the lighting subsystem very early on, while the materials were not much more than a diffuse BSDF. My lighting crew loved it. Back then I asked if they could provide me some quotes, in case one day they’d turn out useful. Here are two, just for fun:

“Glimpse opened the door to interactively light the most complex scenes. From several high-res characters up to entire forests, Glimpse always maintained its interactivity. What used to be a time consuming process in the past became lighting in real time. With Glimpse lighting has never been more creative before. It’s simply amazing.” – Herbert Heische

“With Glimpse I was able to be given a creative note at 5:45pm, fire up Glimpse and align the light (and resultant shadow) quickly, review it with my team leader and have the frame running in stereo 1’s by 6:05. Without Glimpse that would have been a very long night.” – Mathew Machereth

This is how Glimpse came to be. Taking a “glimpse” at the final lighting is what we were doing, hence the name.

In part 2 of this mini-series of posts I will tell you what happened next: how Glimpse solved the production crisis for “The LEGO Movie”. There is likely going to be a part 3, from “The LEGO Movie” to a fully featured production renderer.
