Friday, December 14, 2007

Linear Workflow Introduction

This is the first post in what may end up being a series. There's a lot to cover on the topic, but not so much that I can't stuff it into a blog. Let's get started.

You probably already know the idea behind a linear workflow, but for the sake of those who do not, here is a very brief explanation:

A linear workflow exploits the fact that your renderer works internally in floating-point (linear) space. It generates data that 8-bit output clips away when the image is gamma-encoded for monitor display. What a shame, because that data is very useful in your post-production compositing and color adjustments.
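To make the clipping concrete, here is a minimal Python sketch. The simple 2.2 power curve stands in for real display encoding (actual sRGB has a small linear toe near black), and the pixel values are made up for illustration:

```python
def gamma_encode(linear, gamma=2.2):
    """Encode a linear-light value for display (simple power curve)."""
    return linear ** (1.0 / gamma)

def to_8bit(value):
    """Quantize a display value to 8 bits, clipping anything above 1.0."""
    return min(255, max(0, round(value * 255)))

# A bright highlight the renderer computed in linear float:
highlight = 3.5                   # well above "paper white" (1.0)

# In float, the value survives and can be re-exposed later:
stopped_down = highlight * 0.25   # a -2 stop adjust keeps the detail

# In 8-bit, it clips to pure white and the detail is gone for good:
print(to_8bit(gamma_encode(highlight)))   # 255 - clipped
print(stopped_down)                       # 0.875 - still usable data
```

The point isn't the exact curve; it's that once a value has been quantized to 255, no amount of grading in post can bring the highlight detail back.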

If you are not familiar with the differences between 8-bit and 32-bit (float) images, or with the concept of gamma encoding for display, you may want to study a bit before progressing. There are some great books out there to get you up to speed. The HDRI Handbook is very well written, and I would highly recommend it. Online, there are a ton of sites that discuss HDRI, which, in computer graphics, was the beginning of the linear workflow concept. Learn about HDRI, and you'll have a much better grasp of the process.

You may wonder what the big deal is, since you've been rendering wonderful images in 8-bit for years. The big deal, I suppose, comes down to a few major issues:

1. Physical accuracy
You can be as physically accurate as you want (or have the patience to deal with), all the way to real-world candelas.
2. Realistic lighting
This is a lengthy issue, but lights in CG have traditionally been "cheated" via a linear falloff, or no falloff at all, because the linear response of lights is not properly gamma-adjusted in normal workflows. This goes all the way back to how Phong and Blinn shading work as estimations of real lighting response. We can and should evolve beyond that, with realistic materials that respond to light properly. This is the reason there are so many new MR (architectural) shaders: they are built to respond correctly in a linear workflow.
3. Greater adjustability in post
32-bit or 16-bit float can handle huge color and exposure adjustments. Given a true float compositing environment like Fusion, most filters you use will respond more realistically. Motion blurs and glows, for example, will behave in a more natural, photographic way.
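Points 2 and 3 above can be sketched in a few lines of Python. This is just an illustration of the math, not anything a particular renderer ships with; the intensity and distances are arbitrary:

```python
def inverse_square(intensity, distance):
    """Physically based falloff: energy spreads over a sphere's surface."""
    return intensity / distance ** 2

def linear_falloff(intensity, distance):
    """The traditional CG cheat."""
    return intensity / distance

# Doubling the distance quarters a physical light's contribution,
# but only halves the cheated one:
for d in (1.0, 2.0, 4.0):
    print(d, inverse_square(100.0, d), linear_falloff(100.0, d))
# 1.0 -> 100.0 vs 100.0
# 2.0 ->  25.0 vs  50.0
# 4.0 ->  6.25 vs  25.0

# Point 3 in float terms: a +2 stop exposure push is just a multiply by
# 2**2 in linear space, and values above 1.0 keep their relative detail
# instead of flattening to white the way clipped 8-bit data would.
pushed = 0.6 * 2 ** 2   # still meaningful data in float
```

The inverse-square light looks much dimmer far from the source, which is exactly why it was historically cheated; in a proper linear workflow the gamma-correct display brings those values back into a believable range.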

Others have posted on the process long before this, but I have found specific information out there lacking, except in books, so I'd like to discuss it a bit further. One great post started as a V-Ray-specific tutorial and has since expanded slightly to mention Maya/MR and others. Thanks to Robert Nederhorst for this link. It also discusses gamma, so if you're not familiar enough with that Greek letter, read this.

Sunday, December 2, 2007

motrMatteShadows Beta

So Jeff finished an internal tool for MR in Maya. It assists with using the Matte Shadow production shader in Maya 2008. To be specific, this tool automates the process outlined in Zap's Production Shader Examples. Read Zap's post to get familiar with the process. He basically outlines two modes of use. First, he renders the BG plate in the shot, which we can describe as a lookDev mode. Next, some connection switching allows for a final renderPass mode, where the BG plate is omitted, in order to generate output for later compositing.

The script below is a tool that generates and connects the required nodes automatically (setup), then easily switches between the lookDev and renderPass modes on the fly. It's not fancy, but it does the job. There isn't a lot of error-checking, either; there is some, but if you delete the nodes it creates, the toggle will stop working. Hey, it's an early version ;)

Remember, you will first need some things:
Maya 2008, and you should unhide the production shaders via this cgtalk post; otherwise, you may get missing-node errors. Once you've got them unhidden, this script becomes useful. To use it, place the script into an active scripts directory on your system and type "motrMatteShadows 1;" in the command line. Mode 1 is for setup and lookDev. For the renderPass mode, type the command again with a "2". I suggest creating two buttons on your shelf, one for each mode.

The first time you run it (in mode 1), it will create all the shading and camera nodes needed. Yes, it creates a new camera. Go ahead and graph it, and you will see the new connections. It also creates a matteShadow material. You should graph that in Hypershade as well (and its ShadingGroup).

While this early script creates a bunch of nodes for you, and toggles between the two setups, it doesn't fill in all the blanks yet. We will try to work on that a bit more later. For now, you'll need to do the following steps yourself. Don't worry, they're easy:

1. After graphing the camera in Hypershade, you'll see two blank texture nodes. One pipes into mirrorball, and the other into cameramap. Place the mirrorball image into the mirrorball texture, and your background plate into the cameramap texture. Since the same texture nodes are also piped into your matteShadow material, this is really all you have to do.
2. You'll probably want to see the background plate in your viewport, so for now you'll have to make your own imagePlane. Use the same file you used for the cameramap texture. When rendering, you should set the alphaGain of the imagePlane to "0".

Set up your scene elements and do some test renders in mode 1. When you are happy with the overall look, switch the script to mode 2 for rendering. You can toggle back and forth at any time to make adjustments with the BG plate visible in the Render View.

There is still a good deal to work on with this script. I'd like to start it with a UI that asks right away for your mirrorball and bgPlate images. It should also have an option to auto-build your image plane. Perhaps in a bit. For now, I hope this is useful to someone, even in this rough state.

motrMatteShadows ver 0.3
(right click, save target as)