3D Workflow for Designers & Art Directors: Shading Networks

In the world of CG and VFX, one stage of production is called “look dev,” or look development. The primary goal of look development is to ensure that both the lighting and the materials in the 3D scene match real-world lighting and materials, especially if CG elements are to be composited with live-action elements in a shot. If the lighting and materials don’t look the same, you’ll never be able to sell the shot.

This means creating “photometric” lights (more on that in the “Lighting” article) and ensuring that the materials in the scene react to these lights just as their real-world counterparts would. The material that gets applied to a 3D object is called a “shader” or a “shader network”, depending on the complexity of the material.

WHAT ARE SHADERS?

Shaders are often referred to as “materials” because that’s what they represent: a real-world material like glass, steel, rubber, skin, etc. The thing to remember is that a shader is not, in and of itself, a material; to build a material, a shader must be created and its attributes set to reflect the appropriate material. A single type of shader can be set up to represent a variety of different materials.

The image below shows several types of materials applied to a series of spheres. Each of the materials, except the “Auto Paint,” uses the same type of shader.

[Image: materials applied to a series of spheres]

Front Row
Left: Rubber / Middle: Chrome / Right: Glass
Back Row
Left: Brushed Steel / Middle: Wood Planks / Right: Auto Paint

The shader creates the look of various materials by controlling a variety of properties, such as:

  • Diffusion
  • Color
  • Roughness
  • Reflectivity
  • Transparency
  • Translucency
  • Specular Highlights
  • Bump
  • Displacement

These are just a few, and each is controlled by a set of attributes. For example, the reflection property might be controlled by the following attributes:

  • Reflectivity
  • Reflectivity Color
  • Reflection Blurring
  • Sampling
  • Index of Refraction
  • BRDF (Bidirectional Reflectance Distribution Function)
  • Fresnel Effect
  • Anisotropy

An attribute can be set by entering a value, by mapping a texture, or by controlling it with some type of node. More about nodes in the “Shader Networks” section below.
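
For anyone curious what that looks like in practice, here’s a minimal sketch using Maya’s Python commands, showing all three approaches on a standard Blinn; the texture file name is just a placeholder:

```python
import maya.cmds as cmds

# A standard Blinn shader to experiment on.
shader = cmds.shadingNode('blinn', asShader=True)

# 1) Set an attribute by entering a value directly.
cmds.setAttr(shader + '.reflectivity', 0.35)

# 2) Map a texture: a file node's output drives the color attribute.
tex = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(tex + '.fileTextureName', 'rust_diffuse.jpg', type='string')  # placeholder file
cmds.connectAttr(tex + '.outColor', shader + '.color')

# 3) Control an attribute with another node: here a ramp drives transparency.
ramp = cmds.shadingNode('ramp', asTexture=True)
cmds.connectAttr(ramp + '.outColor', shader + '.transparency')
```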

TYPES OF SHADERS

Software Shaders

Most 3D applications come with their own set of default shaders, which work well with their default rendering engine. These default shaders usually include the old standards, shaders dating back to the beginning of 3D rendering:

  • Lambert (for materials with a flat surface and no reflections; like rubber, fabric, flat paint, etc.)
  • Blinn (for materials that are shiny and reflective)
  • Phong (similar to Blinn, but handles specular highlights and reflections differently)
  • Anisotropic (this material stretches the specular highlights, like on stainless steel)
  • Surface (light has no effect on this shader whatsoever, so the surface gets no modeling from light and renders as flat color)

In the image below, these materials have been applied to a series of spheres:

[Image: the default shaders applied to a series of spheres]

Front Row
Left: Lambert / Middle: Blinn / Right: Phong
Back Row
Left: Anisotropic / Right: Surface
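
For reference, creating one of these default shaders and assigning it to an object only takes a few lines in Maya’s Python commands; this is just a sketch, and the attribute values are illustrative, not a recipe:

```python
import maya.cmds as cmds

# Create one of each of the classic shader types.
lambert = cmds.shadingNode('lambert', asShader=True)
blinn   = cmds.shadingNode('blinn', asShader=True)
phong   = cmds.shadingNode('phong', asShader=True)

# Build a test sphere and assign the Blinn to it.
sphere = cmds.polySphere(name='testSphere')[0]
cmds.select(sphere)
cmds.hyperShade(assign=blinn)

# Nudge the Blinn's attributes toward a chrome-like look.
cmds.setAttr(blinn + '.color', 0.1, 0.1, 0.1, type='double3')
cmds.setAttr(blinn + '.reflectivity', 0.9)
cmds.setAttr(blinn + '.eccentricity', 0.05)   # tight specular highlight
```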

Although these shaders still get used a lot, they all have an inherent problem, which is that they don’t handle light accurately, resulting in blown-out highlights and reflections. When used with the default rendering engine, it can be very difficult, if not impossible, to create photorealistic materials.

Proprietary Shaders

These days, most high-end 3D apps come bundled with a 3rd-party rendering engine. Most Autodesk products (Maya, 3ds Max, Softimage) come with MentalRay, a product of a German company called Mental Images. Other 3rd-party rendering engines include V-Ray and Maxwell Render. These renderers produce beautiful images and are often used in the movie industry.

Each of these renderers comes with its own proprietary shaders, which can’t even be read by any other renderer. These shaders are built to calculate light very accurately and will produce extremely realistic materials.

Since I use MentalRay exclusively, I’ll limit this discussion to those shaders. Some of the shaders that come with MentalRay are:

  • DGS Material (used for highly reflective materials)
  • Dielectric Materials (used for transparent and refractive materials, like glass & water)
  • Sub Surface Scatter Materials (for translucent materials like wax & jade)
  • Architectural & Design Material (technically known as “mia_material_x”; it can be used to build almost any type of material)

These are but a few of the many, many types of shaders that come with MentalRay. I tend to use the Architectural & Design Material almost exclusively, since it can be used for almost any material; flat, shiny, refractive—it handles them all superbly. I also use the Sub Surface Scatter materials, which handle translucency much better than the standard shaders do. Most of these shaders don’t look like much by themselves, but when used as the foundation for a complex shader network, amazing results can be achieved.
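
As a rough sketch of how one shader type covers very different materials, here’s what setting up the Architectural & Design Material as glass versus flat rubber might look like in Maya’s Python commands. It assumes the MentalRay plug-in is loaded, and the attribute names are my recollection of mia_material_x, so verify them against your version before relying on this:

```python
import maya.cmds as cmds

cmds.loadPlugin('Mayatomr', quiet=True)  # MentalRay for Maya must be available

# One Architectural & Design material set up as simple glass.
glass = cmds.shadingNode('mia_material_x', asShader=True)
cmds.setAttr(glass + '.diffuse_weight', 0.0)
cmds.setAttr(glass + '.reflectivity', 1.0)
cmds.setAttr(glass + '.refl_gloss', 1.0)      # mirror-sharp reflections
cmds.setAttr(glass + '.transparency', 1.0)    # fully transparent
cmds.setAttr(glass + '.refr_ior', 1.5)        # index of refraction of glass

# The same shader type set up as flat rubber: different values, nothing more.
rubber = cmds.shadingNode('mia_material_x', asShader=True)
cmds.setAttr(rubber + '.diffuse', 0.05, 0.05, 0.05, type='double3')
cmds.setAttr(rubber + '.reflectivity', 0.05)
cmds.setAttr(rubber + '.refl_gloss', 0.3)     # broad, blurry highlights
```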

SHADER NETWORKS

A single shader by itself often can’t achieve the desired result. This is when a number of “nodes” are connected together to create the final look. Many applications are node-based, such as Maya, Nuke and Shake (compositing software), and they all work much the same.

Nodes

What is a node? Good question, difficult to answer. A node is an element that stores data and can perform functions. For example, when an object, such as a sphere, is created in Maya, it is built from several nodes. With a sphere, there’s a creation node that contains the options available at creation: the sphere’s radius, how many subdivisions around the axis and how many subdivisions in height. There’s also a transform node, which records how the object is moved, rotated and scaled. There’s a shape node that stores the position data for each vertex, edge and polygon in the object’s shape.
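
You can see those nodes for yourself in Maya’s Script Editor; this is a minimal sketch, and the printed names assume Maya’s default naming:

```python
import maya.cmds as cmds

# polySphere() returns the transform node and the creation (history) node.
transform, creation = cmds.polySphere(radius=1, subdivisionsAxis=20,
                                      subdivisionsHeight=20)

# The shape node sits under the transform and stores the actual geometry.
shape = cmds.listRelatives(transform, shapes=True)[0]

print(transform, creation, shape)           # e.g. pSphere1 polySphere1 pSphereShape1
print(cmds.getAttr(creation + '.radius'))   # 1.0, still editable after creation
```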

Each node has a number of inputs and outputs. Data flows from the output of one node into the input of the next node, where it is altered by that node. For example, when an object is “instanced” (a type of copy that is always tied back to the original), all the instances have their own transform node, but share the creation node and shape node of the original. Therefore, each instance can be moved, rotated and scaled, but any changes to the original’s shape are reflected in all the instances. A single shape node sends data to all the transform nodes.
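
A quick way to prove that to yourself in Maya’s Python commands (a sketch, using default names):

```python
import maya.cmds as cmds

# Keep both the transform and the creation node of the original sphere.
original, creation = cmds.polySphere(name='origSphere')

# An instance gets its own transform node but shares the shape and creation nodes.
instance = cmds.instance(original)[0]
cmds.move(3, 0, 0, instance)            # only the instance's transform changes

# Editing the shared creation node reshapes the original AND the instance.
cmds.setAttr(creation + '.radius', 2)
```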

Some nodes perform functions, where the data that comes into the node through its input is transformed as a result of a calculation, and the new data flows from the output. For example, a Gamma Correct node can take RGB data and apply a gamma correction to it, then send the corrected values to the next node.
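
Conceptually, a function node is nothing more than a calculation applied to the data passing through it. Here is a plain-Python sketch of what a gamma-correct step does to an RGB value (the 1/gamma convention used here is the common one, but check your renderer’s documentation):

```python
def gamma_correct(rgb, gamma=2.2):
    """Apply a simple gamma correction to an (r, g, b) tuple of 0-1 floats."""
    return tuple(channel ** (1.0 / gamma) for channel in rgb)

# A mid-grey linear value brightens once corrected for display.
print(gamma_correct((0.2, 0.2, 0.2)))   # roughly (0.48, 0.48, 0.48)
```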

Shaders are nodes. Each of the attributes has an input and an output. A “File” node can be used to import a texture file (.psd, .jpg, .bmp, .tga, etc.) which can then be plugged into the “color” attribute, the “reflectivity” attribute or many others. A file node (connected to a grayscale image) can also be plugged into a “bump2d” node, which is then plugged into a shader’s “Bump” attribute.
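
Here’s what that bump chain looks like as a sketch in Maya’s Python commands; the texture file name is a placeholder, and under the hood the shader’s “Bump” slot is the normalCamera attribute:

```python
import maya.cmds as cmds

shader = cmds.shadingNode('blinn', asShader=True)

# A grayscale file node routed through a bump2d node into the shader's bump slot.
bump_tex = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(bump_tex + '.fileTextureName', 'planks_bump.jpg', type='string')  # placeholder

bump = cmds.shadingNode('bump2d', asUtility=True)
cmds.connectAttr(bump_tex + '.outAlpha', bump + '.bumpValue')
cmds.connectAttr(bump + '.outNormal', shader + '.normalCamera')  # the "Bump" attribute
cmds.setAttr(bump + '.bumpDepth', 0.3)                           # strength of the bump
```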

A Shader Network

The image below graphs a fairly complex shader network used to create a single material which simulates dusty glass. The grey boxes divide the nodes into functions; for example, the largest box controls how “bumpy” the surface of the glass is. Some nodes control color and transparency, some determine the texture of the dust, some determine which direction is “up” and send that data to a node which controls dust placement, and some nodes mix all the results together. The node with the yellow box in the upper left-hand corner is the shader node (which happens to be a Blinn; it’s a really old network) and is the node applied to the “bottle” object. It takes 45 nodes to simulate an old bottle of wine that’s collected a lot of dust. (This shader network was created by Jeremy Engleman of Gnomon Workshops.)

[Image: a complex shader network graphed out]

FOR THE DIRECTOR

As the person who is directing a 3D project, you have a couple of responsibilities where shaders are concerned. First, you need to collect information and resources about the materials: a list of all the types of materials used in the construction of whatever objects are in the scene, and ideally samples or photos of each, so that the 3D artist is familiar with the look you’re trying to achieve.

Second, you’re either the person who decides if the rendered materials look correct or the conduit through which the approval cycle flows. This can be a very iterative process, so you need a clear vision of what each type of material looks like. You also need to be able to communicate clearly what revisions need to be made. “It needs to look more metallic” is not enough.  Maybe it needs to be less reflective, maybe the reflections need to be blurred more, maybe the color of the reflections needs to be influenced more strongly by the color of the metal.

IN CONCLUSION

Getting an object to look “photorealistic” in a render is primarily dependent upon two things: an accurate model and realistic materials. Building shader networks is an art which requires a lot of very technical knowledge. Artists build entire careers specializing in the creation of accurate shader networks. A highly-skilled shader artist is one of the key components in any 3D project that is true to the director’s artistic vision.
