Tool Development

Maya Voxelization Node (Command) / Python API

General interest in voxels has grown in the game and film industries over the past two years, driven by their distinctive look and feel and by advances in voxel rendering techniques. Major game studios are looking into voxels for realistic terrain generation and fully deformable surfaces, while major Hollywood special effects companies are exploring the use of voxels to realistically render hair, fur and grass.

The tool is mainly inspired by a technical talk on crowd rendering given by Blue Sky Studios. Their crowd models consist entirely of voxels, which makes a level-of-detail (LOD) system easy and fast to implement simply by adjusting voxel width and gap.

  I have implemented the features below:

  • Generate voxels based on the input mesh.
  • If the original mesh has a texture input, vertex colors are automatically created for the voxel geometry.
  • If the original mesh has a skin cluster, a new skin cluster is created for the voxel geometry, and all the weight and blend data are recalculated for the voxels.
Voxel width 0.1 vs. voxel width 1

I will keep working on this project to increase the voxel generation speed. Although I used a specialized algorithm for the voxel position search and an efficient mesh generation method, generation becomes very slow at small voxel widths (for example, a width of 0.1). As a next step, the thread-safe MMeshIntersector class, which uses an octree to find the closest point, could be used instead of MFnMesh::getClosestPoint to achieve better generation results.
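To sketch what that change might look like (a minimal sketch against Maya Python API 2.0, not the current implementation in the repository), the intersector would be built once per mesh and reused for every voxel-center query:

    # Sketch only: build one octree-backed MMeshIntersector per mesh and reuse it,
    # instead of calling MFnMesh.getClosestPoint for every voxel center.
    import maya.api.OpenMaya as om

    def build_intersector(mesh_name):
        """Create an intersector for the given mesh, set up for world-space queries."""
        sel = om.MSelectionList()
        sel.add(mesh_name)
        dag_path = sel.getDagPath(0)
        dag_path.extendToShape()  # make sure we point at the mesh shape, not the transform
        intersector = om.MMeshIntersector()
        intersector.create(dag_path.node(), dag_path.inclusiveMatrix())
        return intersector

    def distance_to_surface(intersector, center):
        """Distance from a voxel center to the closest point on the mesh surface."""
        point = om.MPoint(center)
        hit = intersector.getClosestPoint(point)  # returns an MPointOnMesh
        return point.distanceTo(om.MPoint(hit.point))

A voxel center could then be kept when its distance to the surface is within half the voxel width, and the same query could also provide the face index used to sample vertex color and skin weights.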

All my code can be found at: https://github.com/tenghaowang/Python_Deformer/blob/master/voxelizerCmd.py


Maya Pose Library / Python

Problem Solving:

After discussing with team members, I decided to develop a tool for animators to record poses and retarget poses or animation between different rigs. A retargeted animation gives animators a good starting point to work from and saves a lot of the time spent building from zero. Furthermore, it could be used for building our crowd by blending different animation clips. So far I have implemented the features below:

  • Works on any object at any frame, which means no specific rig layout or special naming is required.
  • Allows animators to organize poses by character and category.
  • Allows animators to rename, delete, import and export any pose from any character or category.
  • Allows animators to remove, add or replace specific control objects in a pose.
  • All poses are stored as structured XML data on a hidden node in the scene (see the sketch below).
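To make the storage idea concrete, here is a minimal sketch of how a pose could be captured from the selected controls and written to a string attribute on a hidden node; the node name poseLibrary_data, the attribute poseData and the XML layout are hypothetical, not the exact schema used by the tool:

    import xml.etree.ElementTree as ET
    import maya.cmds as cmds

    STORAGE_NODE = 'poseLibrary_data'  # hypothetical name for the hidden storage node

    def _storage_node():
        """Create (or reuse) a network node carrying a string attribute for the XML."""
        if not cmds.objExists(STORAGE_NODE):
            cmds.createNode('network', name=STORAGE_NODE)
            cmds.addAttr(STORAGE_NODE, longName='poseData', dataType='string')
        return STORAGE_NODE

    def save_pose(pose_name, character, category):
        """Record keyable attribute values of the selected controls as one <pose> element."""
        stored = cmds.getAttr(_storage_node() + '.poseData') or '<poses/>'
        root = ET.fromstring(stored)
        pose = ET.SubElement(root, 'pose', name=pose_name,
                             character=character, category=category)
        for ctrl in cmds.ls(selection=True) or []:
            ctrl_el = ET.SubElement(pose, 'control', name=ctrl)
            for attr in cmds.listAttr(ctrl, keyable=True) or []:
                try:
                    value = cmds.getAttr('{0}.{1}'.format(ctrl, attr))
                except RuntimeError:
                    continue  # skip attributes that cannot be read directly
                ET.SubElement(ctrl_el, 'attr', name=attr, value=str(value))
        cmds.setAttr(_storage_node() + '.poseData',
                     ET.tostring(root).decode('utf-8'), type='string')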

Problem Statement:

When I worked on the "Shattered" and "Racing" animation projects as technical director and rigger, I found it would be really helpful for animators to be able to record character poses. Thus I decided to develop a tool that could be:

  • Used by animators to pose several joints at the same time to create convincing emotions.
  • Used to easily reuse recorded expressions through the pose library.
  • Used for retargeting animation between different rigs, and even for mirroring a pose within the same rig.

In the following semester, I will keep working on the tool to implement more features including:

  • Adding a corresponding icon to each pose and allowing the icons to be rearranged freely.
  • Blending different poses based on input weights.
  • Retargeting animation between different rigs.
  • Mirroring a pose within the same rig.

All my code can be found at: https://github.com/tenghaowang/Maya-PoseSaver/blob/master/PoseSaver.py


PBR Texture Conversion Tool

Problem Solving:

After a series of discussions, we adopted the specular workflow for our PBR system. With this workflow, diffuse and reflectance are set directly with two explicit inputs, which is preferable for artists who have experience working with traditional shaders. Moreover, the specular workflow provides more control over reflectivity for insulators through a full-color input.

The tool was designed in Substance Designer 5.01 as shown below: the PBR conversion graph consists of two parts, a texture conversion block and a material editor block.

Problem Statement:

As the current company project progressed, the project team decided to redefine its art content standards. Physically based rendering (PBR) refers to the concept of using realistic shading/lighting models along with measured surface values to accurately represent real-world materials. Since some of the content had already been created for traditional shaders, it would be time consuming for artists to regenerate it, so a conversion tool was needed to convert content created for traditional shaders into PBR content.

For the gloss and specular maps, the tool splits the specular content: it moves all of the surface variation from the old specular map into a newly created gloss map and updates the base values to represent the microsurface structure of each material. With most of the texture variation moved from the specular map to the gloss map, the graph identifies what is and is not metal based on the specular and diffuse maps, even though we are not using the metalness workflow. This works because insulators tend to have uncolored reflectance values around 4% linear (54, 54, 54 sRGB), while pure metals have much higher reflectance values, generally in the 70-100% range.
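As a rough standalone illustration of that reasoning (in the tool this is done by Substance nodes), a metal mask can be derived from the old specular map by thresholding its reflectance in linear space; the NumPy arrays, the luminance average and the 0.3 threshold below are assumptions for the sketch, not the exact values used in the graph:

    import numpy as np

    def srgb_to_linear(c):
        """Convert sRGB values in [0, 1] to linear reflectance."""
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def metal_mask(old_specular_srgb, threshold=0.3):
        """Insulators sit near 4% linear reflectance (54, 54, 54 sRGB) while metals
        are far brighter (roughly 70-100%), so a mid-range threshold separates them."""
        linear = srgb_to_linear(old_specular_srgb)
        luminance = linear[..., :3].mean(axis=-1)  # average the RGB reflectance
        return (luminance > threshold).astype(np.float32)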

Traditional Content vs Specular Content

The main graph takes the traditional content (diffuse map, specular map and normal map) as input and outputs PBR-standard content automatically. The graph also exposes some parameters for artists to adjust to achieve better results.

For the albedo map, the main idea is to remove all of the baked lighting and gradient content from the diffuse map. The AO and cavity content is moved to separate textures (optionally), and the diffuse map is brightened to a more reasonable value.
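A simplified standalone sketch of that idea (the real work happens inside the Substance graph; the separate AO input and the brighten factor here are assumptions):

    import numpy as np

    def extract_albedo(old_diffuse, baked_ao, brighten=1.2):
        """Divide the baked occlusion out of the old diffuse map, then brighten."""
        ao = np.clip(baked_ao, 1e-3, 1.0)[..., np.newaxis]  # grayscale AO, avoid divide-by-zero
        albedo = old_diffuse / ao                           # remove the baked shading
        return np.clip(albedo * brighten, 0.0, 1.0)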

If there is no glossiness map input, the tool can still determine the metallic areas based on the difference in lighting information between the diffuse and specular maps, but the result is not always reliable.

The reel below demonstrates how the tool works in Substance Designer. One important thing to note: when the old content has a reasonable glossiness map, the conversion process works very well.

Moreover, the tool was designed around the specular workflow and it fits our company's needs. Ideally, you should create content directly for the target rendering system and only rely on this technique if you are updating old content or switching to a different system (for example, the metalness workflow).


Global Illumination (GI) Blender / Unity Script

Problem Solving:

The Unity tool allows artists to create a Light Probe Group based on the area defined by the selection, subdivide the space along the world X and Z axes, and manually set the height for each layer, so the probes are positioned in a regular 3D grid pattern. This setup is simple and effective; it is likely to consume a lot of memory, but it is a good starting point.
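The actual tool is a Unity editor script, but the placement math is simple enough to sketch on its own; the function and parameter names below are illustrative only:

    def grid_probe_positions(bounds_min, bounds_max, subdivisions_x, subdivisions_z, layer_heights):
        """Return probe positions on a regular X/Z grid, one layer per manually set height."""
        (min_x, _, min_z), (max_x, _, max_z) = bounds_min, bounds_max
        step_x = (max_x - min_x) / max(subdivisions_x - 1, 1)
        step_z = (max_z - min_z) / max(subdivisions_z - 1, 1)
        positions = []
        for y in layer_heights:
            for i in range(subdivisions_x):
                for k in range(subdivisions_z):
                    positions.append((min_x + i * step_x, y, min_z + k * step_z))
        return positions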

After the scene is baked, the lighting data is stored in each probe. The GI blender tool reads the 27 spherical harmonics (SH) coefficients from the baked probes and exports them as an XML file. It then blends the coefficients from different XML files and modifies the SH coefficients of the light probes object to achieve the desired GI color.
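A minimal sketch of the blend step on two exported files, assuming a hypothetical XML layout with one probe element per probe carrying attributes c0 through c26 (the real export schema may differ); each probe holds 27 SH coefficients, i.e. 9 coefficients per color channel, blended linearly:

    import xml.etree.ElementTree as ET

    def read_probe_coefficients(path):
        """Return a list of 27-float lists, one per baked probe."""
        root = ET.parse(path).getroot()
        return [[float(p.get('c%d' % i)) for i in range(27)]
                for p in root.findall('probe')]

    def blend_probe_sets(path_a, path_b, weight):
        """Linearly interpolate SH coefficients between two exported probe sets."""
        set_a = read_probe_coefficients(path_a)
        set_b = read_probe_coefficients(path_b)
        return [[(1.0 - weight) * a + weight * b for a, b in zip(pa, pb)]
                for pa, pb in zip(set_a, set_b)]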

The reel below shows how the tool works in Unity3D 5.01:

Problem Statement:

Although lightmapping adds greatly to the realism of a scene, it has the disadvantage that non-static objects are rendered less realistically and can look disconnected as a result. It is not possible to calculate lightmaps for moving objects in real time, but a similar effect can be achieved using light probes. Once a Light Probe Group is created and baked, the baked data (probe positions, SH coefficients and the tetrahedral tessellation) is stored in the light probes and does not allow artists to modify the color information directly. A Unity tool was needed to create Light Probe Groups quickly and to make it easy to modify the pre-baked data, or to modify it at runtime.

All my code can be found at: https://github.com/tenghaowang/Unity_GIBlender/tree/master/LightProbes


3ds Max FBX Exporter / MAXScript

Problem Solving:

The custom scene exporter allows users to select objects by scene selection sets, by namespace, or by export presets they have created themselves. In 3ds Max, namespaces are useful for differentiating character rigs, but they are not needed once the assets move to the game engine, where a consistent naming convention helps with animation retargeting between rigs. Thus I implemented the features below:

  • Export data categorization and organization based on scene selection sets and asset namespaces.
  • User-defined export presets that record frequently used export paths, file names and selected objects.
  • Automatic removal of the selection's namespaces on export, with restoration afterwards to keep the working file intact.
  • Fast system unit setup.

Based on these features, I researched the techniques I would be using:

  • Structured XML data for storing preset information (stored as a string in a custom attribute).
  • The .NET framework for data management in MAXScript (using a Hashtable to manage preset data and the various namespaces).
  • The callback mechanism for communication between scripts, plus the notification events supported by 3ds Max.

See below for the custom FBX exporter dialog and the custom attributes used for scene data storage:

Problem Statement:

Good working practice often means keeping a working file with all lights, guides, control rigs, etc., but exporting only the data needed via export-selected. Given how frequently the FBX exporter is used in 3ds Max, a custom scene exporter was developed so artists can export the data they need and strip unwanted data easily.