3D modelling Task 4

3D Modelling: Exporting the Model

Baking

I am now at the stage where my 3D model is ready to export. I am exporting it so that, in theory, it could become part of a bigger game engine. One of the techniques that is part of exporting a model is the ‘baking’ of textures through Cycles. ‘Baking’ is the term used for pre-rendering textures: they are converted into image textures that can be re-applied to your 3D models without the added data of lighting. Using ‘Cycles baking’ can help during game development because it gives you pre-rendered textures instead of having to wait for every image to render, which could be very time consuming, and it saves CPU or GPU workload. It can give the game designer an extra level of detail that would not be possible within a game environment, because it enables us to project all the layers from a high-resolution model onto a low-resolution, low-poly game model.

However, when your object is in a scene and part of a world, any lighting or physics applied to the object means the original texture will not display correctly when you import it back into the software for adjustments; the texture has to be UV unwrapped. This is fine for simple objects but can be problematic when dealing with more complex ones.


Here is one of the objects I used to ‘bake out’. It is a simple cube made in Blender. I applied a coloured cloud texture with displacement so I could bake out a ‘normal’ map, and I UV unwrapped the object before baking it out.

[Figures 1–3: screenshots of the cube and the baking process in Blender; Figure 3 shows the bake settings]

 

Figure 3 shows the settings within Blender that I used to bake out each channel. Each ‘bake’ renders a channel out as a high-quality image. Once all your settings are in place to the standard you want, you click on ‘Bake Mode’ and it gives you a drop-down list of the channels you can bake out.
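
The same channel-by-channel bake can also be driven by script rather than by clicking through the panel. This is only a rough sketch against the Blender 2.7x (Blender Internal) Python API; it assumes the cube is the active, UV-unwrapped object and that a blank image (called ‘baked_cube’ here, a made-up name) is already assigned in the UV/Image editor:

```python
import bpy

scene = bpy.context.scene
scene.render.use_bake_selected_to_active = False  # bake the object's own data

# One entry per channel chosen from the 'Bake Mode' drop-down
for bake_type in ('NORMALS', 'AO', 'SHADOW', 'TEXTURE'):
    scene.render.bake_type = bake_type
    bpy.ops.object.bake_image()  # renders this channel into the assigned image
    # Save each result under the channel's name, e.g. NORMALS_CUBE.png.
    # 'baked_cube' is a hypothetical name for the image in the UV editor.
    bpy.data.images['baked_cube'].save_render('//%s_CUBE.png' % bake_type)
```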

 

Normals

[Baked map: AO_CUBE]

Normals tell the software (in this case Blender) in which direction light should reflect off each face of your object’s surface. The more polygons you have, the more detail you have and the more normals there are. A normal map stores the normals as RGB colours: red and green define the direction (for example left to right, and back and forth) while blue is the depth.
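
To make that encoding concrete, here is a small Python sketch (nothing Blender-specific) that remaps a stored normal-map pixel from its 0–1 RGB range back into a direction vector:

```python
def rgb_to_normal(r, g, b):
    """Remap a normal-map pixel from the stored 0..1 range to a -1..1 vector."""
    return (r * 2.0 - 1.0,  # X: left to right
            g * 2.0 - 1.0,  # Y: back and forth
            b * 2.0 - 1.0)  # Z: depth, pointing out of the surface

# A flat, undisplaced pixel is the familiar pale blue (0.5, 0.5, 1.0):
print(rgb_to_normal(0.5, 0.5, 1.0))  # -> (0.0, 0.0, 1.0)
```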

A.O. (Ambient Occlusion)

[Screenshot: the ambient occlusion bake]

Ambient occlusion shows the software the non-directional shading of an object that results from background ‘ambient’ (bounced) light within a scene. It is based partly on the physical characteristics of the object and how its structure blocks, or ‘occludes’, light, rather than on the influence of directional light. In this case the cube occludes nothing, so you do not see any shading or difference across the bake.
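
If an AO bake comes out blank, it is worth checking that ambient occlusion is actually enabled on the world before baking. A minimal sketch, again assuming the Blender 2.7x (Blender Internal) API:

```python
import bpy

scene = bpy.context.scene

# Ambient occlusion must be enabled on the world for the AO bake to show anything
scene.world.light_settings.use_ambient_occlusion = True
scene.world.light_settings.samples = 16  # more samples give smoother occlusion

scene.render.bake_type = 'AO'
bpy.ops.object.bake_image()
```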

Shadows

 

[Baked map: SHADOWS_CUBE]


Texture

 

[Baked map: TEXT_CUBE]

I added a default texture that is built into Blender. It is a procedural texture, but because I UV unwrapped the object to begin with, it bakes out as an image texture. This texture is a combination of all the bakes I previously exported. I went into edit mode, selected all, and created a new texture map called ‘texture’, just as I had created a new map for each of my bakes. With everything selected I used Smart UV Project to create the UV layout for all the combined maps. You also create a new image for the texture to be saved to as an image file; I labelled mine ‘texture’. I selected Texture in the bake options and clicked Bake, and it saved as the image above.
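
The same steps can be expressed as a script. This is a sketch of the workflow described above against the Blender 2.7x API; the name ‘texture’ comes from the post, but the 1024×1024 image size and the output path are my own choices:

```python
import bpy

obj = bpy.context.active_object

# Edit mode: select all and Smart UV Project, as described above
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

# A new blank image, named 'texture', for the bake to land in
img = bpy.data.images.new('texture', width=1024, height=1024)

# Assign the image to every face in the active UV layer so the bake has a target
for face in obj.data.uv_textures.active.data:
    face.image = img

# Bake the texture channel and save the result as an image file
bpy.context.scene.render.bake_type = 'TEXTURE'
bpy.ops.object.bake_image()
img.filepath_raw = '//texture.png'
img.file_format = 'PNG'
img.save()
```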


Although baking is an option, it is best used when you stick with Blender throughout, because you can re-import your scene and the physics and lighting will remain, compared with taking it into a different software platform. You would still have your texture images, but values such as specular and reflectance would change, so you need to pass this data on when exporting the object. To do this you export these different ‘effects’ as ‘passes’. These passes separate all the data about the object into layers. Below I have made passes and exported them as separate image layers.

 

Render Passes

This is the compositing mode within Blender, where I will be using nodes to create render passes. Render passes are necessary because of the different things the Blender Render engine must calculate to give you the final image. In each ‘pass’ the engine calculates the different interactions between objects and separates the outputs into images, and you use ‘nodes’ to create these outputs. Once you have rendered each pass with its output, you would usually render a final ‘combined’ output which collates all the single passes you have made (see the script sketch after the list below). The different outputs consist of:

 

Shadow – Shadows cast from the object. You must ensure shadows are cast by your lights and received by materials. To use this pass, mix-multiply it with the Diffuse pass.

Z – The Z-depth map records how far away each pixel is from the camera. It is used for depth of field (DOF).

Specular – This shows the specular highlights on the object(s).

Diffuse – This is the diffuse shading of the material on your object.

Reflectance – Reflections off mirrors and other reflective surfaces. Mix-add this pass to Diffuse to use it.

AO (Ambient Occlusion) – You must ensure ambient occlusion is turned on in your world settings, and that ray tracing is enabled, to see any results.

Emittance – This creates an emission pass.

Combined – This renders everything in the image: all the outputs blended into a single output, except those you have chosen to omit from this pass, indicated with the camera button.
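
As a sketch of how these passes can be enabled and written out as separate image layers with nodes (Blender 2.7x API; ‘RenderLayer’ is Blender’s default render layer name, and the ‘//passes/’ output folder is my own choice):

```python
import bpy

scene = bpy.context.scene
rl = scene.render.layers['RenderLayer']  # Blender's default render layer name

# Enable the passes listed above
rl.use_pass_combined = True
rl.use_pass_z = True
rl.use_pass_shadow = True
rl.use_pass_specular = True
rl.use_pass_diffuse = True
rl.use_pass_reflection = True
rl.use_pass_ambient_occlusion = True
rl.use_pass_emit = True

# Build a compositing node setup that writes each pass to its own image
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

layers = tree.nodes.new('CompositorNodeRLayers')
out = tree.nodes.new('CompositorNodeOutputFile')
out.base_path = '//passes/'  # hypothetical output folder

# The combined result goes to the output node's default first slot
tree.links.new(layers.outputs['Image'], out.inputs[0])

# One file slot per remaining pass, named after its socket on the render layer node
for pass_name in ('Z', 'Shadow', 'Spec', 'Diffuse', 'Reflect', 'AO', 'Emit'):
    out.file_slots.new(pass_name)
    tree.links.new(layers.outputs[pass_name], out.inputs[pass_name])
```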

 

 

[Screenshots: the compositor node setup and the exported render passes]


This is an exported and animated version of my air pump model; however, it would be unsuitable for use within a game engine:

[Video: the exported, animated air pump model]

The Division

[https://www.youtube.com/watch?v=_F-hwyu9bcU]

 

Above is a video from Massive Entertainment: a behind-the-scenes look at a recently developed game engine that has improved the quality of game development at the design and modelling stage. The engine supports quick variation of features using what is called a nodal procedure, which is the same process I used when creating my air pump. However, this game engine has been designed in such a way that, instead of having to bake each component out and pass it on to someone who has to collate the bakes and add physics, it can render the models, images and environment in real time, so you can adjust your shading, textures, physics and so on there and then. This allows for a much quicker design process, because you can make as many alterations as you want without waiting for each one to render, and you have all the tools at your disposal to be more creative with what you are trying to make. The components to alter are readily available because of the nodal format; you just have to join them together. The result is a more realistic, filmic experience of games, as well as games being made and distributed more quickly.