In this second post in the series, we’ll focus on getting our animation out of Blender and into Codea.
Animating a model in Blender
How to create a Blender animation is beyond the scope of these posts, but as Blender is a free and multi-platform application, there are hundreds of tutorials out there. Here are a few that I found particularly helpful:
- this step-by-step overview of creating a walk cycle shows how quickly you can work in the animation editor if you take advantage of tools like mirror paste. Each step has a screenshot, so it’s pretty hard to go wrong.
- if you prefer watching videos of someone working, and want a much more in-depth breakdown, have a look at Darrin Lile’s videos. There are hours and hours of material there, from sculpting to texturing, rigging, and animating. It’s pretty much a complete course.
There are also thousands of models available online. Blend Swap is one of my favourite places to go for inspiration, although as with any resource, you need to check the licensing requirements if you want to use them in your code. A lot of the models, including the C90 Robot (courtesy of Awesome Studios) and the Low Poly Character (Teh Joran) I’m using in these videos, are released under Creative Commons licences.
These resources are great for learning and inspiration, but I think it is worth taking the time to create your own models: finding models that are good quality and have a low polygon count (“low-poly”) can be a challenge, and creating your own really opens up the possibilities of what you can create.
We are going to be exporting a set of keyframes. The following is a suggested “manifest” for the kind of animations we might expect to see in a basic game character:
- frame 1: neutral standing position
- frames 2-5: a 4-frame walk cycle
- frames 6-9: a 4-frame jump (somersault or whatever) animation
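A manifest like this can be expressed as a simple lookup table. Here is a sketch in Python (the animation names and frame numbers are just the illustrative ones from above):

```python
# A hypothetical keyframe manifest: animation name -> (first, last) keyframe.
MANIFEST = {
    "stand": (1, 1),   # neutral standing pose
    "walk":  (2, 5),   # 4-frame walk cycle
    "jump":  (6, 9),   # 4-frame jump animation
}

def frames_for(action):
    """Return the inclusive list of keyframe numbers for an action."""
    first, last = MANIFEST[action]
    return list(range(first, last + 1))

print(frames_for("walk"))  # → [2, 3, 4, 5]
```

Keeping the frame ranges in one table means the playback code never needs hard-coded frame numbers scattered through it.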
Getting our animation out of Blender
As discussed in the previous post, we are implementing our 3D animation using the simplest (but least flexible) method, keyframe interpolation. This means that we don’t need to worry about getting skeleton data, vertex weights, and so on out of Blender, as all of that information, about how much the mesh will deform at a given point, will be “baked” into our keyframes. We only need geometry and texture data, so we can export with the Wavefront .obj exporter, rather than one of the fancier formats that also export skeleton animation data, such as Collada (.dae), .fbx, or DirectX.
The Blender .obj exporter has next to no support for animation, but it has just enough for our keyframing needs. Here’s a step-by-step guide to exporting .obj for animation.
Preparing the model for export
First of all, you need to make sure that all of the faces on your model are triangles. With static models, this is not such a worry, as the .obj exporter has a “Triangulate Faces” option anyway. With animations, however, we cannot use this option. Why? Because as our model deforms throughout its animation cycle, the exporter triangulates the faces in a different pattern for each frame, meaning that the vertices for each frame do not match up, and interpolation won’t work. Instead, add a triangulate modifier to the mesh. Normally, we wouldn’t have to actually apply the modifier, because the exporter has an “Apply Modifiers” option. Again though (I need to test this a bit more), it seems that this way the triangulation does not necessarily occur the exact same way for each frame. So we do need to click “Apply” on the triangulate modifier before we export.
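If you want to double-check an exported file, a triangulated .obj should have exactly three vertex entries on every face line. A quick sanity check along these lines (my own helper, not part of the importer) could look like:

```python
def is_triangulated(obj_text):
    """Return True if every face ('f') line in a Wavefront .obj string
    has exactly three vertex entries, i.e. the mesh is all triangles."""
    for line in obj_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "f" and len(parts) != 4:  # 'f' + 3 vertices
            return False
    return True

good = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
bad = "v 0 0 0\nv 1 0 0\nv 1 1 0\nv 0 1 0\nf 1 2 3 4\n"  # contains a quad
print(is_triangulated(good), is_triangulated(bad))  # → True False
```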
This is also a good chance to think about model scale. Although we can use scale to draw at any size once we get our model into Codea, it’s helpful if we are in the right area when we export. Hit N to bring up the properties panel, where you can see the model’s dimensions. The .obj exporter is unit-agnostic: whether you choose Blender units, imperial, or metric, the number you see in the object’s dimensions panel will translate to pixels in Codea/OpenGL. You can either scale the whole model to the right size now, or use the scale setting in the export window.
With the faces triangulated and the scale decided upon, we’re ready to export. First, make sure that you select only the geometry that you actually want to be visible in the game, and not any of the inverse kinematics armatures used for posing the model (if you’ve been working in “pose” mode, you’ll need to select an area of the mesh that’s not near a bone to get into “object” mode). Otherwise, the armatures will be exported as part of the model geometry (but without any of their corresponding bone manipulating functions).
With the model’s geometry selected, hit File > Export > Wavefront (.obj). These are my settings:
- “Selection Only”: so that you don’t get vestigial armatures showing up in your model
- “Animation”: this is optional. You can either move the frame counter to each frame you want to export and do a manual export for each one, or you can do one export for the entire animation by checking the “Animation” box. This will create an .obj and an .mtl file for every frame in the sequence (not just every keyframe), so you’re going to end up trashing most of the files you create, but it’s still quicker than exporting frames manually. Hint: you can cut down the number of frames in your animation by scaling the keyframes in the dope sheet. Move the frame counter to the start of your animation (it will act as an anchor for the transform), hit S for scale, and move the mouse until you have a smaller number of frames. You can then set the “Start” and “End” points of the animation loop at the very bottom of the Blender window. I used 28 frames for my walk cycle, 24 of which end up in the trash.
Trash the unwanted frames
- “Triangulate Faces”: as described above, make sure that this is not selected
- “Keep Vertex Order”: you need to select this in order to make sure that the vertices are consistent from frame to frame, e.g. the centre of the character’s left eye remains point number 23,765 across every frame.
After you’ve exported, you can check that the vertex order is the same across all of the frames by loading two of the .obj files into a text editor with a “diff” function, such as TextWrangler (for Mac users), and comparing the two files. You should only see differences in the vertex positions (the lines beginning with “v”). There should be no differences in the composition of the faces (lines beginning with “f”).
What you don’t want to see: a difference in the face composition
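If you have a lot of frames, the same diff check can be scripted. This is my own sketch of the rule just described: only “v” lines may differ between frames, and everything else, in particular the “f” lines, must be identical.

```python
def vertex_order_matches(obj_a, obj_b):
    """Return True if two .obj exports differ only in vertex positions
    ('v' lines) and agree on everything else, especially the face
    composition ('f' lines). A mismatch means interpolation will break."""
    lines_a, lines_b = obj_a.splitlines(), obj_b.splitlines()
    if len(lines_a) != len(lines_b):
        return False
    for a, b in zip(lines_a, lines_b):
        if a.startswith("v "):      # positions may legitimately differ...
            if not b.startswith("v "):
                return False
        elif a != b:                # ...but faces etc. must be identical
            return False
    return True

frame1 = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3"
frame2 = "v 0 0 0.5\nv 1 0 0.2\nv 0 1 0\nf 1 2 3"   # same faces: fine
frame3 = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 3 2"       # reordered face: bad
print(vertex_order_matches(frame1, frame2), vertex_order_matches(frame1, frame3))  # → True False
```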
If you selected the “Animation” checkbox in the .obj exporter, trash the frames you don’t need. You should now have one .obj (object) file for each frame that you are going to import, plus a set of .mtl (material) files. As the materials don’t change from frame to frame, we only need one .mtl file, so you can trash all but the first. We are now almost ready for import.
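The trashing can also be automated. This is a sketch of my own, assuming Blender’s default per-frame naming (e.g. `walk_000008.obj`); the keyframe numbers in `KEEP` are illustrative, not a rule:

```python
import os

# Hypothetical keyframe numbers to keep out of a longer export.
KEEP = {1, 8, 15, 22}

def prune_frames(folder):
    """Delete exported frame files we don't need: keep the .obj files
    for the frames in KEEP, and only the first frame's .mtl file."""
    for name in sorted(os.listdir(folder)):
        stem, ext = os.path.splitext(name)
        if ext not in (".obj", ".mtl"):
            continue
        try:
            frame = int(stem.rsplit("_", 1)[1])
        except (IndexError, ValueError):
            continue  # not a numbered frame file; leave it alone
        if (ext == ".obj" and frame not in KEEP) or \
           (ext == ".mtl" and frame != min(KEEP)):
            os.remove(os.path.join(folder, name))
```

Run it on the export folder and only the keyframes (and one material file) survive.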
Preparing for importing into Codea
We are going to be using a modified version of Ignatz’s .obj importer. If you are not familiar with this, the blog post explaining how to use it, and linking to the source code, is here. My fork of Ignatz’s code is in the link at the top of the next section.
Much of the procedure for preparing the files for import in my fork is the same as in Ignatz’s original:
- First of all, you have to upload the .obj, .mtl, and any image files you are using somewhere they can be accessed and downloaded in their raw state. For the .obj and .mtl files, code-hosting sites such as GitHub work very well. I have a GitHub models repository set up on my computer, so it’s just a matter of dragging and dropping the models I want to import into the local repository and hitting “sync” in the client. For the images, use a blog account, image bucket, or any site that allows you to link directly to the jpg itself.
- If you are using a texture, you need to add the URL of that texture to the .mtl file. Edit the .mtl file in a text editor so that the line beginning with “map” that contains the texture name also contains the URL of the jpg, with this syntax:
map imageName.jpg imageURL
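If you are editing several keyframe sets, a small helper (my own sketch, not part of the importer; the file name and URL below are placeholders) can apply that edit for you:

```python
def add_texture_url(mtl_text, url):
    """Append the texture's download URL to the 'map' line of an .mtl
    string, producing the 'map imageName.jpg imageURL' form described above."""
    out = []
    for line in mtl_text.splitlines():
        if line.strip().startswith("map"):
            line = line.rstrip() + " " + url
        out.append(line)
    return "\n".join(out)

mtl = "newmtl body\nKd 0.8 0.6 0.5\nmap_Kd robot.jpg"
print(add_texture_url(mtl, "https://example.com/robot.jpg"))
```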
There are however quite a few changes I’ve made compared to Ignatz’s version. First of all, when using Ignatz’s importer, we concatenate the .mtl and the .obj file into a single file. This makes a lot of sense for static objects, but as the workflow for exporting .obj animations is a decidedly multi-file process, we are going to keep each of our .mtl and .obj files as separate files. This means that if you’re not using a texture, you don’t need to make any changes to the files at all, and you can just upload them right away. This is also useful when you want to tweak a few of the keyframes without having to re-upload everything.
The animation importer
Because multiple files are being used to create a single mesh, the process of loading them all into Codea is a bit more involved than if you’re just loading a single file. To handle the IO, I have created a Rig class. This waits until the .mtl file has been processed before it begins processing the .obj files. I have also split Ignatz’s parser into an .obj class and a .mtl class to handle the respective file types. The Rig class uses the first .obj file to create a mesh, and then loads the subsequent .obj files into additional attributes in that same mesh, ready for animation.
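The core idea can be sketched in Python (the actual importer is Codea/Lua; class and attribute names here are illustrative): the first frame becomes the base mesh, and each later frame’s vertex positions are stored as an extra per-vertex attribute, ready for the shader to interpolate between.

```python
def parse_positions(obj_text):
    """Pull the vertex positions (the 'v' lines) out of a .obj string."""
    return [tuple(float(x) for x in line.split()[1:4])
            for line in obj_text.splitlines() if line.startswith("v ")]

class Rig:
    """Frame 1 becomes the base mesh; each subsequent frame's positions
    become an additional per-vertex attribute on that same mesh."""
    def __init__(self, frame_texts):
        self.base = parse_positions(frame_texts[0])
        self.extra_attributes = []
        for text in frame_texts[1:]:
            positions = parse_positions(text)
            # If "Keep Vertex Order" worked, every frame has the same count.
            assert len(positions) == len(self.base), "vertex order mismatch"
            self.extra_attributes.append(positions)

frames = ["v 0 0 0\nv 1 0 0\nf 1 2", "v 0 0 1\nv 1 0 1\nf 1 2"]
rig = Rig(frames)
print(len(rig.base), len(rig.extra_attributes))  # → 2 1
```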
One final change I made to the .obj importer: Ignatz’s original importer splits the model up so that each material is on a different mesh. This is because, whereas each Codea mesh supports only one texture (by default, although you can add more textures to a custom shader), Blender models can have a different texture for each material. What I’ve done is put all of the materials in the model onto a single mesh, by setting the colour of each vertex according to the (diffuse) colour of its material. This does mean that at present the importer only supports models with a single image texture. The reason I wanted the entire model on a single mesh was largely convenience and speed: to have fast-running code, you need to cut down the number of draw calls that you make (as far as possible, you should try to have multiple objects on one mesh). Also, when I was looking around for models to animate, I saw a number, such as the C90 Robot I’m using, that have lots of materials (a different one for each colour on the body) but no image textures. Perhaps this is a simplification too far, and in future I should reintroduce the multi-mesh system, but code it so that a new mesh is only started if the texture image changes (not just the material).
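The material-to-vertex-colour step looks roughly like this in a Python sketch (again, the importer itself is Lua; function names are mine): read each material’s diffuse `Kd` colour from the .mtl, then walk the .obj and give every face vertex the colour of whatever material is active at that point.

```python
def diffuse_colours(mtl_text):
    """Read each material's diffuse colour (its 'Kd' line) from an .mtl string."""
    kd, current = {}, None
    for line in mtl_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "newmtl":
            current = parts[1]
        elif parts[0] == "Kd" and current:
            kd[current] = tuple(float(x) for x in parts[1:4])
    return kd

def vertex_colours(obj_text, kd):
    """Assign every face vertex the diffuse colour of its current material,
    so a multi-material model can live on a single mesh."""
    colours, active = [], None
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "usemtl":
            active = parts[1]
        elif parts[0] == "f":
            colours.extend([kd[active]] * (len(parts) - 1))
    return colours

mtl = "newmtl red\nKd 1 0 0\nnewmtl grey\nKd 0.5 0.5 0.5"
obj = "usemtl red\nf 1 2 3\nusemtl grey\nf 2 3 4"
print(vertex_colours(obj, diffuse_colours(mtl)))
```

Each triangle’s three vertices get one flat colour, which is exactly what a material with no image texture looks like on screen.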
As this post is just concerned with file I/O, I haven’t said anything about the code for animation, the shader, and the interpolation spline. That will come in the next post.