What? As of 23rd January? But that’s over a month ago! Well, I raced to the deadline for my assignment and I’ve been taking a bit of a break since then (possibly a bit too long of a break, but that’s what holidays are for, right?).
The rig still isn’t 100% finished but I’m okay with what I’ve done so far. It’s definitely passable and I learned a lot doing it. I’ll likely revisit it later on in the year to finish it off and get it truly showreel ready.
Anyway, here’s how the rig was when I handed it in:
And here’s the auto breathing system working:
Didn’t get the hair done, I’ll say that up front. My trial version of Shave and a Haircut ran out earlier than I’d have liked, so she’s bald. But that’s okay.
Couldn’t get the cornea shader (RenderMan’s glass material) to work on the school computers, although I’m not sure why. Works fine at home, so… shrug? It’s just the default RenderMan glass shader with a different IOR.
Also I have some minor scaling issues (which I’ve figured out and will take no time to fix – turns out I forgot to scale the group holding parent space locators, so any controllers with dynamic parenting were scaling in place instead of scaling down with the world).
I was a bit afraid of skinning at first – it’s not something I’ve really spent any time on before this, so it was entirely new territory (and since my face rig is almost entirely joint-based, skinning is kinda important).
Turns out my fears were entirely unfounded though. I used the ngSkinTools plugin and let me tell you, it is phenomenal. You can mirror weights in any pose and because it allows you to paint in layers, it’s a completely non-destructive workflow.
Speaking of the face rig though, I’m not totally satisfied with it. It was (again) my first real attempt at a face rig, so it’s pretty good in that context… but it just doesn’t have the flexibility I’m after. I used fewer joints than I should have (again, because I was wary of painting the weights), and I’ve learned a lot from what I’ve done. Still, I think I might do a run of the CGSociety facial rigging workshop.
Also worth noting is this video by James Taylor. He’s apparently gonna do a few more facial rigging videos so keep an eye on that! Unfortunately it came out too late to be useful for this particular rig.
Ultimately I based my joint placement on a combination of some references for facial tracker placement (designed for mocap, but I figure it carries over more or less) and just looking at what moves on my face. For the limited number of joints I added, I think the placement works out fairly well.
I kinda slapped together a breathing system (as shown above), which was fun to do. Bit last minute, so I’m sure it could be better, but it works out fairly well. It’s two blendshapes (stomach and chest), and clavicle movement, slapped through some attributes on the chest ctrl (frequency, strength, and stomach vs chest ratio).
The equation is basically this: sin(2 × freq × time) × 10 × strength. Getting a sine function with nodes was the hard bit. I ended up just plugging time × freq into the rotation of a circle, which has a locator parent constrained to it. Grabbing the translate Y of the locator gives you a sine wave.
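For the curious, here’s the same idea sketched in plain Python (function names are mine, and I’m assuming the “2” and “10” are straight multipliers as written above; the actual rig does all of this with Maya nodes and constraints rather than code):

```python
import math

def breathing_offset(time, freq, strength):
    """The breathing curve as described: sin(2 * freq * time) * 10 * strength."""
    return math.sin(2.0 * freq * time) * 10.0 * strength

def breathing_weights(time, freq, strength, ratio):
    """Split the curve by the stomach-vs-chest ratio attribute.

    Returns (chest, stomach) as complementary shares of the same offset —
    a guess at how the ratio attribute splits the two blendshapes.
    """
    v = breathing_offset(time, freq, strength)
    return v * ratio, v * (1.0 - ratio)

def locator_y_after_rotation(angle_deg):
    """The rotating-circle trick: rotate a point one unit from the pivot
    and read its translate Y. Rotating (1, 0) by the angle gives y = sin(angle),
    which is why a locator constrained to a spinning circle produces a sine wave."""
    a = math.radians(angle_deg)
    x, y = 1.0, 0.0
    return x * math.sin(a) + y * math.cos(a)
```

So driving the circle’s rotation with time × freq and reading the locator’s translate Y gets you a sine output without needing an expression node.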