DevLog: I Must Confess...
Quick reminder: You can follow development more closely on the Trello board.
TL;DR
- Tried using motion capture (body, hands, and facial expressions) and disliked the results
- Built Pose Libraries to increase scene output
Last Two Weeks' Progress (and my confession)
I want to explain the last two weeks: where I was with progress, what paused the direct scene progress, what I worked on, what I've learned, and where I am heading.
I've been hard at work, not on direct scene progress, but on my pipeline and workflow: the order in which, and the way that, I do things to get stuff accomplished, i.e. scenes. I'll admit that this is growing pains, but if you know me: growing pains are what I live for. Once I figure out a workflow, I'm fast as hell — but sometimes it takes a long time to reach that speed.
While working on Scene 8 (check out the previous DevLog), which has five characters, I started with the environment, the characters, cameras, lighting, props, etc. I've got good workflows for each of those things, and I'm getting faster. But once I started posing the characters within the scene, I felt extremely bogged down... I couldn't figure out what it was. I was going extremely slowly while posing and putting the story together in a coherent visual way. It felt as if the POSING got in the way of progressing in a timely manner, and I felt I needed a speedier way of getting these scenes done. And if you know me, you know that I don't like to listen to advice until I learn it the hard way, on my own terms (it's a bad trait of mine).
After spending far too long on pose after pose, I felt there might be a faster way. I hoped I could figure out a way to save myself time (and everyone else's!) and just... SNAP a pose really quickly. As my ADHD brain does when frustrated, it turned to out-of-the-box solutions that might get the job done. I thought to myself, "What if I could just... stand up, do a pose as the character, and BOOM, the pose would be finished?" Motion capture seemed like a good solution: instead of spending many minutes posing a character, I'd just stand up, do the pose as I wanted it to appear in Blender, and it would be done! Wouldn't that be wonderful?!
My Confession
Unfortunately, no. That was not the case — but again, sometimes I have to hit my head against the brick wall myself to realize that there's a wall. I spent a few days bringing some old code back to life (cgtinker/BlendArMocap) in order to test this idea, and the results are... fine. It can vaguely track a character's pose (remember, the MOTION wouldn't be important), but it didn't have the fidelity I'd been searching for. Sidenote: I also briefly priced out a motion-capture suit, and let me just tell you: that is not an option 🫨 On this little detour into "mocap-land", there's a little place called "facial expression motion capture" that I played around in. It uses iOS ARKit, the same technology behind the iPhone's Memoji and Snapchat-style face filters. It's pretty nice because you can map it straight onto your existing character's face rig, so setup was quick. However, the results are just not up to par, in my opinion. I think it could be really fun as a quick gag while streaming, but for posing characters that need to look and feel as natural as possible, it just wasn't for me. Even though it wasn't a complete waste, I spent more time on motion capture than I wanted to. I'm glad I saw the writing on the wall and ended it when I did — silly people go down rabbit holes, only to never be seen again.
So where does that leave me?!
Okay, I hear you: "enough with the failures from the past two weeks, please tell me something good!" ABSOLUTELY. There's a reason I didn't post a DevLog last week: that week was all disappointment and no progress. I'm excited to tell you that not all hope was lost. The amazing dev (Buff Game) of Demon Seed pulled me out from the depths of my despair and suggested a pose library. "DUH," nuii pointer thought to himself, as he sighed woefully. Like I told you, sometimes I have to hit the wall in order to learn. For the past week or so, I've been building a workflow and a giant pose library full of hand poses, finger poses, body poses, and facial expression poses. I'm sure I alluded to building a pose library in the past; however, I ran into a few bumps along the way that prevented it from going much further. One example is an ongoing Blender bug that caused existing poses to not transfer successfully to other characters (i.e. from Willow to Anya); thankfully, I found a pretty good workaround until it's fixed.
So, to wrap it all up: I'm approaching 100 reusable facial expressions and already have 600+ reusable body poses. And remember, this library will only continue to grow and improve! I'm confident this pose library will get rid of that "bogged down" feeling I've had while spending so much time manually posing so many characters and feeling like I'm wasting everyone's time (trust me, I want to go blazing fast and get these updates out ASAP!).
If I remember, I'll let everyone know how one week of using the pose library has been and how much time it's saved.
🛠️ Current Development
What I'm working on right now:
- Polishing and cataloguing the pose library, so it's easier to find a pose quickly
- Scene 8 (I feel guilty that I'm still on this scene)
⚙️ Technical Corner
I wrote a few Python/Blender scripts during the last two weeks.
Copy Rig Rotation Modes
This script was written to handle the Blender bug I mentioned earlier. Due to Diffeomorphic and how it brings in and creates rigs, some bones end up with different rotation modes — either Euler or Quaternion. The rotation modes themselves aren't the problem; the problem is how Blender's pose system stores the data. Some bones might be stored as Euler while others are Quaternion, because Diffeomorphic (or Rigify, I couldn't determine which is at fault) tries to pick the best rotation mode per bone, and it isn't consistent. For example, Willow's neck bone could be Quaternion while Anya's neck is Euler; that's a problem because the modes don't correspond 1:1, and until this bug is resolved, Blender can't handle the conversion. In this example, if you save a pose from Willow's armature to the pose library and try to load that same pose on Anya, the pose will be applied everywhere BUT the neck (because the rotation modes DO NOT MATCH!). My solution was to write a script that takes one main character rig, Ambrose (the MC), and copies his bones' rotation modes over to all of the other characters. This way the character rigs' rotation modes are guaranteed to match, and I'm able to re-use poses across all characters now! 🥰
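In case anyone wants to try the same trick, here's a minimal sketch of the idea (object names like "Ambrose_rig" are placeholders, not the project's real datablock names): one armature acts as the source of truth, and its pose bones' rotation modes are copied onto every other rig with a matching bone name.

```python
import bpy

# Hypothetical object names for illustration; swap in your own armatures.
SOURCE_RIG = "Ambrose_rig"
TARGET_RIGS = ["Willow_rig", "Anya_rig"]

source = bpy.data.objects[SOURCE_RIG]

for target_name in TARGET_RIGS:
    target = bpy.data.objects[target_name]
    for src_bone in source.pose.bones:
        dst_bone = target.pose.bones.get(src_bone.name)
        if dst_bone is None:
            continue  # this rig doesn't have a matching bone; skip it
        # Matching rotation modes is what lets pose assets transfer cleanly.
        dst_bone.rotation_mode = src_bone.rotation_mode
```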
Copy Rig Rotation Modes (advanced)
This script is the same as the prior one, but it also adjusts the characters' existing keyframes. It came out of a problem where I had already created keyframes with "bad rotation mode" data; this corrects all of those bad keyframes. It saved me many days' worth of work.
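For the keyframe-fixing part, here's a rough sketch, assuming the bad keys are Euler keys that should become Quaternion keys (the reverse direction is analogous). It samples the old rotation at every keyed frame, removes the old F-curves, switches the bone's rotation mode, and re-keys the converted values. Rig and bone names are made up for illustration.

```python
import bpy

def convert_euler_keys_to_quaternion(rig, bone_name):
    """Re-key one pose bone's rotation_euler keyframes as rotation_quaternion."""
    action = rig.animation_data.action if rig.animation_data else None
    if action is None:
        return

    euler_path = f'pose.bones["{bone_name}"].rotation_euler'
    fcurves = [fc for fc in action.fcurves if fc.data_path == euler_path]
    if not fcurves:
        return

    # Every frame that has a rotation key on this bone.
    frames = sorted({kp.co.x for fc in fcurves for kp in fc.keyframe_points})

    pb = rig.pose.bones[bone_name]

    # Sample the animated Euler rotation at each keyed frame and convert it.
    converted = {}
    for frame in frames:
        bpy.context.scene.frame_set(int(frame))
        converted[frame] = pb.rotation_euler.to_quaternion()

    # Drop the old Euler curves, switch the mode, and re-insert quaternion keys.
    for fc in fcurves:
        action.fcurves.remove(fc)
    pb.rotation_mode = 'QUATERNION'
    for frame, quat in converted.items():
        pb.rotation_quaternion = quat
        pb.keyframe_insert('rotation_quaternion', frame=frame)

# Example (hypothetical names):
# convert_euler_keys_to_quaternion(bpy.data.objects["Willow_rig"], "neck")
```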
Generate Face Expressions
I had to write a Python script that takes the selected character's face rig and essentially creates a saved/reusable face expression. These are reusable on ANY of our amazing characters. Which is great, because I can go back through previous scenes and turn ANY of those facial expressions into pose assets for future scenes.
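Roughly, the idea looks like the sketch below. The bone-prefix filter and object names are assumptions for illustration (the real face bones depend on the Diffeomorphic/Rigify rig); the core trick is that a pose asset is just an Action with keyframes that has been marked as an asset.

```python
import bpy

def create_face_expression_asset(rig, expression_name, bone_prefix="face"):
    """Snapshot the current face pose into an Action and mark it as a pose asset.

    `bone_prefix` is an assumption for illustration: here, face bones are
    identified purely by a shared name prefix.
    """
    # Remember whatever action is currently assigned so we can restore it.
    rig.animation_data_create()
    previous_action = rig.animation_data.action

    action = bpy.data.actions.new(name=expression_name)
    rig.animation_data.action = action

    for pb in rig.pose.bones:
        if not pb.name.startswith(bone_prefix):
            continue
        # Key location, rotation (in whichever mode the bone uses), and scale.
        pb.keyframe_insert("location", frame=1)
        if pb.rotation_mode == 'QUATERNION':
            pb.keyframe_insert("rotation_quaternion", frame=1)
        elif pb.rotation_mode == 'AXIS_ANGLE':
            pb.keyframe_insert("rotation_axis_angle", frame=1)
        else:
            pb.keyframe_insert("rotation_euler", frame=1)
        pb.keyframe_insert("scale", frame=1)

    # Marking the Action as an asset makes it show up in the Asset Browser
    # / pose library.
    action.asset_mark()

    rig.animation_data.action = previous_action
    return action

# Example (hypothetical names):
# create_face_expression_asset(bpy.data.objects["Anya_rig"], "Smile_Big")
```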
Mass Import DAZ Poses
All of the characters originated in DAZ Studio but now live in Blender. While DAZ has a LOT of pose packs, it's difficult (time consuming) to import them into Blender by hand. I managed to import maybe 50 poses in about an hour — that's less than one pose per minute! I finally took 20 minutes to nail this script down, and now I'm able to import roughly 100 poses per minute.
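The heart of that script is just a loop over the pose files in a pack; the actual import call depends on which DAZ importer you use, so the sketch below (with a hypothetical directory path) deliberately leaves it as a stub rather than naming a specific operator.

```python
from pathlib import Path

# Hypothetical library path for illustration.
POSE_DIR = Path("D:/DAZ_Library/Poses/SomePosePack")

def import_one_pose(pose_file: Path):
    """Placeholder: call your DAZ/Diffeomorphic pose-import operator here.

    The exact operator (and its arguments) depends on the importer you use,
    so this function is intentionally left as a stub.
    """
    raise NotImplementedError(f"hook up your importer for {pose_file.name}")

def mass_import_poses(pose_dir: Path):
    # DAZ pose presets are .duf files; walk the pack and import each one.
    for pose_file in sorted(pose_dir.rglob("*.duf")):
        import_one_pose(pose_file)
        print(f"Imported pose: {pose_file.stem}")

# mass_import_poses(POSE_DIR)
```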
Needless to say: scripting and automation can save you hours or days' worth of time!
Sneak Peek (TRIGGER WARNING: These are bad! 🤣)
If it wasn't clear by the above text: no, motion capture will not be used — not for still renders, and not for animation.
Needless to say: MoCap testing wasn't successful - but it was fun to experiment with!

Face MoCap - less of a failure 🤷‍♂️

I'll be opting for a pose library instead


💬 Community Corner
I want to express my gratitude to everyone for supporting me, whether that's just a thumbs up or simply reading these DevLogs.
I apologize for skipping last week's DevLog; it's a rare occurrence, but in this case, I felt it might be better to give you the whole story and the bigger picture in one fell swoop instead of two half-assed DevLogs.
See you in the next one! ™️
Don't forget to check out the Trello board for detailed progress updates! ❤️