Retarget - floor offset
When using the retarget function on two Mixamo skeletons with different heights, the retargeted skeleton has a floor offset. Example scene: https://www.dropbox.com/s/z1q106w7aio659q/retarget%20problem%201.c4d?dl=0
The Motion Source Character of the Character Solver has a Motion Clip applied, as I need to sequence multiple Motion Clips, and I added a T-pose reference at the beginning. I'm not sure if this is causing the issue...
Edit: I've also tried the old Retarget tag. When using this tag, the hip position/rotation of the two characters becomes identical, which also causes floor offsets, as the legs of one character are longer.
Dr. Sassi last edited by Dr. Sassi
Thanks for the file.
What you need is a Pivot object. Each Motion Clip in the Motion System timeline can have a Pivot. You can call one up from the Animation menu. I avoid using the other options to create one.
Select the Motion Clip, and in Attribute Manager > Advanced, you will find a field that takes this Pivot object.
Here is your example, roughly adjusted for demo purposes.
The idle pose needs to be interpreted in terms of what kind of preparation one does: which foot stays planted, and which one moves to shift the center of gravity to start the run?
A more detailed exploration of that subject is in the book:
Performing for Motion Capture: A Guide for Practitioners
About Pivot objects:
All the best
The floor offset happens to the retargeted character, not to the original motion clip sequence.
Dr. Sassi last edited by
The Retarget Character Definition does not match the position of the receiving character; by default, it has only one part set to take the position, which is the Root object. I know some people suggest placing the Motion tag on the first joint; I advise having a parent null on each joint hierarchy and placing it there.
You have not explained why you put extra steps in the setup. The Motion System can work directly on the final character, and all the extra processing with the Character Definition can be avoided.
Furthermore, I would turn all the Mixamo MoCap data into Motion Clips, which can then be added, shared, and reused on a single joint hierarchy. You can even create copies and adjust these clips on the keyframe level if they need to go up or down depending on whether the character wears flat shoes, no shoes, or high heels.
The key is to have a scene as simple as possible, not many rigs in the same file.
MoCap data and its mixing require adjustment and sometimes fixing. Adjusting the P.Y of a character can be done with a Pivot object, or by locking the time of the P.Y keys and then moving them all up a little in the F-Curve. Those are fast adjustments and something I would consider definitive work.
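To make the F-Curve adjustment concrete, here is a minimal sketch of the idea in plain Python, not Cinema 4D API code: the keys keep their frames (time stays locked), and only the P.Y values are raised by a constant offset. The function name and key values are purely illustrative.

```python
# Keys are modeled as (frame, value) pairs; in Cinema 4D you would do the
# equivalent by selecting the P.Y keys in the F-Curve, locking time, and
# dragging the values up.

def lift_position_keys(keys, offset):
    """Return new keys with the same frames but values raised by offset."""
    return [(frame, value + offset) for frame, value in keys]

hip_y_keys = [(0, 92.0), (12, 88.5), (24, 90.2)]   # example hip P.Y keys, in cm
print(lift_position_keys(hip_y_keys, 3.0))
```

This is the same operation whether the character needs to come up for high heels or down for bare feet; only the sign of the offset changes.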
When I do my MoCap sessions, I calculate an hour of data cleanup per minute of capture, while longer sessions, with more slipping, might double that number.
The time needed for Mixamo is nearly negligible in comparison. If you add a parent null, turn it into a Motion Clip, adjust it, and name each one wisely, you will have an excellent library in no time.
All the best
I wish it could be simpler, but it doesn't work. E.g., using Motion Clips from different Mixamo skeleton proportions causes problems. That's why I've retargeted them. But the retargeting itself doesn't seem to work accurately either. Please find a very simple example here: https://www.dropbox.com/scl/fi/fekm9inecm7dszrnnquds/retarget-offset-problem-2.c4d?dl=0&rlkey=dqkgzhjz4qya5wrt2cawi45j3
There is a lot of foot offsetting and sliding, especially when the knees are bending. I am wondering whether I did something wrong or whether this is the expected result of the Retarget tag. From my impression, these glitches cannot be fixed by just moving a pivot vertically. It would basically need additional cleanup by hand, frame by frame...
Dr. Sassi last edited by Dr. Sassi
I agree it is not that simple, as I have mentioned above: "Mocap data and its mixing require adjustment and sometimes fixing."
Book tip, even if quite old: MoCap for Artists (2008) by Midori Kitagawa and Brian Windsor.
(I ignore frames 0-8, as they are a transition from T-pose to capture.)
The problem might not be solvable by just applying the transform. Let me explain: the two motion captures were done with two dancers of different sizes. The input (Character Definition1; I renamed them, hence the "1" added to the name) has a different size for pretty much every joint. This is why rotation is the only transfer option (except for the root, which carries the position animation). So longer or shorter joints will create different results in the overall length of the leg, for example.
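The effect of rotation-only transfer can be shown with a toy two-bone "leg" in a vertical plane: identical joint rotations applied to bones of different lengths put the foot at different heights. This is a hedged illustration only; the function, angles, and bone lengths are invented for the demo and are not Cinema 4D API code.

```python
import math

def foot_position(hip, thigh_len, shin_len, hip_angle, knee_angle):
    """Forward kinematics: hip at (x, y); angles in radians from straight down."""
    knee = (hip[0] + thigh_len * math.sin(hip_angle),
            hip[1] - thigh_len * math.cos(hip_angle))
    total = hip_angle + knee_angle
    return (knee[0] + shin_len * math.sin(total),
            knee[1] - shin_len * math.cos(total))

angles = (math.radians(20), math.radians(-35))      # same rotations for both rigs
short = foot_position((0, 80), 40, 38, *angles)     # shorter source skeleton
tall  = foot_position((0, 80), 46, 44, *angles)     # taller target skeleton
print(short[1], tall[1])  # different foot heights -> floor offset or penetration
```

With the same hip height and the same rotations, the longer leg ends up well below the shorter one, which is exactly the floor offset seen in the retargeted scene.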
Now, one might say: let's do a full transfer to the rig, including the positions, which would also set the length of each joint. So you take a rigged character (Character Definition2) and apply exactly that. What happens is that your character object shrinks, typically leaving you with many crumpled polygons. Certainly not what you want to have, so that's not a solution.
In your case, the source sometimes sticks the toes into the floor, around frame 26, for example, just to point out that MoCap data always needs fixing. That you use the floor as a reference leads me to assume that the character will be lifted later on, as the toe joints need to sit in the middle of the toe. So, for anyone reading along (forum): toe joints are not meant to be at floor level when standing.
The sliding is explained by the character's size, here, the legs, of course. Since the legs rotate, the longer they are, the more distance they naturally cover. Because the position animation stays the same, that will not match, so the animation path needs to be scaled accordingly. Again, not something that works out with one adjustment or a few clicks.
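One way to think about the "scale the path" idea is as a sketch: if the target's legs are longer by some ratio, the root's travel has to grow by roughly that same ratio, or the feet slide. The helper below is purely illustrative plain Python under that assumption; the leg lengths and key values are made up, and this is not C4D API code.

```python
def scale_root_travel(keys, ratio, origin=0.0):
    """Scale (frame, value) position keys about an origin by the leg-length ratio."""
    return [(frame, origin + (value - origin) * ratio) for frame, value in keys]

source_leg = 78.0                             # thigh + shin of source, in cm
target_leg = 90.0                             # thigh + shin of target, in cm
ratio = target_leg / source_leg
root_z = [(0, 0.0), (15, 55.0), (30, 120.0)]  # source root travel keys
print(scale_root_travel(root_z, ratio))
```

Even this simple scaling only approximates the fix, since stride length does not grow perfectly linearly with leg length, which is why some hand cleanup remains.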
It gets even worse: some MoCap data uses two, three, or four joints for the spine. Over the past years, I have asked many people how to get matching results; in summary, I got no "one size fits all" reply. The image is taken from your file. See the difference?
Coming from my first work with "Bones" back in the mid-'90s, with all the options we have today, it is a comparatively simple task, but yes, it still requires some (more or less deep) experience.
Character animation based on MoCap requires fixing in post after the capture. Even $100K+ capture volumes lead to that work. So, to collect data from different sources, models, and methods and expect to mix them without problems is not my expectation. I'm also not sure I would want to give up that control, as MoCap is a recording of expression that needs direction, not a technical conversion.
So, there might be room for improvement, but as with everything we use to express and tell a story, art direction and skill are needed.
However, I understand that you see it differently, which is the seed for pushing it forward. Please voice your wishes here: "Share your ideas!"
Thanks for doing that!
All the best