Latest posts made by Bar3nd
-
RE: Switch texture in Redshift material based on object name
Thanks, I did indeed catch Saul's video, which led me to the string user data. It looks like that would be too complex for my scripting skills at this point. I had thought about your suggestion to use a single flat texture, but the objects are all individually animated clones, so I dismissed it. Thinking about it some more, though, I realize I could make it a single texture with all the symbols in a row and offset the UV mapping per object. So thanks!
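In case it helps anyone later, a minimal Python sketch of the offset idea (untested; it assumes the keys end up as actual objects named "0", "1", ... after their position in the strip, each with its own texture tag, and I'm not certain Redshift honors the tag's Offset U, so treat it as a starting point):

```python
import c4d

SYMBOL_COUNT = 10  # assumed number of symbols laid out in a row in the strip

def offset_uv_by_name(obj):
    """Set the texture tag's Length/Offset U so this object shows only
    its own symbol from the shared strip texture."""
    tag = obj.GetTag(c4d.Ttexture)
    name = obj.GetName()
    if tag is None or not name.isdigit():
        return
    index = int(name)  # object "3" -> fourth symbol in the strip
    tag[c4d.TEXTURETAG_LENGTHX] = 1.0 / SYMBOL_COUNT
    tag[c4d.TEXTURETAG_OFFSETX] = index / SYMBOL_COUNT

# Run from the Script Manager with the key objects selected.
doc = c4d.documents.GetActiveDocument()
for obj in doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_CHILDREN):
    offset_uv_by_name(obj)
c4d.EventAdd()
```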
-
Switch texture in Redshift material based on object name
Hi,
So I have a bunch of identical objects (think typewriter keys). They all need the same material, but each with a different graphic on it. I have all the graphics as separate files.
I could duplicate the material and swap out the textures, but then every adjustment to the material would have to be managed across all the copies.
What would be nicer, of course, is the ability to use the object name to select the corresponding texture. For instance: the object is named "1", so the texture "1.png" is loaded.
I imagine something like this might be possible with 'string' user data or something, but I don't really know how to start.
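Roughly, I picture a script that stamps each object with a string user data field pointing at its texture; a sketch of that half (untested; the field name is a placeholder, and wiring the string into the Redshift texture node is exactly the part I don't know):

```python
import c4d

def add_texture_userdata(obj):
    """Add a string user data field holding the texture filename
    derived from the object's name, e.g. object "1" -> "1.png"."""
    bc = c4d.GetCustomDataTypeDefault(c4d.DTYPE_STRING)
    bc[c4d.DESC_NAME] = "Texture Path"   # placeholder field name
    element = obj.AddUserData(bc)        # DescID of the new field
    obj[element] = obj.GetName() + ".png"

doc = c4d.documents.GetActiveDocument()
for obj in doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_NONE):
    add_texture_userdata(obj)
c4d.EventAdd()
```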
Cheers,
Barend
-
RE: Use simulated mesh to deform another mesh?
A dirty fix is to store the edge selection of the rings of the cylinder and use a Cloner object to populate the edges with cylinders... But it's not very elegant.
-
Use simulated mesh to deform another mesh?
Hi,
I'm using softbody dynamics on a cylinder, but what I want to render in the end (in Redshift) is rings/tubes that follow the squashing and stretching of the cylinder, as if the rings making up the cylinder were put into a Loft object.
I can sort of do it by putting the rings and the cylinder into a Connect object and simulating that (with a transparent material on the cylinder), but the rings interfere with the simulation. Ideally the cross-section of the rings would also remain constant: even as the rings bend and move closer together or further apart with the squashing and stretching of the 'main' cylinder, each ring would keep a constant cross-section.
Hope this makes sense.
I actually tinkered with scene nodes for a moment to see if I could procedurally create lines from the edges of the simulated object, but the simulation doesn't seem to be taken into account.
I guess being able to use the simulated shape as an FFD cage for my rings could be a solution, but I'm not sure how to achieve that.
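For what it's worth, in Python I'd expect the simulated state to be readable through the object's deform cache; a rough sketch of pulling one point ring out as a spline to sweep (untested, assuming the sim result lands in the deform cache; the ring indices are placeholders):

```python
import c4d

def ring_spline_from_sim(sim_obj, ring_indices):
    """Build a closed spline through one point ring of the simulated mesh.
    Sweeping it with a small circle gives a tube whose cross-section
    stays constant no matter how the cylinder squashes and stretches.

    sim_obj      -- the cylinder carrying the softbody tag
    ring_indices -- placeholder: point indices of one ring, in order
    """
    cache = sim_obj.GetDeformCache()   # assumed to hold the simulated state
    if cache is None:
        cache = sim_obj                # fall back to the undeformed mesh
    mg = cache.GetMg()
    pts = cache.GetAllPoints()
    world = [mg * pts[i] for i in ring_indices]

    spline = c4d.SplineObject(len(world), c4d.SPLINETYPE_BSPLINE)
    spline[c4d.SPLINEOBJECT_CLOSED] = True
    spline.SetAllPoints(world)
    spline.Message(c4d.MSG_UPDATE)     # points changed
    return spline
```

The idea would be to run this per frame (e.g. from a Python generator) and feed the result into a Sweep, which keeps the cross-section constant.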
Any suggestions?
-
RE: "Best" rigging approach for working with mocap data?
Hi, thanks,
I understand it won't be plug-and-play and that cleanup etc. will be needed. I'm reasonably comfortable with the animation and keyframing tools, but rigging characters and dealing with mocap is new. And of course my requirements don't really fit 'the usual', so all the quick solutions discussed on YouTube don't apply.
For now, what I did was 'just' copy the animation track from the hips of the mocap to a null and place my character in it. An odd extra step to have to take (I would have expected the hip motion to be driven automatically, given that I'm using the Mixamo Character Object template with Mixamo mocap data...), but now it produces exactly the result I was expecting.
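In case anyone needs the same workaround, the copying can also be scripted; a minimal sketch (untested; the object names are placeholders for whatever your scene uses):

```python
import c4d

def copy_position_tracks(src, dst):
    """Clone the position animation tracks from one object onto another."""
    for track in src.GetCTracks():
        if track.GetDescriptionID()[0].id == c4d.ID_BASEOBJECT_REL_POSITION:
            dst.InsertTrackSorted(track.GetClone())

doc = c4d.documents.GetActiveDocument()
hips = doc.SearchObject("mixamorig:Hips")  # placeholder joint name
null = doc.SearchObject("HipMotion")       # placeholder null name
if hips and null:
    copy_position_tracks(hips, null)
    c4d.EventAdd()
```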
-
RE: "Best" rigging approach for working with mocap data?
Hi Sassi,
The actual mocap sessions are a couple of months away, so I'm using this time to figure out parameters and possible pitfalls. I'm stumbling through a little here, watching lots of tutorials, but most use auto-rigging on Mixamo, which I'm not using.
I found some mocap data on the Rokoko website that has a T-pose on the first frame, which helps. Interestingly, the Rokoko data also has the Mixamo hierarchy. When I target the mocap data to my character it transfers 'fine', except that on my character the hips stay static/locked while the rest of the character moves correctly relative to the hips. The character was rigged using the Character object with the Mixamo Control preset, so I was sort of expecting it to work.
Suggestions on where to look for the solution?
Cheers,
Barend
-
RE: "Best" rigging approach for working with mocap data?
Hi Sassi,
I figured I'd start by learning to rig the skeleton and figuring out how to connect mocap data.
I saw a template in the Character Object for a Mixamo rig, so I gave that a shot, assuming it would help with linking Mixamo mocap to my skeleton. The rigging process was rather painless and already works pretty well. Indeed, vertebrae end up grouped along a couple of 'bones', but that should be fine.
Importing a Mixamo mocap clip (just the animation), I run into the next challenge: my character is rigged in a T-pose, but the animation starts in a different pose. After applying the Mixamo animation to my character it moves, but only relative to its T-pose... I'm sure I'm missing something here. I'm reading a lot about how some mocap systems always put a T-pose on the first frame to tackle this, but I'm assuming there are other ways.
It's been a while since I used motionclips so those tutorials are a great resource for the next phase!
-
"Best" rigging approach for working with mocap data?
Hi,
For a project I'm going to work with a lot of mocap data, for human-proportioned creatures.
This is entirely new to me. I'm looking at some tutorials that use uploading to Mixamo for auto-rigging, but auto-rigging fails on my test model. The creatures will be translucent with a visible bone skeleton, so my starting point was a skeleton, and I suspect Mixamo doesn't like it.
So I'm probably going to manually rig a character in C4D. Here I'd like some advice. There is, for instance, a fully rigged human skeleton in the Content Browser, but it uses fairly complex rigging beyond a 'simple' bones rig, making it less than obvious on a first attempt how to target the mocap data.
So would I be better off building a 'classic' rig with bones and weight painting, or is there a good way to link the C4D Character Object to mocap data (and would that have benefits)?
We'll probably use an Xsens setup for mocap. I've seen some videos that make retargeting the Xsens data to a standard Mixamo rig look quite straightforward, so I'm thinking of using the standard Mixamo bone structure and rigging my character onto that.
Cheers,
Barend
-
RE: Using a redshift shader to control Shader Effector in mograph
Here's an impression of the direction I'm heading in.
-
RE: Using a redshift shader to control Shader Effector in mograph
Hi Sassi,
I completely understand. The thing is that in the end it will be a lot of very complex scenery with fairly complex geometry (think trees, cathedrals). So I'm currently working on a pipeline that allows me to bring in the models/geometry and develop the final look in a fairly procedural way, ideally in a single multipass render. This is why I'm hoping to avoid baking and vertex maps.
But I've figured out how to keep the base geometry invisible while still sending the curvature pass to a custom AOV, which allows me to use the curvature output (it works better for my purpose than the AO pass) to tweak the look in the composite. Not in particle distribution per se, but in luminance, which seems like a workable compromise.