Posts made by Bar3nd
-
RE: Switch texture in Redshift material based on object name
Thanks, I did indeed catch Saul's video, which led me to the string user data. It looks like that would be too complex for my scripting skills at this point. I had thought about your suggestion to do a single flat texture, but the objects are all individually animated clones, so I dismissed it. Thinking about it some more, though, I realized I could make it a single texture with all the symbols in a row and offset the UV mapping per object. So thanks!
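The per-object offset part looks simple enough to script even for me. A rough, untested sketch of what I mean: each key object is named after its symbol index ("0", "1", ...), and the script writes a float user data entry called "uv_offset" onto it, which a Scalar User Data node in the shared Redshift material could then use to shift the texture along U. The object names, the entry name and the symbol count are all assumptions for illustration.

```python
import c4d
from c4d import gui

# Sketch (untested): write a per-object "uv_offset" user data value based on
# the object's name, so a Scalar User Data node in the shared Redshift
# material can shift the symbol texture along U.
SYMBOL_COUNT = 10  # assumption: 10 symbols laid out in a single row in the texture

def main():
    objs = doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_CHILDREN)
    if not objs:
        gui.MessageDialog("Select the key objects first.")
        return

    doc.StartUndo()
    for obj in objs:
        try:
            index = int(obj.GetName())  # assumes objects are named "0", "1", "2", ...
        except ValueError:
            continue  # skip anything not named after a symbol index

        # Add a float user data entry called "uv_offset" (re-running the script
        # will add duplicates; good enough for a one-off setup).
        bc = c4d.GetCustomDataTypeDefault(c4d.DTYPE_REAL)
        bc[c4d.DESC_NAME] = "uv_offset"
        element = obj.AddUserData(bc)

        doc.AddUndo(c4d.UNDOTYPE_CHANGE, obj)
        obj[element] = index / float(SYMBOL_COUNT)

    doc.EndUndo()
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```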
-
Switch texture in Redshift material based on object name
Hi,
So I have a bunch of identical objects (think typewriter keys). They all need the same material - but with a different graphic on them. I have all the graphics as separate files.
I could duplicate the materials and swap out the textures but then each adjustment to the material would have to be managed across all the copies.
What would be nicer of course is the ability to use the object name to select the corresponding texture. For instance: object name is "1", so the texture "1.png" is loaded.
I imagine something like this might be possible with 'string' user data or something, but I don't really know how to start.
Cheers,
Barend
-
RE: Use simulated mesh to deform another mesh?
A dirty fix is to store the edge selection of the rings of the cylinder and use a cloner object to populate the edges with cylinders... but it's not very elegant.
-
Use simulated mesh to deform another mesh?
Hi,
I'm using soft body dynamics on a cylinder, but what I want to render in the end (in Redshift) is rings/tubes that follow the squashing and stretching of the cylinder, as if the rings making up the cylinder were put into a loft object.
I can sort of do it by putting the rings and the cylinder into a connect object and simulating that (with a transparent material on the cylinder), but the rings interfere with the simulation. Ideally the cross-section of the rings would also remain constant: even though the rings bend and move closer together or further apart with the squashing and stretching of the 'main cylinder', each ring would keep a constant cross-section.
Hope this makes sense.
I actually tinkered with scene nodes for a moment to see if I could procedurally create lines from the edges of the simulated object, but the simulation doesn't seem to be taken into account.
I guess being able to use the simulated shape as an FFD cage for my rings could be a solution, but I'm not sure how to achieve that (a rough script idea is sketched below).
Any suggestions?
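One direction I might try, as a rough, untested sketch with placeholder object names: keep a non-simulated duplicate of the cylinder (identical topology) as the cage for the rings, and let a Python tag copy the simulated cylinder's deformed points onto that duplicate every frame.

```python
import c4d

# Python tag sketch (untested): copy the simulated cylinder's deformed points
# onto a static duplicate with identical topology, so that duplicate can be
# used as a deformer cage for the rings.
def main():
    sim = doc.SearchObject("Cylinder_Sim")    # placeholder: the simulated cylinder
    cage = doc.SearchObject("Cylinder_Cage")  # placeholder: the editable duplicate used as cage
    if sim is None or cage is None:
        return

    deformed = sim.GetDeformCache()           # deformed state of the simulated object
    if deformed is None:
        return

    points = deformed.GetAllPoints()
    if len(points) != cage.GetPointCount():   # topology must match
        return

    cage.SetAllPoints(points)
    cage.Message(c4d.MSG_UPDATE)
```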
-
RE: "Best" rigging approach for working with mocap data?
Hi, thanks,
I understand it won't be plug-and-play and that cleanup etc. will be needed. I'm reasonably comfortable with the animation and keyframing tools, but rigging characters and dealing with mocap is new. And of course my requirements don't really fit 'the usual', so the quick solutions discussed on YouTube don't apply.
For now what I did was 'just' copy the animation tracks from the hips of the mocap to a null and place my character under it (roughly what the snippet below automates). An odd extra step to have to take (I would have expected the motion of the hips to be driven automatically, given that I'm using the Mixamo Character Object template with Mixamo mocap data...), but it now produces exactly the result I was expecting.
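A rough, untested sketch of that copy step; the object names are just placeholders for the mocap hip joint and the null my character sits under:

```python
import c4d

# Sketch (untested): clone the animation tracks from the mocap hip joint onto
# a null, so the null carries the root motion of the character.
def main():
    hip = doc.SearchObject("mixamorig:Hips")   # assumption: Mixamo-style hip joint name
    target = doc.SearchObject("Root_Motion")   # assumption: the null the character is parented to
    if hip is None or target is None:
        return

    for track in hip.GetCTracks():
        target.InsertTrackSorted(track.GetClone())

    c4d.EventAdd()

if __name__ == '__main__':
    main()
```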
-
RE: "Best" rigging approach for working with mocap data?
Hi Sassi,
The actual mocap sessions are a couple of months away, so I'm using this time to figure out parameters and possible pitfalls. I'm stumbling through a little here, watching lots of tutorials, but most use auto-rigging on Mixamo, which I'm not using.
I found some mocap data on the Rokoko website that has a T-pose on the first frame, which helps. Interestingly, the Rokoko data also has the Mixamo hierarchy. When I target the mocap data to my character it transfers 'fine', except that on my character the hips stay static/locked, while the rest of the character moves correctly relative to the hips. The character was rigged using the Character object with the Mixamo Control preset, so I was sort of expecting it to work.
Suggestions on where to look for the solution?
Cheers,
Barend
-
RE: "Best" rigging approach for working with mocap data?
Hi Sassi,
I figured I'd start by learning to rig the skeleton and figuring out how to connect mocap data.
I saw a template in the Character Object for a Mixamo rig, so I gave that a shot, assuming it would help with linking Mixamo mocap to my skeleton. The rigging process was rather painless and works pretty well already. Indeed the vertebrae end up grouped along a couple of 'bones', but that should be fine.
Importing a Mixamo mocap clip (just the animation) I run into the next challenge: my character is rigged in T-pose but the animation starts in a different pose. After applying the Mixamo animation to my character it moves, but only relative to its T-pose... I'm sure I'm missing something here. I'm reading a lot about some mocap systems always putting a T-pose on the first frame to tackle this, but I'm assuming there are other ways.
It's been a while since I used motionclips so those tutorials are a great resource for the next phase!
-
"Best" rigging approach for working with mocap data?
Hi,
For a project I'm going to work with a lot of mocap data, for human-proportioned creatures.
This is entirely new to me. I'm looking at some tutorials that upload to Mixamo for auto-rigging, but auto-rigging fails on my test model. The creatures will be translucent with a visible bone skeleton, so my starting point was a skeleton, and I suspect Mixamo doesn't like that.
So I'm probably going to manually rig a character in C4D, and here I'd like some advice. There is, for instance, a fully rigged human skeleton in the Content Browser, but it uses fairly complex rigging beyond a 'simple' bones rig, making it less than obvious on a first attempt how to target the mocap data.
So would I be better off building a 'classic' rig with bones and weight painting, or is there a good way to link the C4D Character Object to mocap data (and would that have benefits)?
We'll probably use an Xsens setup for mocap. I've seen some videos that make retargeting Xsens data to a standard Mixamo rig look quite straightforward, so I'm thinking of using the standard Mixamo bone structure and rigging my character onto that.
Cheers,
Barend
-
RE: Using a redshift shader to control Shader Effector in mograph
Here's an impression of the direction I'm heading in.
-
RE: Using a redshift shader to control Shader Effector in mograph
Hi Sassi,
I completely understand. The thing is that in the end it will be a lot of very complex scenery with fairly complex geometry (think trees, cathedrals). So I'm currently working on developing a pipeline that allows me to bring in the models/geometry and develop the final look in a fairly procedural way, ideally in a single multipass render. This is why I'm hoping to avoid baking and vertex maps.
But I've figured out how to keep the base geometry invisible while still sending the curvature pass to a custom AOV - which allows me to use the curvature output (which works better for my purpose than the AO pass) to tweak the look in the composite. Not in particle distribution per se, but in luminance. Which seems like a workable compromise.
-
RE: Using a redshift shader to control Shader Effector in mograph
Thanks, I suspected as much. Some of the geometry is, uhm... elaborate, so I'm not sure baking the curvature will be feasible for parts of the project. I might try a different route by overlaying the curvature pass in compositing...
-
Using a redshift shader to control Shader Effector in mograph
Hi all,
I'm using a Matrix object to generate particles. The particles are spread across the surface.
I'd like to generate more particles where there's more detail.
I figured I'd use a Shader Effector to try to restrict the density of particles to specific parts of the geometry. I can't seem to do that, but I can control particle color and scale.
This works fine with a checkerboard shader or a gradient in the color channel on a classic C4D material as a test. But it won't work with something like an AO shader.
What I'm really looking for would ideally work with the curvature shader as found in Redshift materials (which in effect highlights sharp details).
It seems that the Shader Effector doesn't work with Redshift materials at all.
So I'm interested in other solutions to either control the density of the particles in the Matrix object based on the amount of detail (so flat surfaces don't get a lot of particles but corners and details do), or at least control their color/transparency/scale.
Any thoughts?
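One fallback I've been considering (though I'd prefer to keep it procedural) is baking a rough 'detail' measure into a vertex map, which could then be dragged into the effector's Fields list or read via a Vertex Map shader. A rough, untested sketch of how that bake could work on a selected editable polygon object; the gain value is arbitrary:

```python
import c4d
from c4d.utils import Neighbor

# Sketch (untested): write a rough per-point "detail" value into a vertex map.
# The measure is simply how much the face normals around a point diverge from
# their average; sharp corners score high, flat areas score low.
GAIN = 5.0  # arbitrary: scales the raw divergence into the 0..1 range

def main():
    obj = doc.GetActiveObject()
    if obj is None or not obj.IsInstanceOf(c4d.Opolygon):
        return

    points = obj.GetAllPoints()
    polys = obj.GetAllPolygons()

    # one normal per polygon (the a-b-c triangle is enough for this purpose)
    face_normals = []
    for p in polys:
        n = (points[p.b] - points[p.a]).Cross(points[p.c] - points[p.a])
        n.Normalize()
        face_normals.append(n)

    nb = Neighbor()
    nb.Init(obj)

    weights = [0.0] * obj.GetPointCount()
    for i in range(obj.GetPointCount()):
        faces = nb.GetPointPolys(i)
        if len(faces) < 2:
            continue
        avg = c4d.Vector()
        for f in faces:
            avg += face_normals[f]
        avg.Normalize()
        divergence = sum(1.0 - avg.Dot(face_normals[f]) for f in faces) / len(faces)
        weights[i] = min(1.0, divergence * GAIN)

    # store the result in a new vertex map tag on the object
    tag = c4d.VariableTag(c4d.Tvertexmap, obj.GetPointCount())
    tag.SetName("detail")
    obj.InsertTag(tag)
    tag.SetAllHighlevelData(weights)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```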
-
RE: Simulating a LIDAR pointcloud look in C4D
Ah - parenting a PLAIN effector (with scale set to -1) with a spherical field to the camera does the trick!
-
RE: Simulating a LIDAR pointcloud look in C4D
Hi Sassi, thanks!
I was indeed thinking along similar lines - although I'm complicating things a little further by using lights to reveal certain areas (basically generating lots of dots, but only seeing the ones that are lit).
I'm getting pretty close to the basic look I'm going for. However, the particles have real-world dimensions, which means that foreground particles show up as spheres/circles rather than dots.
Is it possible to scale the particles relative to their distance to the camera? Maybe using Xpresso, or a Python effector along the lines of the sketch below?
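Something like this is what I have in mind (untested, a Python Effector in "Full Control" mode; the camera name and falloff distance are placeholders):

```python
import c4d
from c4d.utils import MatrixScale
from c4d.modules import mograph as mo

# Python Effector sketch (untested, "Full Control" mode): shrink each clone the
# closer it is to the camera, so foreground particles keep reading as dots.
CAMERA_NAME = "Camera"     # placeholder: name of the render camera
FULL_SIZE_DIST = 1000.0    # placeholder: distance at which clones reach full size

def main():
    md = mo.GeGetMoData(op)
    if md is None:
        return False

    cam = doc.SearchObject(CAMERA_NAME)
    if cam is None:
        return True
    cam_pos = cam.GetMg().off

    cnt = md.GetCount()
    marr = md.GetArray(c4d.MODATA_MATRIX)   # clone matrices, local to the Matrix object
    gen_mg = gen.GetMg()                    # 'gen' is the generator (the Matrix object)

    for i in range(cnt):
        world_pos = gen_mg * marr[i].off
        dist = (world_pos - cam_pos).GetLength()
        s = min(1.0, dist / FULL_SIZE_DIST)  # 0 at the camera, full size beyond FULL_SIZE_DIST
        marr[i] = marr[i] * MatrixScale(c4d.Vector(s))

    md.SetArray(c4d.MODATA_MATRIX, marr, True)
    return True
```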
-
Simulating a LIDAR pointcloud look in C4D
Hi,
In short: I'm looking to create something that looks like LIDAR scanned pointclouds, from C4D geometry scenes.
I have in the past dabbled with the LAZpoint plugin, but it was cumbersome and no longer seems to be supported for recent versions of C4D.
There are some things that I'm trying to achieve here:
-
I'm really looking for the 'occlusion effect' that comes from the 'shadows' of the LIDAR scanner: the incompleteness of the scan caused by foreground objects shadowing the background.
-
I'd like to be able to bake some colour or ambient occlusion from the model into the particles.
-
Render in Redshift due to other elements used in the scene.
For now I'm tinkering with a TP matrix object to generate particles.
So I have a simple scene, and populate the geometry with particles using the matrix object. I generate simple point instances.
Using the Redshift Object tag I make the source geometry invisible to the camera, but leave cast shadows on so that when I put lights in the scene (to simulate the LIDAR scanner) only the particles that catch light are visible.
Once I enabled backlighting translucency in the Redshift material for the particles, I get a good start on what I'm trying to achieve. The downside is that I'm generating huge amounts of particles that remain invisible because they're shadowed. But the upside of this is that I can quickly move some lights around to explore how I want to light the scenes without having to regenerate particles.
However, I'm looking for some advice:
-
I'd really like to have the particles inherit some of the color of the source objects. I figured I could use the Shader Effector for this like so:
https://www.youtube.com/watch?v=1n4qnlkUMX8
But it looks like this doesn't work with an AO effect in the color channel of the material. And it seems to work fine with Matrix objects but not with particles.
-
Ideally I'd render the particles as blips without size, but if I get really close to an object now, the dots become circles.
-
Or is there an entirely different solution to this that I should explore?
Thanks in advance.
I'm on C4D R2023 with everything up-to-date.
-