Compositing Trapcode Particular into a 3D scene with a moving camera?
-
Is there a way to output a depth buffer of some sort as a Redshift AOV that could then be used as a Z-buffer in Trapcode Particular so its particles can be properly obscured by the 3D objects in the scene?
In the past I've been able to sort of make a Z (Depth) pass AOV work, but problems arise when the camera is moving: as it travels forwards or backwards, the buffer values change, which breaks the matte for Particular.
Imagine you have a row of columns extending along the z-axis into the distance. I animate the Particular emitter to weave between the columns, and the camera flies along the columns following the emitter.
Or to simplify it more, imagine a single column with particles weaving around it. Then the camera dollies back. The values in a basic Z AOV would change as the camera gets farther from the column, messing up the Particular Z-Buffer.
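The dolly problem above comes down to simple arithmetic. Here is a minimal sketch (hypothetical values, not any Redshift or AE API) of why a fixed Z-buffer threshold fails as the camera moves:

```python
def raw_depth(surface_z: float, cam_z: float) -> float:
    """Camera-space depth: distance from camera to a surface along z."""
    return surface_z - cam_z

# A column fixed at world z = 50; the camera dollies back from 0 to -20.
depths = [raw_depth(50.0, cz) for cz in (0.0, -10.0, -20.0)]
# depths == [50.0, 60.0, 70.0]: the same static column reads as three
# different depth values, so any fixed cutoff applied to the Z pass
# (e.g. a Particular Z-buffer threshold) classifies it inconsistently
# from frame to frame.
```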
There's also the World Position AOV, and setting that to extract the Blue value to generate a black and white map seems like it might work, but I don't know how you'd dial that in to work with the Particular Z-buffer.
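For what it's worth, extracting the blue channel from a World Position pass does give a camera-independent ramp. A rough numpy sketch, assuming `wpos` is an H x W x 3 float array decoded from the AOV (the array name and the near/far range are illustrative, not part of any Redshift API):

```python
import numpy as np

# Stand-in for a decoded World Position AOV: H x W x 3 floats,
# where channel 2 (blue) holds world-space Z.
H, W = 4, 4
rng = np.random.default_rng(0)
wpos = rng.uniform(0.0, 100.0, size=(H, W, 3)).astype(np.float32)

near, far = 0.0, 100.0                       # world-space Z range to map
z = wpos[..., 2]                             # blue channel = world Z
matte = np.clip((z - near) / (far - near), 0.0, 1.0)
# `matte` is a stable black-and-white ramp along world Z: it does not
# change when the camera moves, but it only describes the surfaces the
# camera can see -- nothing behind them.
```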
Is this possible to do in AE with Particular, or am I going to have to fire up X-Particles in C4D?
Thanks!
Shawn Marshall
-
Hi Shawn,
The topic is a bit more complex than it looks, given the variety of objects in a scene and a moving camera.
Imagine a few objects that once described a 3D space but are now reduced to 2D image-based information. The 3D particles can sit partly in front of and partly behind those objects. At the same time, objects can be covered by other objects in front of them. And as the camera circles around, what each object hides or reveals changes constantly.
With a single 2D pass, I would simply say it's impossible.
Now add a camera move along these objects, creating parallax and revealing geometry that was obscured at the start or outside the field of view entirely.
A World Position pass knows nothing about objects that are not in the image. After all, a 2D image discards every piece of 3D information that was not visible at the moment it was taken/rendered.
One indicator that this is an impossible task for a single 2D data pass is the very existence of deep-pixel renderings, whose extreme file sizes come from storing large amounts of spatial data per pixel.
If that is not convincing, think of the camera as a light source. In this picture, objects cast shadows, and the shadow is exactly where we cannot see the particles. Moving the light source (the camera) changes the shadows drastically, yet none of that shadowed region is contained in the image that represents the camera's view.
For now, this is my best answer. I can't think of a simple solution for a freely moving camera.
All the best
-
Thank you for replying. I was under the impression that each pixel in a Position pass contained data as to where that pixel existed in the 3D space of the scene. Set up correctly it seems like that could tell Particular whether a specific particle would be visible or obscured. I seem to remember a tutorial from 6-7 years ago demonstrating C4D's voronoi fracture that was enhanced with little dust hits done in Particular using a position pass depth map of some sort and a camera orbiting the scene, but I can't find it. For now I got a decent solution in X-Particles. Cheers!
-
Hi Shawn,
Yes, every pixel of an object visible to the camera carries that point's world position.
When you move the camera, that information stays stable for non-animated objects, which is the appeal of the Position pass.
But what you need to hide the particles is precisely the data that is not shown: the "viewing shadow" behind each object.
As the camera moves, the parts of an object that are visible to it change. You can calculate the distance between the camera and each visible surface point, but since you don't have the particle's position in that same space, you get nothing more than a 2D picture of the camera-object relationship.
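The camera-to-surface distance mentioned above can be sketched in a few lines of numpy. This assumes hypothetical inputs (`wpos` as an H x W x 3 World Position pass and `cam` as the camera's world position for the frame); it is not a real AE or Particular API:

```python
import numpy as np

H, W = 4, 4
rng = np.random.default_rng(1)
wpos = rng.uniform(-50.0, 50.0, size=(H, W, 3))   # stand-in Position pass
cam = np.array([0.0, 0.0, -100.0])                # camera world position

# Per-pixel distance from the camera to the visible surface.
surf_dist = np.linalg.norm(wpos - cam, axis=-1)

def particle_visible(p_world, px, py):
    """A particle is visible if it is closer to the camera than the
    surface rendered at its screen pixel (px, py)."""
    return np.linalg.norm(p_world - cam) < surf_dist[py, px]

# Note: this only answers "in front of or behind the visible surface"
# at one pixel. It says nothing about geometry the camera cannot see,
# which is exactly the "viewing shadow" problem.
```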
Let's say you did have the particle positions; you could then obscure all the particles behind objects, but what about particles that should collide with objects, like the rounded front face of a cylinder? Where do the particles go when they pass out of view behind an object and reappear later? Typically they should bounce, and that requires complete information about every surface, not just the visible parts.
Does that paint a more precise picture of the problem?
Cheers