CINEVERSITY

  • RE: Particle Swoosh Controlling

    Hi table-state,

    Let me summarize here; this is meant as a neutral recap.

    I have no idea which eight cameras you are referring to. With each post, more details become available. I haven't received a storyboard that shows me what you would like, just a few images with lines where I have to guess which line is which.
    If the images are the "same", where does the problem come from? You mentioned video, but the building is (as far as I understand) made of still images, not video.

    I have shared the FFD shape option, based on the initial version, which resembles a movement in Z while turbulence moves the particles up and down. Later, it becomes a 3D movement with up-and-down motion.

    The use of Nodes did not receive any feedback either, so I am trying to understand what you would like to have. I placed the two images on top of each other and calibrated the camera to see the difference, an approach you have excluded, along with the solution to the problem.

    I have said it often before: simulations are great, but art-directing them in detail is not always simple. Here the manual approaches come into play, perhaps even utilizing the results from the last frame, converting the Tracer into splines, and then shaping the spline segments accordingly.
    Then animate it with MoSpline. Now you have 100% control.
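
    Since that baking step can also be scripted, here is a minimal Python sketch (run from the Script Manager) of turning a Tracer's current state into an editable spline; the object name "Tracer" is only an assumption for illustration.

    ```python
    import c4d

    # Bake a Tracer's current state into an editable spline so its segments
    # can be shaped by hand and later driven with MoSpline.
    def bake_tracer_to_spline(doc, tracer):
        result = c4d.utils.SendModelingCommand(
            command=c4d.MCOMMAND_CURRENTSTATETOOBJECT,
            list=[tracer],
            doc=doc)
        if not result:
            return None
        spline = result[0]        # the baked, editable copy
        doc.InsertObject(spline)  # add it to the document
        c4d.EventAdd()
        return spline

    doc = c4d.documents.GetActiveDocument()
    tracer = doc.SearchObject("Tracer")  # hypothetical object name
    if tracer:
        bake_tracer_to_spline(doc, tracer)
    ```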

    How much time to spend art-directing a simulation versus setting it up manually is a balancing question.

    An alternative would be an object (or several) moving along the spline to direct the path of motion.
    By animating those objects in size, or even deforming them, you keep control over them.

    The file and its curve, along with the larger or smaller distances to each particle, align very much with perspective scaling. Hence the idea of finding the camera position, angle, lens, etc., so the fiddling with adjustments stops and everything works properly along perspective lines. However, even though everything indicates that this is the target, the information given suggests it is based on something different.

    As a side note: an object whose information is needed at some point must be positioned above the object that requires it. Priority.
    /summary end. 🙂 All good, moving on.

    What I have now: the start, middle, and end each have all particles coming close together, and between those "close areas" they spread wide again.

    Why not use one or several objects as suggested below?
    CV4_2025_drs_25_ANfl_01.c4d
    Screenshot 2025-06-01 at 2.17.05 PM.jpg

    Enjoy the rest of your Sunday.

    posted in Question & Answers
  • RE: Camera tracking Background and Shadows?

    Yes, Capprim,

    As mentioned, I overshare here, since we are in a forum. You said you are working with tracking, but if I simply assumed you do all these steps, I might miss some, so I prefer to share the details. The lens distortion workflow is something that comes to mind here; without mentioning it, things could go wrong in many ways.

    The "background" has changed and is now inside the Camera.

    Please note that the problem with the background is that the Render Settings must match the footage resolution exactly. There is an option called "Fit", but that doesn't automatically produce a pixel-to-pixel match.
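
    If you prefer to set this by script rather than by hand, here is a minimal sketch (run from the Script Manager); the 3840x2160 at 30 fps values are placeholders for your clip's real numbers.

    ```python
    import c4d

    # Match the render output exactly to the footage (placeholder values).
    doc = c4d.documents.GetActiveDocument()
    rd = doc.GetActiveRenderData()
    rd[c4d.RDATA_XRES] = 3840.0
    rd[c4d.RDATA_YRES] = 2160.0
    rd[c4d.RDATA_FRAMERATE] = 30.0
    rd[c4d.RDATA_FILMASPECT] = 3840.0 / 2160.0  # keep the film aspect consistent
    c4d.EventAdd()
    ```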

    You render a shadow with an object that resembles the floor, and that object needs an RS Render Tag placed on it.

    See, here I assumed that you know about setting up the shadow, and now I am uncertain about it. My mistake.

    All the best

    posted in Question & Answers
  • RE: Camera tracking Background and Shadows?

    Good Morning, and Happy Sunday, Capprim,

    This seems like a simple question, but it is typically the content of a ten-part series. I'll try to keep it short... 😉

    I might share more than you need, as I'm not familiar with your background or that of anyone reading along.

    Please keep in mind that the iPhone has a lot going on between the lens and the file. It is not always a "simple" case with iPhones.

    Shooting with one is not always the best idea. Besides, that tiny lens might have more room for an axial shift than a full-frame lens, which is logical given the size.

    Camera tracking follows a lens distortion workflow. Here the trouble begins when trying to do everything in Cinema 4D. Tracking works with the original footage while using a lens distortion file, which needs to come from the exact iPhone you used, with the lens you had set up. Any external data is useless.
    The footage for the background needs to be rendered with that distortion first.

    Why is that? Because the background is composited in post, not in the rendering, for various reasons: light wrap, for example, or keeping the original footage untouched. Consider what happens when a lens distortion "fix" is applied to the footage: each pixel is moved, most likely by a sub-pixel distance, meaning it gets mixed with its neighbors. This degrades pristine 4K footage toward simple SD footage. Hence, the renderings, which are typically available in any resolution, are adapted to match the original footage, not the other way around.
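
    To make the degradation concrete, here is a tiny NumPy illustration (not Cinema 4D-specific) of what a half-pixel move does to a hard one-pixel edge:

    ```python
    import numpy as np

    # A 1-pixel-wide "edge" shifted by half a pixel with bilinear resampling:
    # every output pixel becomes a blend of two neighbors.
    row = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
    shift = 0.5
    pos = np.arange(len(row)) - shift             # where each output pixel samples
    left = np.floor(pos).astype(int).clip(0, len(row) - 1)
    right = (left + 1).clip(0, len(row) - 1)
    frac = pos - np.floor(pos)                    # sub-pixel fraction
    resampled = row[left] * (1 - frac) + row[right] * frac
    print(resampled)  # [0. 0. 0.5 0.5 0.] -- the sharp pixel is smeared over two
    ```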

    To see the result with the background, the footage needs to be undistorted, just as the tracker used it with the lens distortion grid, and then distorted back again for the background. Again, this destroys quality. Even many photographers I know apply lens distortion fixes as if they came without penalty. Therefore, the suggestion to use undistorted footage for the background is for testing purposes only; any other use is highly questionable.

    The footage for the RS-Camera needs to be an image sequence, not a QuickTime file (such as an MP4 or other container format).

    In the Render Settings, you will find Redshift. In Advanced mode, the AOV (Arbitrary Output Variables) section contains an AOV named Shadow. This gives you the option to define the shadow in compositing. There are numerous questionable ideas about this on the web, such as inverting it and applying it via multiplication; that is not the best way to do it. The idea is that black, meaning zero, marks areas with no shadow, and anything in shadow sits above zero, up to one. This can feed any process, even simple tasks like a Levels or Curves adjustment, where you can comfortably match the density and color of a shadow.

    Typically, a shadow is not black, unless the blacks are crushed. We are in the era of HDR video, which means a huge number of values defining the blacks to achieve a larger dynamic range, not only extremely bright content. For me, the real quality of HDR lies in the darker areas, specifically in color fidelity. Black shadows are super rare.
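
    As a rough illustration of that compositing logic, here is a NumPy sketch; all values are toy numbers, and the blend stands in for what a Levels/Curves pass would do in a compositor:

    ```python
    import numpy as np

    # Shadow AOV convention: 0 = no shadow, up to 1 = full shadow.
    plate = np.ones((2, 2, 3)) * 0.8             # background footage (toy values)
    shadow_aov = np.array([[0.0, 0.25],
                           [0.5, 1.0]])          # per-pixel shadow coverage
    density = 0.7                                # how dark the shadow may get
    shadow_color = np.array([0.05, 0.05, 0.1])   # shadows are rarely pure black

    weight = (shadow_aov * density)[..., None]   # per-pixel blend factor
    comp = plate * (1 - weight) + shadow_color * weight
    print(comp)  # darker, slightly blue-tinted where the shadow falls
    ```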

    If the tracking was done with the Standard renderer, the camera might not match the needs of Redshift. Place an RS camera below the Standard camera and zero out Position and Rotation while keeping Scale at 1. Then check the focal length and the sensor size.
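
    Here is a minimal Python sketch of that parenting step, assuming the tracked Standard camera is selected; a standard camera object stands in for the RS camera here, since the focal length and sensor values to copy are the same:

    ```python
    import c4d

    # Parent a fresh camera under the tracked one with zeroed local PSR,
    # then copy focal length and sensor width from the parent.
    doc = c4d.documents.GetActiveDocument()
    tracked = doc.GetActiveObject()  # the Standard camera from the tracker
    if tracked is not None:
        child = c4d.BaseObject(c4d.Ocamera)
        child.SetName("RS Camera (match)")
        child.InsertUnder(tracked)
        child.SetRelPos(c4d.Vector(0))    # zero local position
        child.SetRelRot(c4d.Vector(0))    # zero local rotation
        child.SetRelScale(c4d.Vector(1))  # scale stays 1
        child[c4d.CAMERA_FOCUS] = tracked[c4d.CAMERA_FOCUS]                    # focal length
        child[c4d.CAMERAOBJECT_APERTURE] = tracked[c4d.CAMERAOBJECT_APERTURE]  # sensor size
        c4d.EventAdd()
    ```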

    All steps were done, as usual, with the current version (2025.2), as I do not answer from memory; things change, and outdated information from the past can cost you time, if not more.

    In any case, for comfortable compositing of all the elements, explore Red Giant Super Comp.

    Enjoy

    posted in Question & Answers
  • RE: Particle Swoosh Controlling

    Hi table-state,

    No need to apologize. Your files allowed me to spot the problem.

    I can see that the images were shot from different positions and that the particles move in Space. However, you have only one camera in the scene.

    Since each image of the building requires a specific camera to represent its original point of view/point of interest, as well as its focal length, using just one camera will not get you there.

    Once you have two cameras defined, one per image, the particle results will match. Of course, if the particles are just anywhere in Space, it will not work; they need to sit between the cameras and the virtual image plane.

    Take both images you would like to use, uncropped, and calibrate each one. Calibrating the lens (the focal length, at least) and creating a lens distortion profile might not be the deciding factor in this case.

    Since I don't have the full sources here, I can't tell exactly what the lens is; given a 36 mm sensor, it would be roughly a 38 mm or a 42 mm lens. From one image I got a distortion (tilt), so I assume I don't have the original images to work with. Either way, the two do not match a single position.
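
    For reference, the horizontal field of view those focal lengths imply on a 36 mm-wide sensor can be checked quickly:

    ```python
    import math

    # Horizontal FOV for a focal length on a 36 mm-wide sensor.
    def hfov_deg(focal_mm, sensor_mm=36.0):
        return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

    for f in (38.0, 42.0):
        print(f"{f} mm -> {hfov_deg(f):.1f} deg horizontal FOV")
    ```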

    From my point of view, setting up particles somewhere in Space while using only one camera for two slightly different image perspectives will never match; even if you get it close by massaging the particles, there will be a feeling of mismatch, and I assume that is the point of your question, in essence.

    https://help.maxon.net/c4d/2025/en-us/Default.htm#html/TCAMERAMAPPING.html

    All the best

    posted in Question & Answers
  • RE: Particle Swoosh Controlling

    P.S.: how about this?

    Using the main Spline as a Field and pulling particles toward it if they move too far out?

    The key parameters are the Radius of the Spline Field and the value on the Multiply node.
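
    To show the idea behind the setup (not the node graph itself), here is a NumPy sketch: each particle is pulled toward its closest sampled point on the guide spline, with the pull ramping up toward the Radius and a strength factor playing the role of the Multiply node.

    ```python
    import numpy as np

    # Pull each particle toward the closest sampled point on the guide spline.
    def pull_to_spline(particles, spline_pts, radius=50.0, strength=0.2):
        pulled = particles.copy()
        for i, p in enumerate(particles):
            d = np.linalg.norm(spline_pts - p, axis=1)
            j = d.argmin()                     # closest sampled spline point
            falloff = min(1.0, d[j] / radius)  # no pull on the spline, full pull beyond Radius
            pulled[i] = p + (spline_pts[j] - p) * strength * falloff
        return pulled

    # Toy usage: a straight guide spline and two stray particles.
    spline_pts = np.stack([np.linspace(0, 100, 50), np.zeros(50), np.zeros(50)], axis=1)
    particles = np.array([[25.0, 80.0, 0.0], [60.0, 5.0, 0.0]])
    print(pull_to_spline(particles, spline_pts))
    ```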

    CV4_2025_drs_25_PAnm_01.c4d

    Screenshot 2025-05-31 at 11.20.32 AM.jpg

    posted in Question & Answers
  • RE: Rope Belt Lag

    Hi Sam,

    Thanks for the file and for using Dropbox.

    What I did was set the Iterations to 2. No particular reason other than a gut feeling.

    The next step was just to cache the tent itself. Nothing else. As the ropes react to the tent, that sets them, AFAIK, into a frame-behind mode, but caching the tent provides the needed information directly. (Yes, the Object Manager was organized correctly. You set the priority to -400 instead of the 400 it has by default. This means that anything after -400 is calculated based on it, providing the simulation with data from the frame before. Objects need to be generated before they can provide data.) Here is the catch: the chain is simulation > object generation > reacting to that object > generating the rope.
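
    If you ever want to set that priority back by script, here is a sketch, assuming the tag exposes the usual Expression priority (as most simulation/expression tags do):

    ```python
    import c4d

    # Read the tag's PriorityData, set the value back to the default 400
    # (instead of -400), and write it back.
    def reset_priority(tag, value=400):
        pd = tag[c4d.EXPRESSION_PRIORITY]  # a c4d.PriorityData container
        if pd is not None:
            pd.SetPriorityValue(c4d.PRIORITYVALUE_PRIORITY, value)
            tag[c4d.EXPRESSION_PRIORITY] = pd
            c4d.EventAdd()
    ```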

    From my point of view, that requires the initial sim object to be cached, preferably the last simulation as well.
    If that hadn't worked, I would have taken the cached first object and converted it into an Alembic to use the time offset option. But it worked without that.

    Then I went to the Ropes and cached those.

    Let me know if that works for you. Otherwise, I can provide you with a link to the solution I have here; it is a bit large due to the cache.

    Enjoy your weekend.

    posted in Question & Answers
  • RE: Topic Category

    Hi Danielle,

    Since things change over time, and forum access is not my turf at all, I have asked IT. Fingers crossed I get a fast reply.

    Perhaps leave your character question here; maybe I have a link or something. Sorry, this forum is often used as a backdoor, and I do not want to encourage that. It always puts me between two chairs.

    Cheers

    posted in Site Issues
  • RE: Generate Spline from Boole?

    Thank you, entry-newspaper,

    I look forward to your findings.

    Cheers

    posted in Question & Answers
  • RE: Generate Spline from Boole?

    Hi entry-newspaper,

    Thanks for the feedback.

    The problem with those setups is that they would require a way to define what belongs to one spline, what belongs to another, and when to join them. Take the two legs in your setup: they create two round shapes, which then merge into one for the main body area.
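
    For the manual "when to join" step, here is a minimal Python sketch using the Join modeling command on clones of two editable splines:

    ```python
    import c4d

    # Join clones of two splines in a temporary document (the usual
    # SendModelingCommand pattern), then insert the result into the scene.
    def join_splines(doc, spline_a, spline_b):
        tmp = c4d.documents.BaseDocument()
        clones = [spline_a.GetClone(), spline_b.GetClone()]
        for c in clones:
            tmp.InsertObject(c)
        result = c4d.utils.SendModelingCommand(
            command=c4d.MCOMMAND_JOIN,
            list=clones,
            doc=tmp)
        if not result:
            return None
        joined = result[0]
        doc.InsertObject(joined)
        c4d.EventAdd()
        return joined
    ```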

    I hope that helps, and perhaps we can look into more detailed needs separately.

    Enjoy your weekend

    posted in Question & Answers
  • RE: Particle Swoosh Controlling

    Hi table-state,

    I need a storyboard to better understand your target, with indicators of where and how the images move and how your ideas about size apply. After your second post, I am still not clear on what you really want.

    I can't help but feel that you can visualize the whole setup in your mind, but the text doesn't define it for me.

    Here is a wild guess: how about an FFD for the Tracer?
    CV4_2025_drs_25_MGwv_11.c4d
    Screenshot 2025-05-30 at 11.15.01 AM.jpg
    Have a great weekend.

    Cheers

    posted in Question & Answers