CINEVERSITY


Global Moderators

Forum-wide moderators

  • RE: c4d thumbnail doesn't show on MacOS on a MBP

    Hi philosophy-chemical,

    Please have a look here:
    https://discussions.apple.com/thread/255762577?sortBy=rank&page=1
    Please add your vote.

    In my personal opinion, this seems like something Apple needs to address. Yes, there could be a workaround, but it might be invalidated once Apple fixes the issue, meaning all previews set up with an app-based solution might then lose their images. This puts every non-Apple developer in a difficult position.

    Yes, having no solution after so many months is annoying; I agree.

    My best wishes.

    posted in Question & Answers
  • RE: Lenticular Film effect

    Hi AlexC.,

    Yes, that IOR. To calibrate the "Lens", I would aim a camera straight at the image from far away, with a 500 or 1000 mm focal length, then dial in the IOR until the image is close to the original (meaning it is not stretched and does not contain parts of the other image).
    (The image below shows too much magnification.)
    Screenshot 2025-07-25 at 10.44.44 AM.jpg
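
    To set that calibration camera up quickly, here is a minimal Script Manager sketch; it assumes the lenticular plane sits at the world origin facing the camera, and the distance and focal length are just the values mentioned above:

```python
import c4d

# Script Manager sketch of the calibration camera: far away, long focal length,
# aimed straight at the lenticular plane. It assumes the plane sits at the world
# origin and faces the camera (an assumption; adjust to your scene).
def main():
    doc = c4d.documents.GetActiveDocument()
    cam = c4d.BaseObject(c4d.Ocamera)
    cam.SetName("Lenticular_Calibration_Cam")
    cam[c4d.CAMERA_FOCUS] = 1000.0            # focal length in mm
    cam.SetAbsPos(c4d.Vector(0, 0, -5000))    # far back, looking down +Z at the origin
    doc.InsertObject(cam)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```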

    Yes, three is possible, with three stripes and a higher IOR. Three stripes means slimmer stripes and more magnification. Here, the feeling of these Lenticular Postcards becomes more apparent, and the imperfections deliver that feeling even more.
    If these artifacts are too prominent, then the lens needs to be smaller, or perhaps shaped differently.
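
    As a back-of-the-envelope check of that relationship (the 2 mm lens pitch is just an assumed number): with N interleaved images, each stripe is pitch/N wide, and the lens has to magnify it by a factor of N to fill the pitch again.

```python
# Assumed lens pitch of 2.0 mm, purely for illustration: with N interleaved
# images each stripe is pitch / N wide, and the lens must magnify it by N.
pitch_mm = 2.0
for n_images in (2, 3, 5):
    stripe_mm = pitch_mm / n_images
    print(f"{n_images} images: stripe {stripe_mm:.2f} mm wide, magnification x{n_images}")
```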

    Here is a snapshot from a postcard I bought in the spring while visiting Germany and the Filmmuseum in Frankfurt. If I'm not mistaken, it has five images in use. This also demonstrates the analog artifacts of this technique very nicely. In my opinion, perfection here would ruin this.
    Filmmuseum_Frankfurt_IMG_0620.jpg

    I assume the aesthetic of this effect should be evident here; if it were just a blend of images, that could be done faster, as mentioned above.

    Example with three: (Images for demo use here only)
    https://projectfiles.maxon.net/Cineversity_Forum_Support/2025_PROJECTS_DRS/20250725_CV4_2025_drs_25_ANll_03.zip

    My best wishes for your project

    posted in Question & Answers
  • RE: Trouble with Connector Tag

    Hi nopopoch,

    Thank you for the file and for the care you put into it.

    Having two objects connected "cap" to "cap" via a Connector keeps them relatively close together, but not like superglue. That is perhaps not what you expect; at least it will not get anywhere near what I read from the text bubbles above them.
    Suppose they were connected 100% like a belt: a stretching object would then influence the connected part, which would have to work in both directions and would require a feedback loop to be enabled. That is my observation so far. Below are a few examples, with or without animation. Please play around with them to see how the various parameters influence the results. Yes, the documentation could use a table with all combinations and interaction results.
    CV4_2025_drs_25_SIct_02.c4d
    CV4_2025_drs_25_SIct_03.c4d
    CV4_2025_drs_25_SIct_01.c4d

    Documentation
    https://help.maxon.net/c4d/2025/en-us/Default.htm#html/TPBDCONSTRAINT.html?TocPath=Simulate%2520Menu%257CThe%2520Simulation%2520System%257CThe%2520various%2520Simulation%2520tags%257CConnector%2520Tag%257C_____0

    It describes, as it does for the Rigid Body, what to expect from the Connector.

    The question would be: why split something that needs to work as one object anyway?

    Here is an alternative: joints made dynamic via an IK tag. Since your file showed what you would like to have but gave no context, this might work, or it might be completely off.
    CV4_2025_drs_25_CAdy_01.c4d

    I'm not sure if I was of any help here; a little bit more context might help me.

    All the best

    posted in Question & Answers
  • RE: Lenticular Film effect

    Hi AlexC.,

    There are surely a lot of ways to "fake" this; it could be just a layer with a mask, where the mask value is driven by an evaluation of the normal or by a translation of the camera.
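
    As a tiny illustration of that "fake" route (plain Python, all values made up): a facing-ratio mask derived from the normal and the view direction decides which of the two images a pixel shows.

```python
# Illustrative only: pick image A or B from a facing ratio (normal vs. view direction).
def facing_ratio(normal, view_dir):
    d = sum(n * v for n, v in zip(normal, view_dir))
    return max(0.0, min(1.0, d))                 # clamp to 0..1

def pick_image(normal, view_dir, threshold=0.5):
    return "image_A" if facing_ratio(normal, view_dir) > threshold else "image_B"

print(pick_image((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # facing the camera -> image_A
print(pick_image((0.9, 0.0, 0.4), (0.0, 0.0, 1.0)))   # turned away       -> image_B
```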

    The main effect is based on lenses that magnify the image below them. That image must be split per lens: the lens magnifies its stripe, so the stripe needs to be correspondingly smaller for the shrinking and the magnification to balance out and reproduce a full image. In your example above, that would be a magnification of x2.
    The magnification can be changed by changing the geometry (given that the material provides refraction), or by leaving the geometry as is and calibrating the effect via the IOR setting in the material (Reflection). Note that the image stripes and the lenses have to follow the point of interest; in the example below I have pushed the camera into the distance, and the setup is pretty much a parallel view. (This might be nit-picking or nerdy, but those fine adjustments might help; however, I went with parallel first.)

    How do you get the stripes without diving too deep into UVs? The MoGraph PolyFX allows you to scale each polygon strip to half (for the x2). That leaves one image with a lot of gaps that need to be filled, so a second copy of the same Plane and PolyFX is moved to fill the gaps. The scale for both, and the move for the second, are driven by a Plain Effector on the PolyFX.
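
    As a cross-check outside of Cinema 4D, the same interleave can also be pre-baked into a single striped texture; here is a minimal sketch assuming Pillow and equally sized input images (the file names are placeholders). With two images it reproduces the scale-to-half-and-fill-the-gaps idea; with more images the stripes simply get slimmer.

```python
from PIL import Image

def interleave(paths, pitch_px=16):
    """Interleave N equally sized images into one striped texture.

    For every lens pitch, each source image contributes a pitch/N wide stripe:
    the full-pitch slice is squeezed to 1/N width (the "scale to half" step for
    two images); the lens magnification restores it later. For simplicity this
    assumes the image width is a multiple of the pitch.
    """
    imgs = [Image.open(p).convert("RGB") for p in paths]
    w, h = imgs[0].size
    n = len(imgs)
    stripe = pitch_px // n
    out = Image.new("RGB", (w, h))
    for x0 in range(0, w, pitch_px):
        for k, img in enumerate(imgs):
            slice_ = img.crop((x0, 0, x0 + pitch_px, h)).resize((stripe, h))
            out.paste(slice_, (x0 + k * stripe, 0))
    return out

# Placeholder file names; any set of same-size images works.
interleave(["infrared.png", "ultraviolet.png"]).save("lenticular_stripes.png")
```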

    Please have a look here; I have used two of my images, an infrared image and one shot purely with ultraviolet light, to get two extreme color palettes and make the difference clear.

    Screenshot 2025-07-24 at 3.12.58 PM.jpg

    The images can be applied at the original size of the Plane, as the UV polygons stay the same and the geometry is only scaled and moved.

    Please have a look here:
    https://projectfiles.maxon.net/Cineversity_Forum_Support/2025_PROJECTS_DRS/20250724_CV4_2025_drs_25_ANll_01.zip
    (Images for demo purposes only, thank you.)
    This effect requires rendering.

    The same technique can be done with more images, meaning slimmer stripes and stronger magnification.

    This technique was also used for 3D images, where the lenses and the image need more attention to "split" the left and right images for the two eyes. The linear distribution must be adjusted for that.

    I think about 15 years ago there was a video screen using this technique; seven (or five?) cameras were needed to provide a naked-eye 3D viewing experience. Same tech.

    Please note that there are larger paintings done with this "effect" but with "rips" (triangle shapes) on the canvas, which allowed for this left-only or right-only viewing.

    Enjoy

    posted in Question & Answers
  • RE: How to recreate a proximal shader effect in Redshift?

    Thank you very much for your reply, atomician.

    My best wishes for your project.

    posted in Question & Answers
  • RE: Procedural Brick Generator

    You're very welcome, ne.____.il,

    Enjoy your project

    posted in Question & Answers
  • RE: Inquiry About Smoother Conveyor Belt Animation Using Cloner Offset

    Hi Gloria,

    This is a new question, which should go in a new thread; otherwise, people looking for a specific answer have to read through perhaps related but not targeted information. Thank you for considering this in the future.

    On my computer, it would not even start and gave me an alert.
    Screenshot 2025-07-24 at 9.58.35 AM.jpg

    A general observation: there is this idea that one can simply place simulations on things and everything is solved, while it all stays fast and stable.
    The number of parts and details in your conveyor belt example is quite large. When the objects that need to be calculated come in the hundreds, it takes a lot of computing power, and with the increased complexity, deviations from the hoped-for result also show up earlier. In general terms, simply placing a sim on things works more often than not. So the artist has to decide when to switch to a dual setup (split into a technical and a visual part).

    For 3D animation, from my earliest hands-on classes on, I have tried to help people overcome the idea that what you see is exactly what has to happen in the scene. That is cryptic when put this briefly. For your project, being a very detailed animation model, I would suggest splitting the animation from the simulation.
    As in the first file, I used a proxy for the editor view to give a reasonable playback speed for the scene. For the rendering, the fully detailed part is then used.
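
    If it helps, here is a small Script Manager sketch of that split, assuming the proxy and the detailed belt exist under the made-up names "Belt_Proxy" and "Belt_Detail": the proxy is shown in the editor only, the detailed object at render time only.

```python
import c4d

# Script Manager sketch: show the proxy belt in the editor only and the fully
# detailed belt at render time only. The object names are placeholders for
# whatever the objects are called in the actual scene.
def set_visibility(obj, editor, render):
    obj.SetEditorMode(c4d.MODE_ON if editor else c4d.MODE_OFF)   # editor visibility dot
    obj.SetRenderMode(c4d.MODE_ON if render else c4d.MODE_OFF)   # render visibility dot

def main():
    doc = c4d.documents.GetActiveDocument()
    proxy = doc.SearchObject("Belt_Proxy")     # assumed name
    detail = doc.SearchObject("Belt_Detail")   # assumed name
    if proxy is None or detail is None:
        return
    set_visibility(proxy, editor=True, render=False)   # fast playback stand-in
    set_visibility(detail, editor=False, render=True)  # heavy object, render only
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```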

    Here I would leave the animation as is and reduce the conveyor belt to a single Plane object, animated in the same way. You could even connect the Offset value from one Spline Wrap to the second: right mouse click (RMC) on the parameter > Expression > Set Driver, then on the second instance RMC > Expression > Set Driven Absolute.

    Next step: the scene scale is important for the simulation to be better optimized. Your objects are small, so under Scene > Simulation I set a small scale.

    While you are in the Scene settings: I left the Draw options on. For the Collision Shape of the "Particles" I used the Box option, which is surely the fastest. (The Draw options can be switched off; they have no function other than to inform you.)

    The Speed in the Particle Emitter plus the extra Gravity only creates problems while moving away from realistic behavior. I set the Speed of the particles to zero and switched the extra Gravity off; the Scene settings already have a Gravity setting.

    For the Collider setup, I increased the friction and also added a Stick modifier. Explore what is needed.

    The particle geometry has a lot of Selection tags, and each one adds data to the scene. Are they all needed?

    In short, the Simplified Conveyor Belt has no need to be rendered; it is just needed as information. In this way, you have a stable animation and a fast simulation. A tiny bit more work, but it will save time.

    Here are two examples:
    CV4_2025_drs_25_SIcb_01.c4d
    CV4_2025_drs_25_SIcb_02.c4d

    All the best

    posted in Question & Answers
  • RE: Procedural Brick Generator

    Hi ne.____.il,

    Thanks for the file and for using Dropbox.

    Here is your file back. Change the orientation and other elements as you like; this is just an example.
    CV4_2025_drs_25_MGbg_02.c4d

    The idea is simple: the first Cloner blends between the two setups. Both are identical, except for the Surface-Clone Seed number.
    The Shader Effector allows you to assign a value to each clone created on the top Cloner, while the Blend option is set in the child Cloner.
    The blend works here via the Modify Clone setting of the Shader Effector, driven by the Noise from its material.
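
    For completeness, a per-clone blend value can also be written with a Python Effector instead of the Shader Effector; this is just a sketch of the idea (the noise scale is an arbitrary value), not what the attached file uses:

```python
import c4d
from c4d.modules import mograph as mo

# Python Effector sketch: write a per-clone blend value (MODATA_CLONE) based on
# a noise sampled at each clone's position, so a Cloner in Blend mode picks a
# different mix of its children per clone.
def main():
    md = mo.GeGetMoData(op)                   # MoGraph data of the Cloner being affected
    if md is None:
        return False
    count = md.GetCount()
    matrices = md.GetArray(c4d.MODATA_MATRIX)
    blend = md.GetArray(c4d.MODATA_CLONE)
    scale = 0.01                              # assumed noise scale; tweak if it clumps
    for i in range(count):
        blend[i] = c4d.utils.noise.Noise(matrices[i].off * scale)  # roughly 0..1
    md.SetArray(c4d.MODATA_CLONE, blend, True)
    return True
```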

    Screenshot 2025-07-23 at 8.03.40 PM.jpg

    In this way, you should get a good variation of the individual shapes. If there is any problem, change the noise scale.

    Please let me know if that works for you.

    Cheers

    posted in Question & Answers
  • RE: How do I repeat a ramp pattern on an object using the texture node?

    Thank you for your feedback, tear-one.

    Enjoy your project

    posted in Question & Answers
  • [Help] Creating Eye-Catching Carousel Presets using Maxon Studio (1/4)

    View Tutorial
    Tag: Red Giant

    Thank you for taking the time to report your issue with this Cineversity tutorial.
    We apologize that you encountered an obstacle with the material. We’ve created this template to help you clarify the issue you are having so we can assist you more quickly.

    What's the specific issue you are having?

    Please include a timestamp (where in the video the problem occurs) if applicable.

    What are the steps to replicate the problem?
    What steps are you taking that seem to cause the problem?
    What version of the software are you using?
    Any and all info is helpful. Thank you.

    If the question is not directly related to the tutorial, please refer to the Q&A forum

    posted in Tutorial Discussions