Thanks for the reply, entry-newspaper,
Let me know how it goes.
Enjoy your weekend
Hi entry-newspaper,
The first step is (if possible) to go back to the file's creator. If the file is based on an ACIS engine or similar, it only needs different parameters, and I'm sure the creator would like to see the result presented in the best way.
Other than that, check which formats they can deliver and adjust the quality on your own. The typical format is IGES:
https://help.maxon.net/c4d/2025/en-us/Default.htm#html/FIGESIMPORT.html
See Tessellation.
Other CAD files are often based on other requirements, like material volume (CAAD), and do not have a mathematical base. Often, remodeling is the key to getting something workable. (Remesh or Volume comes to mind). What to do in those cases is a longer story.
In the worst case, you have the file and no way to get in touch with the producer. Here, I would just cut the profile you shared, select the new edges, then use Edge To Spline, and orient that profile accordingly to use it in a Sweep.
The Arc can be done with a Spline Primitive. Here is my quick example.
CV4_2025_drs_25_MOcf_01.c4d
You can adjust the quality quickly with the Arc Spline and the Point interpolation.
Then, use the Current State To Object function.
Only two "bevels" were needed; they can be added via a selection and a Bevel Deformer.
All the best
Hi Balakaya612,
So, the error is based on using two Output Objects.
My suggestion would be to write out OpenVDB files and apply materials as you need them. This is a wild guess, because:
I struggle to see how that would work in a believable visual way (not to mention the physics), as the different scatter results must come from differences in the sources. How those sources and their obviously intended different results (e.g., Scatter) would not interfere with each other in such a dynamic scenario is beyond me. Even with separate setups, that is not a given; perhaps introducing Particles into the mix might help. My theoretical (call it a wild guess) conclusion is that it will not work with just two independent sources.
Perhaps the best way to explain it is with the dynamics and complexity of weather prediction, which is also based on air movements and is so complex that we struggle to make long-term predictions.
Now, with two sources, the turbulence and the complexity are limited but interactive. One does not simply influence the other once; that influence feeds back in an endless loop. How much of that can be recognized is unclear to me, as it is a question of perception and the level of detailed observation, but it is surely not a single bilateral effect.
Perhaps you have some reference photos that show how that works so I can explore it.
Cheers
Hi entry-newspaper,
Thanks for the scene file.
You can use a Simulation Scene to "isolate" the setup
https://help.maxon.net/c4d/2025/en-us/Default.htm#html/OPBDSCENE.html#PLUGIN_CMD_1057221
or
Drag all the Forces that should not affect the Rigid Body into the Force tab > Mode: Exclude. (Note that only one mode can be used: Include or Exclude.)
https://help.maxon.net/c4d/2025/en-us/Default.htm#html/TRIGIDBODY-RIGIDBODY_PBD_FORCES_GROUP.html#RIGIDBODY_PBD_FORCES_INEXMODE
I tested both in 2025.1.3, as things might change, and both work.
All the best
Hi Dutchbird,
The idea of clones is to handle them in general, hence the missing default for local changes, or more precisely, for changing just one individual clone.
Here is an option to move them around: for each single clone, a selection and a Plain Effector.
With this, you can move a single clone or even rotate it.
CV4_2025_drs_25_MGso_01.c4d
The next one uses Weight to move clones around: the more Weight is painted on a clone, the further it moves in one direction. Here, it is done in the positive direction; the negative direction would require doubling the setup.
CV4_2025_drs_25_MGso_11.c4d
Another method is to use a Tracer to connect all clones (Objects), then use Current State To Object to get a Spline and use that as the Object in a Cloner. I have no idea how your setup looks and whether the slight rotation might be a problem. However, a single point/vertex of the Spline/Object can be moved.
CV4_2025_drs_25_MGso_21.c4d
The typical way I do it (mostly) is to replace all objects with a Polygon object, then make the Cloner (preferably a copy) editable, and place all polygons under a Connect Object. Then use that in a new Cloner set to Polygon Center. With this, you have the most control, while perhaps taking advantage of the Cloner's instancing options.
CV4_2025_drs_25_MGso_31.c4d
My best wishes for your project
Thank you very much, David, for your feedback.
I'm glad that works for you.
My best wishes
You're very welcome, Arnaud!
Thanks for the feedback.
My best wishes for your project.
Hi Arnaud,
Thanks for the kind feedback.
Please have a look here:
CV4_2025_drs_25_ANco_11.c4d
Switch the Volume Objects on after you have played the project once from frame zero to the end.
The Nulls are either a rig or the two information sources for the Tracer (one main, one rail).
To keep the current position of the Tracer result, I rotate the setup in one direction and then all of it in the opposite. I hope that makes sense.
This is a solution with no math; you just create a volume and subtract it from the cylinder.
Enjoy
Hi walk-hour,
Please note that I write in a forum, and whatever I write is not an evaluation of your work but more an attempt to share as much as needed to get even a beginner started.
Preamble:
What is it that you are trying to achieve? Yes, you say photorealism, but is it Realism, Hyperrealism, or Photorealism? Photorealism is quite a wide field, and I believe you will get many different answers to that question.
Lenses are often the most crucial part of photorealistic quality. Since the days of film, the digital way of defining this has changed, and lenses with clear artifacts are commonly seen as better able to deliver photorealistic ideals, as they leave a specific "feeling" in the image, mostly the opposite of being clinical.
Lenses are typically a compromise, except for a few, often quite expensive, ones; the great exception is Sigma's Art 40mm F/1.4, which leaves nearly no signs of any influence other than being nearly perfect. I mention that because the term has changed over time. However, there are certain ideas about what it is; it depends on who you ask. It is similar to Filmic, which is an endless discussion for many.
What is it?
So, what is the idea when the term photorealistic is used? (Part of my art education at the University of the Arts in Berlin (MFA) was photography: three years of photography, then three years of cinematography.) Even after decades of shooting, I would not dare to pin that term down to a small definition.
Long story short, photorealism is not a simple idea; the more one trains one's perception, the less likely one is to be satisfied with anything.
However – there are several resources to answer that:
The Complete Guide to Photorealism for Visual Effects, Visualization and Games, 1st Edition, by Eran Dinur.
He fills a whole book with the idea of photorealism, and I enjoyed reading it. Is that all there is to it? No.
Or a tutorial/presentation series:
https://cineversity.maxon.net/en/tutorials/rendering-interiors-and-exteriors-1-4-create-with-maxon
Your image:
Your image leans more towards the hyperrealistic, as there is no sign of any optics or filtration, and the visualization is very generative. (Nothing wrong with that; to encourage you there, explore Ed Ruscha, a celebrated artist in L.A. and beyond.)
The 400-pixel-high image doesn't allow for much investigation, especially with the compression artifacts.
How was the edge treatment to catch some light? Hard to tell. There is no sign of color grading. I did not find a light wrap.
Typical compositing elements are often missing in a single render result; even if compositing is not the theme here, the qualities needed to match an image are, as adapting a rendering to an image is the same as working towards a photorealistic result.
https://cineversity.maxon.net/en/series/integration1?tutorial=1_integration_introduction_01
I typically avoid the term photorealistic, as you might imagine by now, as it is blurry at best. Leave the lens cap on the lens and get a photorealistic black image.
Reference:
What better reference could you get than from the masters of architectural photography? Rely on books here; the web is not the best idea.
The key is your own references, the ones you shot yourself, not anything from the web. Then you know what you get, considering you are savvy with a camera.
If you want third-party references, books about architecture are a good study. Typically, the VIPs in that genre attract the best photographers.
As for the parts you need: look at why your example looks hyperreal and not the way you wanted. Some things rendering engines do not deliver, which is why compositing is often the only way to work.
Here is an older series of mine that explains ten main qualities to merge things into reality:
https://cineversity.maxon.net/en/series/integration1?tutorial=1_integration_introduction_01
Subjective treatments. Again, I have no idea what you have in mind regarding photorealism. This is a quick (a few minutes) Photoshop treatment of your 400-pixel high image from above. Compare it.
I hope the content suggestions allow you to find your aesthetic and create a signature look with your work over time.
All the best
Hi Arnaud,
The formula in the video would find its way into a Formula Effector and is then roughly this:
sin(((id / count) + 0) * f * 360.0)+(id/count)
This means a sine curve with a small per-clone offset added on top (the +(id/count) part).
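To illustrate what that expression produces, here is a minimal plain-Python sketch of just the math, outside Cinema 4D. It assumes id is the clone index, count is the total number of clones, f is a frequency-style multiplier, and that the sine is evaluated in degrees, as the *360.0 in the Effector's default formula suggests:

import math

def formula_value(clone_id, count, f=1.0):
    t = clone_id / count                                  # id/count runs 0..1 across the clones
    wave = math.sin(math.radians((t + 0) * f * 360.0))    # one sine cycle over all clones (degrees assumed)
    return wave + t                                       # +(id/count): the small per-clone offset

for clone_id in range(10):
    print(clone_id, round(formula_value(clone_id, 10), 3))

Printed for ten clones, the values show the sine wave riding on a slow ramp from 0 towards 1.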
Based on that, I created the Loft-Cloner setup and harvested from it the Spline that drives the "knife" or "tool".
Here is an example:
CV4_2025_drs_25_ANco_01.c4d
My best wishes for your project
Hi David,
Please look into Takes and/or Layers for that kind of management.
Takes
https://help.maxon.net/c4d/2025/en-us/Default.htm#html/54507.html#PLUGIN_CMD_431000087
Layers
https://help.maxon.net/c4d/2025/en-us/Default.htm#html/11074.html#DIALOG_LAYERMANAGER
The simplest way is often to group those and switch only the "Parent" to the state the group is supposed to have, as you wrote. Why is it not working? Do you have an example?
All the best
Thank you very much, Balakay612,
Fingers crossed, there is a simple answer to your observation.
With the Area Light (set very bright), I didn't see that problem, but it was a testing scenario. I would also encourage you to explore which sampling settings change the outcome.
All the best
Hi entry-newspaper,
Thanks for the reply; please let me know if I got your target with this setup and if that works well enough to get what you want.
Cheers
Thanks for the feedback, Dutchbird!
I hope changing all the objects doesn't take too long.
I think you are aware of it, but since I write in a forum: once the Selection and Weld tools have been used, the Space bar is the fastest way to switch between the two.
This has helped me get things done when I clean up complex models.
All the best
Hi profit-sign,
As mentioned many times, technical problems that need a developer to change the code are not the focus of this forum. For that, we have tech support.
I get that this is an annoying problem for you. Going by what I see, it is not for many people so far.
The few tickets issued about this are in the system and have not been ignored.
Repeating it here will not make more happen. This forum is for asking questions about the functions or for looking for creative solutions.
If you would like to put any pressure on that, I advise going the official way and opening a ticket:
https://www.maxon.net/en/support-center
Again, I understand that this is a feature your workflow relies on. But please use the channel designed for reporting technical problems, as they can't be solved here.
My best wishes
Hi teach-control,
Is this perhaps working faster and better?
https://www.youtube.com/watch?v=HS2S5SleiHE
I have done it each time I get this question and whenever a new version becomes available; so tonight, with 2025.1.3. I have used this workflow, I guess, 100+ times by now, at least.
It works fine as long as you follow the exact steps. A little deviation and it might fail! Watch the Quick Tip, write each step down, and compare those notes; I can't tell how often these steps were not followed and frustration resulted.
What sometimes gets in the way is when a Mixamo FBX rig uses Takes: go to Takes, select the Mixamo Take, and right-click "open Take in new Project".
I assume that is the biggest problem over the years.
The automatic part also works only if the Mixamo rig is named correctly (the "mixamorig:" naming); this is sometimes not the case with older files.
These are the two items that, over the years, have most often solved this problem.
Please let me know how it went.
All the best
Hi Balakay612,
Thank you for the file.
I get an error with your scene. I have tested it intensively.
Would you mind checking with tech-support?
https://www.maxon.net/en/support-center
Thank you!
I have filed a report.
To my knowledge, you can have several Pyro Outputs, and the different Scatter values change only the reaction to light. Set the Color back to white and the material > Scatter to different colors to see what does what while adjusting each. I did this with an Area Light (visible behind the Pyro).
Sorry that I can't say more, as Redshift shuts off here on an M1 Max-based computer.
All the best
Hi entry-newspaper.
To be clear, I take every word of your opinion seriously regarding this theme. I do not see it as a fantasy but as a practical vision.
Yes, it bugs the hell out of me to write replies saying there is no solution, to my knowledge. Of course, there is one, but whether it counts as a solution may be a question of quality and how much one can enjoy a good mesh flow.
How that might work in the future: perhaps convert the object into a high-density mesh, move the texture to an RGB vertex color, then throw it into a Volume workflow with a Remesh as output, transfer the vertex map to the new mesh, bake, remesh to a lower density, and UV map via VAMP. All of that in a capsule, with the Moves by Maxon result dragged underneath in one step. That will not fly today, but it might as long as people keep writing "Share Your Ideas" to
https://www.maxon.net/en/support-center.
Is that what a client could do with a push of a button? Sure!
I hope we will get this one day, and perhaps it will be written in faster code than my Frankenstein Capsule from above.
Hi entry-newspaper,
Thank you very much for the file!
You want no tile pattern, going by your input, but the setup is prone to show one. Given that pattern recognition is relative and subjective, and every member of an audience perceives it differently, it is not advisable to risk this when patterns are not wanted. The typical advice for tileable textures is that one should never see a tile more than 1.5 times at once; the setup has 100 tiles. How many are shown is not stated, but the idea of the larger-scale randomness indicates that it is more than 1.5.
So, it is a no-go if tiles are not the intention.
My suggestion is to keep it simple. Please have a look here:
CV4_2025_drs_25_MGlg_01.c4d
Two Plain Effectors are set up in two different ways for randomness. Switch one on while the other is off to see the difference.
Use both together, adjusted as intended, for a "harmonic distribution" (the 90º setting is just for the demo). The Cloner here moves -45º, also just for the demo, so the Cloner moves in one direction and the Plain Effector in the other, like the Min-Max values of the Random Effector; see the small numeric sketch below.
The key is in the scale of the Maxon noise in the Shader Field. The Noise allows for the widest variety when it comes to such effects.
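As a tiny numeric illustration of that Min-Max idea (plain Python, using only the demo numbers above; the 0..90º effector range and the uniform random stand-in for the noise are assumptions for the sketch):

import random

base = -45.0                               # pre-move on the Cloner (demo value)
for _ in range(5):
    effector = random.uniform(0.0, 90.0)   # stand-in for the 0..90º added by the Plain Effector
    print(round(base + effector, 1))       # results stay within -45..+45 around the original position

The combined result is centred on the original position and spreads to both sides, instead of drifting only one way.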
Enjoy.
P.S.: I have not found that the RS Object Tags are always faster than Render or Multi Instances. Chances are high they are, but whether the clones are random or nonrandom doesn't make a difference here. Always test for yourself: general ideas allow only average results over time.