N-gons are great during manual modeling. But for "procedural" modeling, as with the Boole, they often provide too little information (too few points), so the Boole has to work with whatever points are available. A little cutting work beforehand can therefore prevent a lot of unwanted polygons. The worst case is very long, thin polygons; the best is, of course, a perfectly flat square. Typically, we avoid the long ones and try to get close to the square. The Boole is certainly not my favorite tool, unless I can use a Remesh afterward and it improves the result the Boole provides.
So, to give the Boole a bit more to work with, I took a few seconds to take out the N-Gon.
Toggle the "High Quality" and "Hide New Edges" options in various combinations. It might not be a big improvement, but it is often enough to get something that works instead of a more problematic result. CV4_2024_drs_23_MOng_01.c4d
I used the sides and the back from the Loft, and everything but the sides and back from the Extrude. Then I merged them and applied the Optimize function.
I used the Mesh > Move > Magnet tool to shape the "subframe spline".
The best foundation for getting fast at this is to study rotoscoping. In a nutshell, it separates shapes into forms with the least amount of change and animates them from "keyframes" (the points of main change, PSR).
Besides the Cloner, the PoseMorph Tag might be the way to go. The Point-level Animation (PLA) seems more accessible for the first run-through but becomes increasingly difficult to adjust.
Please avoid follow-up questions that don't match the original topic. Mixing topics makes the forum harder to search and harder to read.
The topic in the image you show could fill books. A suggestion:
The Complete Guide to Photorealism for Visual Effects, Visualization and Games, 1st Edition, by Eran Dinur
Please keep it to one item per discussion. Would you mind opening a new thread? I will then move the new post into your new thread. Thank you!
Open as many threads as you need!
Screen Shot 2023-11-22 at 1.35.53 PM.jpg
Screen Shot 2023-11-22 at 1.47.17 PM.jpg
As a side note:
My initial tip about the Bevel was based on a guess (I was missing the image) that your question was perhaps rooted in the German term "Grat" (English: burr). Before I studied architecture, I attended a metal/electrical trade school, and in our practical assignments we had to remove the Grat (burr) from steel edges ("entgraten").
Since I have gotten questions from all over the world in the past two decades, I often have to go with a guess. But yes, your term was absolutely correct.
The first one uses the shortcomings of splines and animation to its advantage, in that the distance between points is crucial for the speed.
The Force Field has a lot of potential, and adjusting the "force" along a spline is very powerful.
More about Force Field can be found on our team's many shows over on YouTube. https://www.youtube.com/c/MaxonTrainingTeam
Let me know if there is anything else. I have some time off (Holiday here), but, of course, I have an eye on the forum.
Yes, Magic Bullet Looks is a great place to explore, brainstorm looks, and set up one's idea about a specific Look.
Attention to detail is what allows for a filmic look; it uses everything to guide the audience's eye and limit distractions. With that, the viewer experiences less strain and can sink into the visuals. This intensity is often confused with the prominent, easy-to-read part of the image. Hence, just applying a LUT doesn't work.
Hence, we talk in color grading about secondary grading, something neither a parameter preset nor LUT can do.
On top of that comes handling all the clips so they merge into the established visual language and create continuity.
Many people use references, as you mentioned as well. Analyze those and compare your results. Step away for a day and look again. You might be surprised how much more you see after a while.
Johannes Itten's "The Elements of Color" was the book that most shaped my understanding of color as a young art student. Even decades after I bought it, it still echoes in me. Many mainstream film looks are built upon this knowledge. Understanding it unlocks why some looks work throughout a movie, while others just fade from one's perception over time.
You might see instantly that the 2024.1 FBX export produces much smaller file sizes, which comes from the missing data. This clearly shows that the exporter in 2023 should be used for now.
I hope that is an option for you for the time being. In other words, it works provided the scene doesn't use any features that are only available in 2024.
Sorry, I have no workaround other than how you described opening the 2024.1 file in 2023 and exporting from there.
The FBX import from 2023 to 2024.1 works.
Fingers crossed, a working version will soon be available for you.
The mask is an option to exclude moving objects, but it has no option to retrieve spatial information.
Tracker points are created on high-contrast areas; if a point doesn't match the movement of all the others, delete it.
Parallax is the "perspective" change in the image. The opposite would be lock-off (tripod) footage, which doesn't provide data about the space. Likewise, purely panned (and similar) moves and zoomed footage don't provide parallax. (Yes, I'm aware that the GoPro has no zoom option.)
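As a rough sketch of why parallax carries spatial information: for a camera that translates sideways, a point's image shift (disparity) is inversely proportional to its depth, which is exactly what a pan or zoom cannot provide, since those move all points uniformly. The numbers below are purely illustrative, not from any real footage.

```python
# Minimal sketch: why parallax encodes depth (hypothetical numbers).
# For a camera translating sideways by `baseline_m` meters, a point's
# image shift (disparity, in pixels) relates to its depth roughly as:
#     depth = focal_px * baseline_m / disparity_px

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth (meters) from pixel disparity."""
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, camera moved 0.5 m sideways.
near = depth_from_disparity(1000.0, 0.5, 50.0)  # large shift -> close point
far = depth_from_disparity(1000.0, 0.5, 5.0)    # small shift -> distant point
print(near, far)  # 10.0 100.0
```

Different shifts for different points is the parallax the tracker needs; with a pan, every point would shift by the same amount, and no depth falls out of the math.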
Points that are manually tracked should sit on non-changing features. (I mentioned that in my YouTube link.)
Camera Motion Tracking creates a 3D point representation of the stable world in front of it. Once solved, that "cloud" is static.
The idea of tracker points is triangulation. Lens distortion leads to variation in the speed at which these points move. This disables the precision of a stable triangle or leads to a useless result.
It is possible to track purely with manual points. However, if the footage and lens profile are not pristine, the rule of thumb that 8-12 tracker points are enough will most likely not hold.
When you post-process material that was already processed in-camera, you stack two layers of processing on top of problematic footage. In other words, the ideal camera/lens for this work has a global shutter (not a rolling shutter), no stabilization applied, and minimal lens distortion. The GoPro is not the camera for this work. Footage with no motion blur (and no added motion blur) is preferable.
Since you mentioned After Effects, try to solve the spatial camera path there, then bring that tracked camera over via a Cineware file, which allows you to get the result into Cinema 4D. (I haven't done this in a while, so this is brainstorming.)
Side note: When you place a new object in the Object Manager, remember that evaluation goes from the top down within each priority level. Placing objects above others that need to provide them with information might lead to delays.
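The top-down evaluation idea can be sketched generically (this is plain Python for illustration, not the actual Cinema 4D scheduler): a consumer placed above its data source runs before that data is produced in the current pass, which is the classic one-step delay.

```python
# Minimal sketch (generic, not the C4D scheduler): objects in a list are
# evaluated top-down. "A" produces a value; "B" consumes it.

def evaluate(order, state):
    """Run one evaluation pass over the objects in list order."""
    for name in order:
        if name == "A":
            state["A"] = state["frame"]   # A produces this pass's value
        elif name == "B":
            state["B"] = state.get("A")   # B reads whatever A has produced so far

stale = {"frame": 1}
evaluate(["B", "A"], stale)   # consumer above its data source
print(stale["B"])             # None: A had not run yet in this pass

fresh = {"frame": 1}
evaluate(["A", "B"], fresh)   # producer above consumer
print(fresh["B"])             # 1: B sees the current value
```

Keeping providers above consumers (within the same priority level) avoids that stale read, which is the practical takeaway of the side note.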