Motion Tracking Problems
-
Hello there
I use Cinema's motion tracking tool for my thesis.
Unfortunately, a few small problems have occurred. I have attached a video so you can get an idea of what I mean.
My problem is that the keyframes at the beginning of the video don't track; after that, they track reasonably well.
What can I do about it, and where and what do I have to change to get a better result? Many thanks for the help!
Here is the WeTransfer-Link with the videos of the "error":
https://we.tl/t-OFVZTfqWx9 -
Hi climate-court,
Two things become instantly apparent: no lens distortion profile is provided, even though the lens seems relatively wide. The other point is that the parallax in the video is relatively small, more like a pan with a tiny "dolly-in".
You left the clouds tracker in; that should always be cleaned up. The rule of thumb is that anything that moves has no positive influence on the camera evaluation. Besides, only close-by tracker features are represented by plenty of pixels; far-away features are not as usable.
I can't put my finger on it, but it feels like there is a little "rolling shutter" in the footage. Perhaps the compression algorithm used for the screen capture, showing a video played back on a screen, creates or even doubles the effect? All in all, using a second camera to document a video problem is not a source I can rely on.
Take the camera you used and shoot a lens grid at the distance of the most-used features in the final footage. If the lens grid is shot too close, "lens breathing" will create false data.
Take some measurements from the set; moving the camera around will not help much.
Set the lens to the focal length (view angle) you used, and measure the field of view. Most lenses state a focal length that is NOT true. Rule of thumb here: if not calibrated manually (AKA cine lenses), the printed focal length might vary greatly: too much to rely on!
Get the exact size of your sensor; if it is given as "1/3 inch", that is not the size; that is an old measurement of the circle (tube) around old optics, and simply wrong. It should be in millimeters.
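To illustrate the math behind this, here is a minimal sketch (Python, pinhole model, distortion ignored; the 36 mm sensor width and 24 mm focal length are example values, not data from any specific camera):

# Relation between focal length, sensor width, and horizontal field of view
# for an ideal pinhole camera; lens distortion is ignored.
import math

def hfov_deg(focal_mm, sensor_width_mm):
    # Horizontal field of view in degrees.
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def effective_focal_mm(hfov_degrees, sensor_width_mm):
    # The inverse: derive the true focal length from a measured field of view.
    return sensor_width_mm / (2 * math.tan(math.radians(hfov_degrees) / 2))

print(hfov_deg(24, 36))               # ~73.7 degrees on a 36 mm wide sensor
print(effective_focal_mm(73.74, 36))  # ~24 mm back

Measure the field of view on set, plug in the true sensor width, and you get the focal length the tracker should actually use.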
If you can re-shoot, include a foot (leader) and tail (leader) with more parallax in the clip. This will help the central part a lot.
Use the Motion Tracker Graph View to see whether you have at least a dozen green lines for each frame. Commonly, people might tell you that fewer are enough, but then those features must be 100% accurate. Without a lens profile, that will not happen, except perhaps with Master Primes or similar optics.
Set manual trackers on the frames that do not track.
You need to evaluate each tracker feature to see if it is a good tracker or perhaps a false "good" one.
Here are a few tips:
https://youtu.be/bG8NxV_TWOQ?si=YvQ9SC2INvEDLIVW
Since you are writing a thesis, I assume you always want more about your sources. Here we go:
Why do I dare to post tracking tips?
I started learning motion tracking around 25 years ago. Sadly, most of the tracking training is gone from the FXPHD site, where I took at least 15 courses about tracking.
Besides that, there is Tim Dobbert's book (2nd edition; I am waiting for a third one); as before, my best tip is to read it if you want to get deeper into the subject. The same goes if you have access to the three Gnomon School DVDs or streaming of Tim's content (I was in the studio in SF, CA, where the content was shot; he is the master of tracking).
When Dr. Steve Baines started to develop the algorithm you use in Cinema 4D, I met him in 2005 in London, where he explained his approach. I hope that doesn't sound like bragging, but I want to give you confidence in my long text. All the best
-
Hello @Dr-Sassi,
Thank you very much for your quick and detailed answer!
How can I save a lens distortion profile?
Unfortunately, I don't understand your point about parallax, because I'm not that deep into the subject.
How do I turn off the tracker on the clouds? Can I also solve this with a mask?
Unfortunately, the footage is final; I can't re-film and improve it...
Sensor size and other data are available; I filmed with a GoPro Hero 10, in linear mode, in 4:3 format. I then applied optical compensation in AE. Is that smart, or should I leave the fisheye in and track with it?
What can I do to prevent the individual track points from “floating”?
The problem was also that I tracked manual points; they sat perfectly, but then "floated" after the solve process.
Thank you very much again!
Best,
Georg -
Hi Georg,
The GoPro Hero 10 applies a lot of in-camera stabilization, which means the footage is not a 1-to-1 representation. (I only have an older GoPro, so my experience is not with a Hero 10.)
The procedure to create a lens profile is described here: https://help.maxon.net/c4d/2024/en-us/Default.htm#html/TOOLLENSDISTORTION.html?Highlight=lens profile.
Use only the data you get directly from the camera, with no adjustments other than color. The mask is an option to exclude moving objects, but it has no option to retrieve spatial information.
Tracker points are created on contrast areas; if one does not match the movement of all the others, delete it.
Parallax is the "perspective" change in the image. The opposite would be locked-off (tripod) footage, which doesn't provide data about the space. Panned (and similar) moves and zoomed footage don't provide parallax either. (Yes, I'm aware that the GoPro has no zoom option.)
Points that are manually tracked should sit on non-changing features. (I mentioned that in my YouTube link.)
Camera Motion Tracking creates a 3D point representation of the stable world in front of it. Once solved, that "cloud" is static.
The idea behind tracker points is triangulation. Lens distortion leads to variation in the speed at which these points move, which breaks the precision of a stable triangle or leads to a useless result. It is possible to track purely with manual points. However, if the footage and lens profile are not pristine, the rule of thumb that 8-12 tracker points are enough will most likely not hold.
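To make the triangulation idea concrete, here is a minimal sketch (Python/NumPy; my illustration, not the solver Cinema 4D uses): two camera positions see the same feature along known ray directions, and the 3D point is the midpoint of the closest approach between the two rays. Note how the denominator collapses toward zero when the rays are nearly parallel, i.e., when there is hardly any parallax, and how any angular error (such as uncorrected lens distortion) shifts the result:

import numpy as np

def triangulate(c1, d1, c2, d2):
    # Midpoint of the closest approach between rays c1 + t*d1 and c2 + s*d2.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b        # near 0 when rays are almost parallel (no parallax)
    t = (b * e - c * d) / denom  # parameter along ray 1
    s = (a * e - b * d) / denom  # parameter along ray 2
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

c1 = np.array([0.0, 0.0, 0.0])        # first camera position
c2 = np.array([0.1, 0.0, 0.0])        # second position, 10 cm baseline
feature = np.array([0.0, 0.0, 5.0])   # feature 5 m in front of the cameras
print(triangulate(c1, feature - c1, c2, feature - c2))  # ~[0, 0, 5]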
By post-processing material that was already processed in-camera, you stack two layers of alteration on top of problematic footage. In other words, the perfect camera/lens for this work has a global shutter (not a rolling shutter), no stabilization, and minimal lens distortion. The GoPro is not the camera for this work. Footage with no motion blur (and no added motion blur) is preferable.
Since you mentioned After Effects, try to find the spatial camera path there, then merge that tracked camera into a Cineware file, which allows you to get the result into Cinema 4D. (I haven't done it in a while, so this is brainstorming.)
I'm sure that you went through this material, but I want to make sure you have the following:
https://www.cineversity.com/vidplaylist/motion_tracking_object_tracking_inside_cinema_4d/
All the best
-
Hello @Dr-Sassi,
sorry for the delay!
The lens profile fixed my problem, thank you so much!!!! BUT: Now I want to either render my footage with the correct distortion OR render my tracked objects with distortion to match the footage. I found out about the technique to "de-distort" the image, but this only works with the Standard renderer, and I am using Octane. When I switch from Standard to Octane, the "Lens Distortion" effect disappears. What can I do to use my lens profile in Octane?
Thanks
Best,
Georg -
Hi Georg,
The workflow is easy to explain but needs some work to apply.
The practical camera image contains distortion, and the idea is typically that this needs to be left untouched.
There are two options to adapt the rendering to the camera image.
The first is based on a setting in the Standard Render called Lens Distortion.
https://help.maxon.net/c4d/2024/en-us/Default.htm#html/VPPHLENSDISTORT.html
The lens profile that was created and used to track the footage is applied here. The result is then typically ready to be composited.
The care that was put into the lens profile will show here. Typically, only barrel and pincushion distortion are mentioned, but some lenses produce a mixture of both; often, the center behaves differently from the edges of the frame.
This is not available for Redshift 3D. The second option relates to Redshift and is based on specific information from the compositing app, called an ST-Map. Think of it as a UV color-based map, which can be created from linear gradients placed (added) on top of each other.
https://help.maxon.net/r3d/cinema/en-us/Default.htm#html/Lens+Distortion.html?
Those maps are an industry standard and are easy to produce in, for example, NUKE (The Foundry).
To create your own, you can use the Standard Render. It needs a strict linear workflow, and every element (color knot) in the two gradients must be set to linear as well. This leads to a source that captures the camera image; in other words, it does not yet address the requested "padding". If that process is applied with the non-padded lens profile, the information will not match.
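To illustrate what such a map encodes, here is a generic sketch (Python/NumPy; an identity map at an example resolution, not a NUKE or Cinema 4D export): the red channel is a linear 0-1 horizontal gradient, the green channel a linear 0-1 vertical gradient, kept as 32-bit float so no gamma curve or quantization corrupts the coordinates:

import numpy as np

def identity_st_map(width, height):
    # Identity (undistorted) ST-Map: red = S (0..1 left to right),
    # green = T (0..1 vertical), blue unused. Must stay linear float.
    u = np.linspace(0.0, 1.0, width, dtype=np.float32)
    v = np.linspace(0.0, 1.0, height, dtype=np.float32)
    st = np.zeros((height, width, 3), dtype=np.float32)
    st[..., 0] = u[np.newaxis, :]  # S gradient, left to right
    st[..., 1] = v[:, np.newaxis]  # T gradient; flip if the app expects bottom-up
    return st

st_map = identity_st_map(1920, 1080)
# Write it out with an EXR-capable library to preserve the float precision.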
Now, it will get a little bit more complex.
Padding the Render Frame
The camera framing and the lens distortion are all in the practical footage. We would like to add padding, so the lens distortion needs to be created for the practical footage plus the padding. This enables us to use the padding with an assumed lens distortion, as we do not know what distortion the padding would have. This padding has to be done frugally, meaning only as much as needed.
This padding is done by increasing the resolution of the rendered image, which by itself would do very little if the camera's field of view is not adjusted. The one sure thing we know about the field of view is that it will be wider. Calculating the new field of view is a math (trigonometry) problem defined by the new result (camera plus padding on the left, right, top, and bottom). In short, if we crop the new, larger image back to the camera resolution, the effective field of view (AKA focal length) will match the camera footage, provided the ST-Map was used in the RS camera at the larger (padded) resolution. I would suggest adding the padding so that the aspect ratio of the footage stays the same.
If the camera has a fixed lens value, it can be done with a short calculation and setup (see the sketch below); if the field of view varies, XPresso is the way to go.
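That short calculation looks like this (a Python sketch; the 73.74-degree angle and the resolutions are example numbers only). Since tan(FOV/2) scales linearly with the frame width, padding the render width by a factor k widens the half-angle tangent by the same factor:

import math

def padded_hfov_deg(orig_hfov_deg, orig_width_px, padded_width_px):
    # Field of view needed so the padded render, cropped back to the
    # original resolution, matches the camera footage exactly.
    half = math.radians(orig_hfov_deg) / 2.0
    k = padded_width_px / orig_width_px  # e.g., 2048/1920 for 64 px per side
    return math.degrees(2.0 * math.atan(k * math.tan(half)))

print(padded_hfov_deg(73.74, 1920, 2048))  # ~77.3 degrees, slightly wider

Cropping the padded render back to the original width then restores the camera's framing exactly.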
The previously rendered UV/gradient map can go into the RS camera. With this padding, the missing parts of the frame are filled.
If you would like to see that integrated into a workflow, please request it via "Share Your Idea". I have done so pretty much every year.
https://www.maxon.net/en/support-center
All the best