Trouble tracking footage to place a 3D object in a scene.
-
Like the title says, I have some footage shot on a C500 II that I'm trying to use as the base for adding a 3D model. I've hit a roadblock: I can get a decent track, but I still end up with some jitters.
Is there a way to eliminate certain keyframes within the track?
The second piece of this is that I would like to render via Redshift. Will the solved camera work for this, or is there a way to apply the tracked data to an RS Camera? I have not been successful in emulating the DOF with the solved camera, which is a big piece of helping this blend with the original footage.
Any help is appreciated!
-
Hi Zack E.,
Without the data, it is hard to tell what is happening.
The Canon C500 Mark II has only a tiny amount of rolling shutter, which could otherwise cause jitter, so I'll rule that out here.
Was there zoom involved? If so, that is not a simple task. A good track needs enough parallax; even if the shot itself does not provide it, a few seconds before and after typically help, if the camera moves more there than during the shot.
You need a "Lens Distortion Workflow" so that you track undistorted footage. Distortion makes features move at different speeds across the image.
Trackers need to be cleaned up if problems show. There are many cases where the tracking app calls something a feature that is not one.
https://www.youtube.com/watch?v=bG8NxV_TWOQ
The closer a tracking feature is to the camera, the more precise it is; far-away features occupy fewer pixels and should be avoided.
The focal length and the sensor size (the part that was actually used, not the full size in most cases) are crucial. Sometimes lenses are marked for a crop sensor size and state an altogether wrong focal length. If possible, verify that with the camera's settings. Please note that stills lenses breathe while focusing, and the lens grid might then provide incorrect data.
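To illustrate why the sensor entry matters so much: the tracker has to derive the field of view from focal length and sensor width, so a wrong sensor size shifts everything. A minimal Python sketch (the numbers are only examples):

```python
import math

def horizontal_fov(focal_mm, sensor_width_mm):
    """Horizontal field of view (degrees) from focal length and sensor width."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# A 50 mm lens on a 38.1 mm wide sensor vs. the default 36 mm full-frame entry:
print(round(horizontal_fov(50.0, 38.1), 1))  # ~41.7 degrees
print(round(horizontal_fov(50.0, 36.0), 1))  # ~39.6 degrees
```

Two degrees of difference does not sound like much, but it is enough to make a solve drift.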
The problem with Redshift and the camera tracking in Cinema 4D is that they use two incompatible lens distortion workflows. If possible, undistort the footage and work with that material. Yes, normally the camera footage is the one thing that should not be touched.
DOF is easy; the bokeh is the problem. The terms are often used interchangeably, but DOF is the ideal point in focus and the distance over which things gradually fall out of focus. Bokeh is the quality of the blurriness, which depends on the lens (and filtration), and how close a match is possible varies. The swirly bokeh of some lenses can't be matched. Others, even apodization, can be simulated with an image in the camera's Optical settings.
If you have barrel distortion, you might need to do some math to get some padding around the render, so that it still covers the full frame once it is distorted to match the original footage.
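As a rough sketch of that math, assuming you can estimate how many percent the corners get pulled inward by the barrel distortion (the exact value would come from your lens profile):

```python
import math

def padded_resolution(width, height, max_distortion_pct):
    """Pad a render so it still fills the frame after barrel distortion
    pulls the corners inward. max_distortion_pct is an estimated value."""
    scale = 1.0 + max_distortion_pct / 100.0
    # round up to even numbers, which most codecs and renderers prefer
    pad_w = math.ceil(width * scale / 2) * 2
    pad_h = math.ceil(height * scale / 2) * 2
    return pad_w, pad_h

# e.g. a UHD render with an estimated 4% of barrel pull at the corners:
print(padded_resolution(3840, 2160, 4.0))  # (3994, 2248)
```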
If you can share a clip that shows the most jitter while still having enough parallax for a track, please do so, via Dropbox, Google, Apple, Adobe, or WeTransfer. (I don't open URLs I don't know as download options, including shortened URLs.) Thanks a lot.
Please share all the data you can as well.
Cheers
-
Thanks for the response. A lot of this sounds familiar, coming from a camera background, but I have a few follow-ups:
-
I think I'm still a bit lost on the "Lens Distortion Workflow" differences between the Camera Tracker and Redshift. Let's say I am able to get a clean C4D track (or at least tidy up a jittery track)... is there a way to take those keyframes and apply them to a Redshift Camera? The ability to match lens filtration (we were using a Pro-Mist to get a little bloom), as well as really dialing in the focus offset and DOF (mostly to get some falloff on the back of the 3D object), is the reason I'd like to finish my render that way.
-
At this point I've managed to loosely recreate the shot with simple keyframes on a Redshift Camera. The only reason I have this luxury is that it's a mo-co dolly out, so really it's just two keyframes along the z-axis. It is not EXACT, and I've had to do a little bumping when compositing the 3D render in AE, but it's pretty darn close. I've basically been using the footage as a background object and dialing in the easing of the keyframes on either end to match the movement. Is there a better way of doing this?
-
Lastly, I'd love to send you a project file, but this is for an unreleased product, so I can't share a link in this forum. Is there a chat function where I can send it?
Thanks again!
-Zack
-
Thanks for the reply, Zack.
A two-point motion-controlled shot, with some information from the motion control software, could also be done with the Cinema 4D Camera Calibrator: calibrate the in and out cameras, then animate between the two. The key is to know the speed between the two points; perhaps some data from the dolly can help if it is software-controlled. Otherwise, a few more camera calibration setups might help to define that speed curve.
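For illustration, here is a minimal Python sketch of blending a single axis between two calibrated endpoints with a simple ease-in/ease-out curve. The function names and values are hypothetical; real dolly speed data should replace the ease curve if you have it:

```python
def smoothstep(t):
    """A simple ease-in/ease-out curve; stands in for the real speed data."""
    return t * t * (3.0 - 2.0 * t)

def blend_z(z_in, z_out, frame, first, last):
    """Camera z position at a frame, easing between the two calibrated endpoints."""
    t = smoothstep((frame - first) / float(last - first))
    return z_in + (z_out - z_in) * t

# hypothetical values: dolly from z = 0 cm to z = -300 cm over frames 0..119
for f in (0, 30, 60, 90, 119):
    print(f, round(blend_z(0.0, -300.0, f, 0, 119), 1))
```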
The Lens Distortion Profile created for the Tracker is a collection of Numbers.
https://help.maxon.net/c4d/en-us/#html/TOOLLENSDISTORTION.html?TocPath=Render%2520Menu%257CLens%2520Distortion%257C_____0
Based on the effort one puts into it, it can be quite accurate. There are two general distortions, pincushion and barrel; the real world is a bit more complicated. Glass elements of all shapes sit in a lens for various purposes, which results in complex mixtures, like a "mustache distortion", among other extremes. The key is to have enough lines to provide enough information; how many are needed is often also decided by analyzing the lens grid by eye.
At the moment, you have a profile for a specific lens (not the model, the lens itself, same serial number). With it, the camera tracker can estimate where tracker points really are, instead of tracker markers being pushed around by distortion, which leads to false results. With no distortion, the calculation can be done cleanly. The more parallax in the foreground, the better.
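To make the "collection of numbers" less abstract: radial distortion is commonly described by a small polynomial, as in this Python sketch (a generic model for illustration, not Cinema 4D's exact profile format):

```python
def distort_radial(x, y, k1, k2=0.0):
    """Apply a simple radial distortion to normalized image coordinates
    (0, 0 = image center). k1 < 0 gives barrel, k1 > 0 pincushion; opposite
    signs for k1 and k2 produce the "mustache" mixtures mentioned above."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# a point near the frame corner under mild barrel distortion:
print(distort_radial(0.9, 0.5, k1=-0.05))  # pulled slightly toward the center
```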
Now you have a standard camera (Cinema 4D) and, in 2023, need an RS Camera to get to work. First of all, save the project and make a copy of the camera. Use Convert to RS Camera; all keyframes are gone at that point. If the focal length is not animated, you can make that RS Camera a child of the tracked camera and set position and rotation to zero and scale to one (Reset PSR). You have both worlds now and can pick whichever camera you need. For any animated data that needs to survive, such as focus point or field of view, use XPresso.
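The parenting and Reset PSR step can also be scripted. A minimal sketch for the Cinema 4D Script Manager, assuming both cameras already exist in the scene under these (hypothetical) names:

```python
import c4d

def parent_and_reset(rs_cam, tracked_cam):
    """Make the RS Camera a child of the solved camera and reset PSR,
    so it inherits the tracked motion."""
    rs_cam.Remove()                     # detach from its current place in the hierarchy
    rs_cam.InsertUnder(tracked_cam)     # child of the tracked camera
    rs_cam.SetRelPos(c4d.Vector(0))     # position 0, 0, 0
    rs_cam.SetRelRot(c4d.Vector(0))     # rotation 0, 0, 0
    rs_cam.SetRelScale(c4d.Vector(1))   # scale 1, 1, 1
    c4d.EventAdd()                      # refresh the viewport

doc = c4d.documents.GetActiveDocument()
rs = doc.SearchObject("RS Camera")                  # hypothetical object names
solved = doc.SearchObject("Motion Tracker Camera")
if rs and solved:
    parent_and_reset(rs, solved)
```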
Ignore the sensor preset and enter your own data, based on the manual or your metadata.
Cinema 4D renders without distortion as well; the native result is based on math, not real glass. If one were to composite the renderings over the original footage now, the two images would not match. Depending on the distortion, there can be a lot of "life" (sliding) between the two "layers".
The key in any production is typically to leave the footage alone. If the footage is undistorted, each pixel mixes partially with its neighbor pixels, and the result has less quality. Perhaps you come from 6K and go down to 4K to overcome the weakness of the Bayer pattern; in that step an undistortion pass could be conceivable, but never after that.
I have personally not heard of it being done like that, but with 12K now affordable, things might change. Anyway, the 3D renderings are not Bayer-pattern based, so they are often sharper, and the undistortion helps to tame that a little. Based on that, the renderings are adapted to the footage, and not the other way around.
Now we have an option in Redshift to place a UV (ST map) image into the rendering workflow, which helps to get what is needed. However, the lens distortion profile above is number-based, not image-based (UV). The question now is: which application do you use for compositing? (A small sketch of what an ST map actually does follows the links below.)
Red Giant
https://help.maxon.net/rg/en-us/#html/vfx-lens-distortion-matcher/lens-distortion-matcher.html?TocPath=VFX%2520Suite%257CVFX%2520Lens%2520Distortion%2520Matcher%257C_____0
Intro Video
https://www.youtube.com/watch?v=tht_2DJA_9c
Nuke (ST Map):
https://learn.foundry.com/nuke/content/comp_environment/lens_distortion/adding_removing_lens_distortion.html
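For anyone wondering what an ST map actually does: each pixel's red and green channels store the normalized source coordinate to sample. A small NumPy/OpenCV sketch of that lookup (an illustration only, not Redshift's internal implementation; the V flip depends on your map's convention):

```python
import cv2
import numpy as np

def apply_st_map(image, st_map):
    """Warp an image with an ST map: red = normalized U (x), green =
    normalized V (y) of the source pixel to sample. st_map must be float
    data, e.g. loaded from a 32-bit EXR."""
    h, w = image.shape[:2]
    map_x = (st_map[:, :, 2] * (w - 1)).astype(np.float32)          # OpenCV loads BGR, so red is channel 2
    map_y = ((1.0 - st_map[:, :, 1]) * (h - 1)).astype(np.float32)  # V is usually bottom-up; drop the flip if not
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# plate = cv2.imread("render.png")
# st = cv2.imread("distort_stmap.exr", cv2.IMREAD_UNCHANGED)  # hypothetical file names
# comp = apply_st_map(plate, st)
```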
A similar workflow is available with Mocha from Boris FX.
The Pro-Mist (Tiffen) diffusion can be simulated to a certain degree with Red Giant Magic Bullet Looks:
https://help.maxon.net/rg/en-us/#html/magic-bullet-looks/camera-tools.html?TocPath=Magic%2520Bullet%2520Suite%257CMagic%2520Bullet%2520Looks%257CWorking%2520With%2520Tools%257C_____4
I have recorded over one hundred of my Tiffen filters, including the complete set of Pro-Mist (and Black Pro-Mist), etc. I shot with a linear LED light (CRI 96), all filtered, in a dark room covered in black Duvetyne, while recording a Macbeth chart. Red Giant then turned all of those images into data. You might have to adjust the results to your footage, of course, as the result varies with different lenses and sensor sizes. As a side note, a Pro-Mist filter will pick up the environment light, like lots of green if shot in a forest; perhaps adjust your results with a gray card accordingly if no matte box was used to prevent the tint.
I can share the email address, since we have no chat room here anymore, at my own request. (This is a forum, and any one-on-one conversation is useless for anyone else visiting here, so I try to stay out of one-on-one communication channels as much as possible; I like to keep everyone on board.) However, I will treat your shots as if under NDA. Only Maxon trainers have access to this email box. Put my name in the subject line so I see it.
BTW: I thought this might be of interest to you
https://www.cineversity.com/vidplaylist/red_giant_master_playlist/getting-started-with-vfx-supercomp
All the best
-
@Dr-Sassi thanks for the detail and all the helpful links!
I'm taking a look at the lens distortion as we speak, now that I have a very basic recreation of the camera move with my RS Camera. I think I'm still a little green here in shooting for tracking, but I hope that what I captured is good enough to work with.
I sent you a project file via dropbox to the email you provided. Hope to hear back from you soon!
Thanks.
-Zack
-
Thanks for the files, Zack.
After one-plus hours of downloading, it tells me it needs three more. I did not expect 15.6 GB; typically, proxies (JPG sequence/MP4) are just fine.
I'm not on a company-grade connection, and since I have to do other things, that time will stretch further.
Not sure if I can have a look at it today.
Cheers
-
Hi Zack,
As usual, I'm happy to have files; there is so much more information in them. I have sent the project back to you, but just the tiny C4D file, not the footage or textures. I have motion-tracked it freshly.
The sensor used for the footage is 38.1 x 20.1 mm; the camera in Cinema 4D was set to 36 x 24 mm. I found that the camera was set to 50 mm. With the only object sitting in the center and only a dolly-in, I would perhaps try to get it done without the lens distortion workflow.
So far, I can tell the lens used was at least a Sigma Art 50 or a Canon 50 mm or something similar/better; I believe the lens has little distortion.
The DOF is quite prominent; since I have no data on the container's size, the f-stop I have set is (consequently) way off. If you'd like to match it to the practical setup, perhaps place everything under a null and (as a best guess) set its scale to 0.05.
Please have a look at the Magic Bullet Looks setting; I have set up a Tiffen Pro-Mist. Since the size of the front element and its FOV play a role in the choice of strength, my best guess might be off; 1/4 or 1/2, I would say, but if a 1/8 strength was used, no surprise. Anyway, my suggestion is to apply it in post; don't bake it into the render.
The render has the background included, but that is more for preview. Composite it in later and use a puzzle matte for the container.
I do not see a need for GI here, but that is up to you. BTW, with that number of frames I would stick with 24 fps and image sequences, and conform later.
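On the image-sequence route: one small habit that helps before conforming is checking the rendered sequence for dropped frames. A quick Python sketch (folder and pattern are hypothetical):

```python
import re
from pathlib import Path

def missing_frames(folder, pattern=r"(\d+)\.(?:png|exr|jpg)$"):
    """List frame numbers missing from an image sequence, so gaps are
    caught before the sequence is conformed to 24 fps."""
    frames = sorted(
        int(m.group(1))
        for p in Path(folder).iterdir()
        if (m := re.search(pattern, p.name))
    )
    if not frames:
        return []
    have = set(frames)
    return [f for f in range(frames[0], frames[-1] + 1) if f not in have]

# print(missing_frames("renders/shot010"))  # hypothetical render folder
```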
My best wishes for your project
-
Thank you so much! You were spot on with the lens (a CN-E 50mm). After taking a look through and relinking the texture, I just have a few follow-up questions:
- Is the sensor size difference the cause of my track not being reliable? I'd love to recreate the track you got on my own so I can understand exactly where I went wrong.
- You mention applying the Magic Bullet Pro-Mist in post. Is there an option to render this filtration look out as its own pass? I'm pretty new to Magic Bullet, so I'm not sure of the correct workflow for using it.
Thank you so much for all the help on this project. I feel like I'm learning a lot, but it's great to have a reliable source for any roadblock I hit.
Best.
-Zack
-
Hi Zack,
Thanks for the feedback on the lens. I have collected lenses for many decades, and the variety of results with each is like "painting with lenses". This is certainly an excellent setup for that camera.
I have been using motion tracking for over two decades, and my take on it is certainly not shared by everyone. However, when one's work is on the big silver screen during festivals, any tiny slip can be seen; so far my work has been consistently stable. Since HD is no longer the standard, I assume we have to reach this kind of precision no matter what, and that is why I work the way I do. For many years my favorite tracker was Syntheyes, but these days I'm OK, most of the time, with Cinema 4D's tracker. It needs some knowledge and an eye for it, though; that is not an overnight skill. Take your time and be patient with yourself.
I know the developer of the tracker will tell me that I do more than needed. But this is my way of getting stable tracking if enough data is in the image:
I use a lens distortion grid. (Yes, sometimes it is not urgently needed, but it is nice to have one.)
I make sure I know the focal length (never trust the print on the lens; measure it if great precision is needed). Here too, the Cinema 4D tracker is relatively forgiving.
My main tip: on the first frame, create Auto Tracks, then track automatically.
Go to the end of the clip, create Auto Tracks again, then track.
Go to the middle of the clip, repeat (Create, Track)
If the clip is long, I do it at the quarter and three-quarter marks as well, and perhaps in between all of these points again.
That should typically fill the whole duration (see the little sketch after this paragraph for the pass order).
Then I move the time slider back and forth, looking for features (trackers) that do not agree with "their" surface, and I delete those. Again, the developer might tell you that the app will figure this out itself; but if mistakes are gone, no harm is done.
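The pass order above as a tiny helper, just to make the idea concrete (frame numbers only; the actual Create Auto Tracks and tracking steps happen in the Motion Tracker itself):

```python
def seeding_frames(first, last):
    """Frames at which to run Create Auto Tracks + automatic tracking:
    start, end, middle, then the quarter points for longer clips."""
    span = last - first
    passes = [first, last, first + span // 2, first + span // 4, first + 3 * span // 4]
    seen, ordered = set(), []
    for f in passes:  # keep the pass order, drop duplicates on short clips
        if f not in seen:
            seen.add(f)
            ordered.append(f)
    return ordered

print(seeding_frames(0, 240))  # [0, 240, 120, 60, 180]
```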
This is the point where I impatiently check if a 3D solve can be created.
If I'm not happy, I go to the analysis part of the work: the Motion Tracker Graph View. If there are not at least a dozen entries for each frame, I add more trackers. Here, too, some might tell you eight is enough; I don't think it is, unless they are all manually placed and spot on.
If all is fine, I try another solve.
Then use all the tags that can help orient the scene (Planar, Vector, Position), which helps to explore the solution better. The point cloud is a rough representation of the scene as the tracker saw it; if it makes sense in 3D, you are good to go.
The Magic Bullet Looks option is available on many platforms: Cinema 4D, After Effects, Premiere, DaVinci, etc. It is not an effect that can be rendered as its own layer or image and composited later; it needs to be applied during compositing (or, if no compositing happens, as a post effect on the render).
It should be applied last to the CG element, as by then this element perhaps has some light wrap on it, which needs to be there before the Pro-Mist is applied. Since the background has the filtration baked in, it should not get another dose; except when everything is comped, then on top of it all, if needed.
All the best
-
Your detailed explanations are so helpful, I'm truly grateful.
I think the biggest thing I was missing (aside from the sensor size being off) was not taking multiple passes at the auto track. I was unaware that was even an option from the tutorials I had been following, but it makes a ton of sense. Going through and deleting bad tracks is the next piece I need to get better at understanding.
I appreciate all of the help you've given me throughout this project and will post the final result in this thread once it is client approved and live. For now, the first review went great!
Best.
-Zack
-
Thanks a lot, Zack, for the feedback.
Yes, that is more my take on it. Sometimes I even increase the number of tracks (features, as others would call them).
I learned this from many courses I took over the years for MatchMover, Boujou, PFTrack, and of course Syntheyes, and I transferred that knowledge to the tracker in Cinema 4D. I also had the pleasure of knowing its developer; he explained his project to us back in 2005 in London during a PXC group meeting. So I can't take full credit for it.
My best wishes