Fixing a tracker camera movement
-
Hello
I've got a motion tracker solve that makes the same mistake every time I run it: the camera moves along the Y axis even though there is no such movement in the captured footage.
You can see this here in the side view:
https://capture.dropbox.com/8yjh2s2165ADc0P3
The footage is just a zoom forward with a slight pan, but nothing else. Is there anything I can do in the tracker settings to "guide" it to avoid making this movement?
This is the image sequence I am using:
https://www.dropbox.com/s/cihr9ioojjkqeuv/WashShop_1035.jpg?dl=0
Thanks for any thoughts on how to improve this track!
-
Hi AlexC,
Is there any way I could get the image sequence?
Zooming with a little pan (left to right or vice versa) while the camera stays in one position will not deliver anything a motion tracker can use.
Motion tracking is based on the camera changing its position in space. Zooming and panning (or rolling and tilting) produce no change in the perspective of the image, so there is no parallax. Of those, only the pan could deliver something; typically the resulting point cloud looks like a partial sphere around the camera. Zooming does not even provide that, because the points that would build the sphere simply scale.
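To illustrate the idea with a tiny pinhole sketch (my own illustration with made-up numbers, not anything from the tracker): under x = f·X/Z, changing the focal length scales every tracked point by the same factor, while moving the camera scales near points more than far points. That second effect is the parallax a solver needs, and a zoom does not produce it.

```python
import numpy as np

# Two hypothetical points, one near and one far (camera looks down +Z).
points = np.array([
    [1.0, 0.5, 5.0],    # near
    [1.0, 0.5, 50.0],   # far
])

def project(pts, f, cam_z=0.0):
    """Simple pinhole projection: camera at (0, 0, cam_z), focal length f."""
    X, Y, Z = pts[:, 0], pts[:, 1], pts[:, 2] - cam_z
    return np.stack([f * X / Z, f * Y / Z], axis=1)

base = project(points, f=35.0)

# Zoom: f 35 -> 70. Every point scales by exactly 2x -- no depth information.
print(project(points, f=70.0) / base)            # [[2. 2.] [2. 2.]]

# Dolly: camera moves 2.5 units forward. The near point grows ~2x,
# the far point only ~1.05x -- that difference is parallax.
print(project(points, f=35.0, cam_z=2.5) / base)
```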
If I understand the shot correctly, manual tracking might be needed.
All the best
-
Hey Sassi
Thanks for that clarification. I've done ok on some pure zoom shots, but yes, this one has a bit of a pan too...
Here is the image sequence, in case you think anything can be done to help it?
https://www.dropbox.com/sh/fgsj6p6mu6up833/AABe-lqXu1qNrP8la4b_sIJUa?dl=0
Thanks!
-
Hi Alex,
Thanks for the footage!
Here is the best I can get from the data I have. There is no sliding or micro-shift.
https://stcineversityprod02.blob.core.windows.net/$web/Cineversity_Forum_Support/2023_PROJECTS_DRS/20230409_CV4_2023_drs_23_MCcm_11.c4d.zip
Additionally, though not really precise with this footage:
Typically I would do a Camera Calibration first. For that I needed some "set survey" data, which Google Maps offers:
https://www.google.com/maps/@50.8517171,4.3464252,3a,60y,120.81h,58.89t/data=!3m6!1e1!3m4!1shPRwkBz5XwHdGgzwmhN7kQ!2e0!7i16384!8i8192
The EXIF GPS tags in the image read:
GPSLatitude: 50,51.072N
GPSLongitude: 4,0,0E
Any other camera data was stripped from the EXIF, so I could only make a wild guess that it was a mobile phone, but not which one…
I was not sure whether the space behind the two iron poles was parallel; the Google Maps view helped with that. The inside of the Maximousse does not seem to be perpendicular to the sidewalk. The Camera Calibration allows you to position the camera in space, if done well.
Cheers
-
Hey Sassi!
That's amazing! So what did you do? How does the Google Maps data help you with this?
This was done with a Google Pixel 7. Do you calibrate the camera first and then track?
Thanks for any clarification, this is awesome!
-
You're very welcome, AlexC.
First, I explored the tracking and excluded some options to see if that helped. No luck.
Then I used the Camera Calibrator, but I always got offsets in the camera, even when I used only data from outside or only from inside the building. I used one set as a starting point and manually searched for the key positions. Well, after an hour I gave up on that concept. I think it never really matched: the material was stabilized, not raw. I stopped checking at that point whether I could find rolling shutter; nothing directly pointed to it.
The camera calibration works with parallel and 90° relationships, nothing else. So I had to be sure which lines I used and how reliable they were. Typically that information would be delivered as a set survey: a sketch with some measurements, the distance from the camera to an object in the scene, the size of objects, and roughly what the set looked like. Google Maps helped me explore that and pick reliable features instead of questionable ones.
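For anyone curious about the math behind those parallel and 90° relationships, here is a generic sketch of the standard vanishing-point idea (my own illustration, not the Camera Calibrator's actual code): two sets of image lines that are parallel in the real world meet at vanishing points, and if the two real-world directions are perpendicular and the principal point is assumed to be the image centre, the focal length follows from one equation. The vanishing-point coordinates below are made up.

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Focal length in pixels from the vanishing points of two orthogonal
    world directions, assuming square pixels, zero skew and a known
    principal point: f^2 = -(v1 - c) . (v2 - c)."""
    d1 = np.asarray(v1, float) - principal_point
    d2 = np.asarray(v2, float) - principal_point
    f_sq = -np.dot(d1, d2)
    if f_sq <= 0:
        raise ValueError("vanishing points are not consistent with 90° directions")
    return float(np.sqrt(f_sq))

# Hypothetical vanishing points, e.g. from lines along the shop front and
# along the sidewalk, in a 4080 x 3072 frame (numbers for illustration only):
center = np.array([2040.0, 1536.0])
f = focal_from_orthogonal_vps((5200.0, 1400.0), (-900.0, 1700.0), center)
print(round(f, 1), "px")   # ~3051.7 px with these made-up points
```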
I checked the options in SynthEyes and got nothing automatic. Spending another hour on the manual/guided route was not my aim.
So I went with the following approach: zooming and panning is pretty much a 2D problem; as mentioned, there is no parallax, which means only the Camera Calibrator might be able to derive a camera position from a single image.
I loaded the footage into the current version of Ae and started the tracking with Zoom enabled. Using a Cineware layer, I could "Merge" the camera into the C4D file and check it.
With the two "squares" that I had retrieved from Camera Calibration, I could get some (not perfect!) spatial relations into the scene. But it is stable and does no sliding.In short, I would try to move the Camera before and after a lot and slowly while avoiding any automatic calculations. Shooting a Lens grid, a wall with square tiles will do, and locking the Camera on one lens only—no switching of lens nor cropping.
Knowing which phone was used is crucial, as it lets us gather information about its camera. Shooting at the highest resolution and doing a post-production zoom instead of a camera-based one will help a lot, as you can composite the parts into the full-resolution frame and then control the zoom nicely.
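A post zoom is really just an animated crop-and-scale of the full-resolution plate, which leaves the solve untouched. A minimal sketch of that crop math (function name, sizes, and the zoom value are my own placeholders):

```python
import cv2

def post_zoom(frame, zoom, out_size):
    """Center-crop the frame by `zoom` (>= 1.0) and scale back to out_size (w, h)."""
    h, w = frame.shape[:2]
    cw, ch = int(w / zoom), int(h / zoom)
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, out_size, interpolation=cv2.INTER_LANCZOS4)

# e.g. deliver 1080p from a 4K plate while ramping the "zoom" to 1.5x:
# out = post_zoom(frame_4k, zoom=1.5, out_size=(1920, 1080))
```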
This is why people now use 8K or even 12K cameras.
My best wishes for your project!
-
Hello Sassi
I can't believe how much effort you put into this! Thank you so much; that's deeply appreciated and humbling. Thank you for the explanation and for the tips! And deep gratitude for your time!
-
You're very welcome, AlexC.
I love to look into any of those cases. It also helps me to see what we can do right now and not rely on stuff that worked "yesterday". (Like answering from memory…)
With every question here, I feel I get a win for myself as well. So, thank you for the questions.
Enjoy your project.