Hi PSCGI,
Thanks for the files.
Here are the three tracking scenes back; the footage is not included.
https://stcineversityprod02.blob.core.windows.net/$web/Cineversity_Forum_Support/2024_PROJECTS_DRS/20240806_CV4_2024_drs_24_TR01-03.zip
First, without a lens grid, there is no precise tracking, unless your lens has no distortion. Which is rather rare, as there is always something. Shooting a grid up close and then filming at a greater distance makes no sense; if the lens "breathes" and the distortion changes, the points move differently and faster in different places. A stable lens is not a given, especially with drones (shot three).
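To see why a breathing lens is such a problem, here is a minimal sketch (Python with NumPy; the one-term radial distortion model and the coefficient values are my own illustration, not measured data): when the distortion coefficient changes with focus, points far from the image center shift much more than points near it, so one static grid correction cannot fix it.

```python
import numpy as np

def distort(points, k1):
    """Apply a simple one-term radial distortion model.
    points: (N, 2) normalized image coordinates (image center = 0, 0).
    k1: radial distortion coefficient (this changes when the lens breathes).
    """
    r2 = np.sum(points**2, axis=1, keepdims=True)  # squared radius per point
    return points * (1.0 + k1 * r2)

# Two tracked points: one near the image center, one near the edge
pts = np.array([[0.1, 0.0],
                [0.9, 0.0]])

# The same small change in k1 (focus breathing) moves them very differently
shift = distort(pts, k1=-0.10) - distort(pts, k1=-0.05)
print(shift[:, 0])  # the edge point moves far more than the center point
```

The numbers are arbitrary, but the behavior is the point: the feature motion caused by breathing is position-dependent, which is exactly what a tracker misreads as camera motion.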
If there is no set survey, then you have even more guesswork. A set survey is a measurement of objects on set, preferably XYZ-oriented.
Your footage did not show an obvious rolling shutter.
I had no measured focal length. (What is printed on a lens is only an orientation, unless it has been measured and the lens (again) is not breathing.) Zoom lenses are an even more difficult theme.
Far-away tracker markers are often useless. Tracker markers on leaves moving in the wind are useless and sometimes kill the precision.
Trackers based on changing light, reflections, or anything else that moves must all be deleted.
Manual tracking means frame by frame. Each one! With 1,500+ frames in one clip alone, good luck with that test of your patience.
Is seven trackers enough? If the footage is 100%, the lens has no distortion or is measured, there is no motion blur, and all seven are stable from start to finish, perhaps. Please don't count on such perfect situations. I have tracked since the late '90s and have never gotten a shot of that quality. (MatchMover, Boujou, SynthEyes, and the trackers in After Effects, Nuke, or Fusion are the others I use occasionally. None of them has ever given me a one-click, ready-to-use result.)
The mesh thingy is an idea you can find, for example, in SynthEyes, and has been around for at least a decade. It assumes that a cluster of tracking points occupies the same object with a solid shape, similar to coalescing trackers into a cloud, which is used, for example, in Mocha (Planar Tracker) or DaVinci Resolve. (Some might argue I'm simplifying here, and I would agree.)
Switching between the 2D Tracks might bring the visuals back on screen.
It would help if you inspected the Graph Editor of the Motion Tracker.
I did not find them difficult to track, even though I had zero data applied to them, which is normally a big no to start with. Setting survey and camera data is a must.
Reflections (and, again, trees) need masks or must be cleaned up later; I would not trust the algorithm to sort them out on its own. Oceans and moving clouds are the same idea and can destroy the solve.
Tracking features that move tangentially to the camera are less preferable. There are many more scenarios, but they are often specific to a shot; I could not finish this post even if I tried to share them all.
The tracker tries to find where the tracking features are; the solve freezes them in space. Any Vector, Position, or Planar constraint lets you rotate, scale, and position the solve; the feature cloud keeps its internal relations.
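That "the cloud keeps the relation" can be shown with a tiny sketch (Python with NumPy; the points and the transform are made up for illustration): rotating, scaling, and repositioning the whole solved cloud changes where it sits in the scene, but every distance inside the cloud scales by the same factor, so the internal relations survive.

```python
import numpy as np

# A tiny made-up feature cloud (four solved points in 3D)
cloud = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])

# A similarity transform: rotate 90 degrees around Y, scale by 2, then offset
theta = np.pi / 2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
moved = 2.0 * cloud @ R.T + np.array([5.0, -1.0, 4.0])

def pairwise(p):
    """All pairwise distances within a point set."""
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt(np.sum(diff**2, axis=-1))

# Every internal distance scales by exactly 2: the relations are preserved
print(np.allclose(pairwise(moved), 2.0 * pairwise(cloud)))  # True
```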
Once the cloud is defined, each Null sits where its tracker was; while the best trackers define the cloud's position, many are not where they were in reality; there is a tolerance.
You are using different trackers for the planar constraint and getting different results, which is easy to understand if you look at the side view of the feature cloud.
Screenshot 2024-08-06 at 2.39.58 PM.jpg
Tim Dobbert has written a book about this, and I can't repeat all of it here; from my point of view, it is a must-read.
My best wishes