Hi Georg,
The workflow is easy to explain but needs some work to apply.
The practical camera image contains lens distortion, and the idea is typically that the footage itself is left untouched.
There are two options to adapt the rendering to the camera image.
The first is based on a setting in the Standard Renderer called Lens Distortion.
https://help.maxon.net/c4d/2024/en-us/Default.htm#html/VPPHLENSDISTORT.html
The Lens Profile that was created while tracking the footage is applied here. The result is then typically ready for compositing.
The care that was put into the Lens Profile will show here. Typically, only barrel and pincushion distortion are mentioned, but some lenses produce a mixture of both; often, the distortion near the center differs from that near the edges of the frame.
This is not available for Redshift 3D.
The second one relates to Redshift and is based on specific information from the compositing app, called an ST-Map. Think of it as a UV color-based map, which can be created from two linear gradients layered additively on top of each other.
https://help.maxon.net/r3d/cinema/en-us/Default.htm#html/Lens+Distortion.html?
These maps are an industry standard and easy to produce in NUKE (The Foundry), for example.
To create your own, one can use the Standard Renderer. The project needs to be strictly in Linear workflow, and every element (color knot) in the two Gradients needs to be set to linear as well.
This would lead to a map that covers exactly the camera image. In other words, it does not yet address the requested "padding". If that process is applied with the non-padded Lens Profile, the information will not match.
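For reference, such a neutral (undistorted) ST-Map can also be written directly as numbers outside of Cinema 4D. A minimal sketch in Python, assuming NumPy and an EXR-capable imageio install (both are my assumptions, not part of the C4D workflow); WIDTH and HEIGHT are placeholders for your render resolution:

```python
import numpy as np
import imageio.v3 as iio

WIDTH, HEIGHT = 1920, 1080  # replace with your (later padded) resolution

# Pixel-centre coordinates, normalised to the 0..1 range.
u = (np.arange(WIDTH) + 0.5) / WIDTH
v = (np.arange(HEIGHT) + 0.5) / HEIGHT

stmap = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)
stmap[..., 0] = u[None, :]      # red   = horizontal (S) ramp, left to right
stmap[..., 1] = v[::-1, None]   # green = vertical (T) ramp, bottom to top --
                                # flip it if your compositor expects top-down
# blue stays 0

# The map must be stored linearly; EXR keeps the float values untouched.
iio.imwrite("neutral_stmap.exr", stmap)
```

This is exactly the two layered linear gradients described above, just produced without a render.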
Now it gets a little more complex.
Padding the Render Frame
The camera framing and the lens distortion are both fixed in the practical footage. Since we would like to add padding, the lens distortion needs to be created for the practical footage plus the padding. This lets us apply an assumed lens distortion to the padding, as we do not know what distortion the padding would really have. The padding has to be done frugally, meaning only as much as needed.
This padding is done by increasing the resolution of the rendered image, which achieves very little if the camera's field of view is not adjusted as well. The one thing we know for sure about the new field of view is that it will be wider.
Calculating the new field of view is a math (trigonometry) problem defined by the new frame (camera resolution plus padding on the left, right, top, and bottom). In short, if we crop the new, larger image back to the camera resolution, the effective field of view (and with it the focal length) will match the camera footage, provided the ST-Map was used in the RS Camera at the larger (padded) resolution. I would suggest adding the padding so the aspect ratio of the footage stays the same.
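Here is a minimal sketch of that trigonometry in Python; the function and the example numbers are mine, chosen only to illustrate the relation for one axis with a symmetric pad:

```python
import math

def padded_fov(fov_deg, res, padded_res):
    """Widen the field of view so the padded frame keeps the same
    pixel scale: tan(fov'/2) = (padded_res / res) * tan(fov/2)."""
    half = math.radians(fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) * padded_res / res))

# Example: 1920 px wide footage with a 36 deg horizontal FOV,
# padded by 96 px on the left and 96 px on the right:
print(padded_fov(36.0, 1920, 1920 + 2 * 96))  # ~39.3 degrees
```

The same function works for the vertical axis; keeping the pad proportional on both axes preserves the aspect ratio, as suggested above.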
If the camera has a fixed focal length, this can be done with a short calculation like the one above and a one-time setup, but if the field of view varies, XPresso is the way to go.
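For a varying (animated) lens, the same relation can live in an XPresso Python node. A sketch, assuming the node's default Input1/Input2/Output1 port names, with Input1 wired to the tracked focal length (mm) and Input2 to the sensor/aperture width (mm):

```python
import math

PAD_FACTOR = 1.1  # padded width divided by the footage width, e.g. 2112/1920

def main():
    global Output1
    half = math.atan(Input2 / (2.0 * Input1))       # half of the current FOV
    wider = math.atan(math.tan(half) * PAD_FACTOR)  # widened by the padding
    # New focal length that produces the wider FOV on the same sensor width.
    Output1 = Input2 / (2.0 * math.tan(wider))
```

Because the tangents scale linearly, this boils down to Output1 = Input1 / PAD_FACTOR, but it is spelled out here so the trigonometry stays visible. The result then drives the RS Camera's focal length.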
The previously rendered UV/Gradient map (the ST-Map) then goes into the RS Camera's Lens Distortion settings.
With this padding, the missing parts of the frame are filled.
If you would like to see this integrated into a workflow, please request it via "Share Your Idea" on the support page below. I have done so pretty much every year myself.
https://www.maxon.net/en/support-center
All the best