Hi Georg,

Some unrelated text (a book-review note) from my grammar checker accidentally ended up in this post; please ignore it, and feel free to delete the email. Thank you.

Thanks for the footage.
Please, no unknown cloud services. I'm not familiar with miro•com, and I will not touch it.
If you could share the storyboard via WeTransfer instead, that would be cool. Thanks for the extra effort.

I have checked the Bahnsteig (train platform) footage.
Please have a look here:
https://stcineversityprod02.blob.core.windows.net/$web/Cineversity_Forum_Support/2023_PROJECTS_DRS/20231209_CV4_2024_drs_23_TRbs_01.zip
0547_Bahnsteig_padding.jpg

I have invested quite a few hours, and the closest I can get with it is based on keeping the tracking points all in the middle area, in a strip running left to right.
Since rolling shutter is essentially a vertical time difference (the sensor reads the scanlines from top to bottom), restricting the trackers to a narrow horizontal strip helps to limit the problem.
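To put a rough number on why the narrow strip helps, here is a minimal Python sketch; the frame height, readout time, and pan speed are assumptions for illustration, not values measured from your camera:

```python
# Rough estimate of rolling-shutter skew: the horizontal offset between
# two scanlines for a feature panning across the frame.
# All numbers are assumptions for illustration, not measured values.

frame_height_px = 2160          # assumed UHD frame height
readout_time_s  = 0.015         # assumed top-to-bottom sensor readout (~15 ms)
pan_speed_px_s  = 800.0         # assumed horizontal feature speed in px/second

def skew_px(row_a: int, row_b: int) -> float:
    """Horizontal offset between two rows caused by the readout delay."""
    dt = abs(row_b - row_a) / frame_height_px * readout_time_s
    return pan_speed_px_s * dt

# Trackers spread over the full frame height: top vs. bottom of frame
print(f"full frame:   {skew_px(0, 2160):.2f} px")    # 12.00 px of skew
# Trackers confined to a 200 px middle strip: an order of magnitude less
print(f"middle strip: {skew_px(980, 1180):.2f} px")  # 1.11 px of skew
```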

When you look at the walk area in the front, there are many compression artifacts; MP4 encoders work on the assumption that dark tones do not matter so much and compress them more aggressively. The tracker, however, believes these artifacts are information from the surface.
Given this and everything I have mentioned above, it is unwise to derive a lens-distortion map from footage with so many artifacts included, but I did it anyway, just to get anything done.

Saying that you don't want to work with any treatment of lens distortion is just not an option. End of story -- sorry, there is no discussion when a lens with distortion is used, like the one here, mixed with rolling shutter, stabilization (my best guess, based on the spikes in the F-Curve), and MP4 artifacts (blocking and motion).
Since I'm writing in a forum, I share details that anyone reading along should have.
This is not the first time in my 17-plus years of running this forum that I have received a request for help after the shooting is done. This is not how it works. The blueprint phase is where the storyboard is created, and from there, everyone who could help brainstorms about it. That is the cheap phase, where nothing is set in stone and things can still be designed to work well.
Yes, I get that the lens-distortion workflow is not simple, and quite frankly, since we are talking about a whole pipeline, it needs some knowledge. Red Giant has a solution for many apps, but all of them are for 2D work.

There is also not a lot of correct information available about lens-distortion profiles and padding. If the image is rendered larger, the camera must be adjusted so that a wider field of view correctly produces the extra pixels. But, and this is super important, the lens-distortion profile created for the un-padded image can't (!) be used for the padded version, as it would be applied across the larger image and would no longer be correct for the original image area.
Padding_and_Modelmovement.jpg
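Here is a minimal sketch of that field-of-view adjustment, assuming a simple pinhole model; the original width and FOV below are placeholder numbers, so substitute the values from your own solve:

```python
import math

# A minimal sketch of the FOV adjustment for a padded render.
# The width and FOV here are assumptions for illustration only.

orig_width_px   = 5120    # assumed un-padded render width
padded_width_px = 6144    # padded width from the suggestion below
orig_hfov_deg   = 60.0    # assumed horizontal FOV of the solved camera

# The focal length in pixel units stays the same; only the image
# "window" grows, so the FOV must widen to produce the extra pixels.
f_px = (orig_width_px / 2) / math.tan(math.radians(orig_hfov_deg) / 2)

padded_hfov_deg = math.degrees(2 * math.atan((padded_width_px / 2) / f_px))
print(f"padded horizontal FOV: {padded_hfov_deg:.2f} degrees")  # ~69.41
```

The focal length in pixel units stays fixed while the image window grows, which is exactly why the un-padded distortion profile no longer fits the padded version.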

In your case, the scene needs to be rendered larger, as the distortion needed for the 3D rendering makes things smaller. The 6144-pixel-wide suggestion is for the train platform, as even 6K does not cover the corners completely. These padding workflows are (again) not simple and need care. See the image below; the red area is the padding here. (Why always these screaming colors? So it is not accidentally overlooked during deadline stress.)
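If it helps to automate that overlay, here is a small sketch using Pillow; the file names and the plate are hypothetical, and only the 6144-pixel target width comes from the suggestion above:

```python
from PIL import Image

# Extend the plate to the padded size and fill the new border with a
# "screaming" red so nobody overlooks it under deadline stress.
# File names are hypothetical placeholders.

plate = Image.open("plate.jpg")                  # hypothetical un-padded plate
padded_w = 6144                                  # target padded width
padded_h = round(padded_w * plate.height / plate.width)  # keep aspect ratio

canvas = Image.new("RGB", (padded_w, padded_h), (255, 0, 0))  # red padding
offset = ((padded_w - plate.width) // 2, (padded_h - plate.height) // 2)
canvas.paste(plate, offset)                      # plate centered in the canvas
canvas.save("plate_padded.jpg")
```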

Let this all sink in for a while. I believe it clicks at some point, and then it is fun to do.

Sometimes it might be useful to set this up as camera mapping, especially with the train platform, as the train running through will cover a lot and take all the attention.

One tip: if there is nothing in the scene itself that moves, move the camera much slower; the "factor of slower" is pretty much the "factor of less rolling shutter".
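In numbers, reusing the assumed readout time from the strip example above, the relationship is linear:

```python
readout_time_s = 0.015  # assumed full-frame readout time, as above

for speed_px_s in (800.0, 400.0, 200.0):  # each step: camera twice as slow
    skew = speed_px_s * readout_time_s    # top-to-bottom skew in pixels
    print(f"{speed_px_s:6.0f} px/s -> {skew:5.2f} px skew")  # 12, 6, 3 px
```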

Here is a little example; since it is late, I stopped the train for the last few seconds.
https://stcineversityprod02.blob.core.windows.net/$web/Cineversity_Forum_Support/2023_Clips_DRS/Bahnsteig_2b.mp4

It was a long day, close to midnight here in L.A., and a Saturday. I'll call it a day.

All the best