One thing that influences this is:
The scene is set up with no light, so it falls back to frontal (headlight) illumination. Place a sphere with the white material, and the center will differ from the (tangential) edges. If light didn't work that way, the render would look like a flat, 2D-colored image.
The central part that alters the result is ACES and the full use of dynamic range inside the apps.
Redshift 3D has no option to work in a 0 to 1.0 space; it aims to be photorealistic, and that requires allowing values above 1.0.
To address this, I must say there is a common misconception that 100% in all sRGB channels is white. That is merely the point where all channels clip, and clipping happens to look white. In my view, any sRGB/integer material setup should be avoided. I hear you saying that the example has no saturation at all, so let's talk only about white.
If you take a white piece of paper and put a light on it, it will never reflect all the light back. Now think of a glossy porcelain cup, exposed so the white material sits at 100%: what happens to the reflection of the bright window? Since the white is already at 100%, the reflection will clip. So, what is the white you actually need in your image?
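A minimal sketch of that idea in Python. The intensity values are made up for illustration, not taken from Redshift:

```python
def clip_to_display(value):
    """Simulate an SDR display or 8-bit file: everything above 1.0 is lost."""
    return min(value, 1.0)

diffuse_white = 1.0        # white material already exposed to 100%
window_reflection = 5.0    # bright window seen in the glossy coating
surface = diffuse_white + window_reflection  # 6.0 in scene-referred terms

print(clip_to_display(diffuse_white))  # 1.0
print(clip_to_display(surface))        # also 1.0: the reflection detail is gone
```

Both results are identical after clipping, which is exactly why a 100% white material leaves no room for the window reflection.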
If you want to test this method, skip the reflective approach: set up a surface that produces 100% Emission and nothing else. That way you can follow that one value along your pipeline.
White is not clearly defined anywhere; search the color-science books and you will find it is not simple to explain or pin down. What color temperature does it have, and which value counts as white: the specular highlight, or the reflection of a super-bright studio lightbox?
The main idea is to define Diffuse White based on the exposure we get when setting up a scene with a gray card. From there, we can say that the light used for the gray card will produce a "Diffuse White" value when the gray card is swapped for a white card (dull and not reflective). We often have cards with a black, a gray, and a white field, whereby black is typically not zero!
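To put rough numbers on that: an 18% reflectance gray card and a roughly 90% reflective white card are typical values (assumed here, not taken from any specific card), and the standard sRGB transfer function encodes them like this:

```python
def linear_to_srgb(x):
    """Standard sRGB transfer function (IEC 61966-2-1)."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

gray_card = 0.18   # typical 18% reflectance gray card
white_card = 0.90  # typical dull white card; note it is NOT 100%

print(round(linear_to_srgb(gray_card), 3))   # ~0.461 -> the familiar "middle gray"
print(round(linear_to_srgb(white_card), 3))  # ~0.955 -> Diffuse White sits below 1.0
```

Diffuse White landing below 1.0 is the point: the remaining range is the headroom mentioned next.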
Above that Diffuse White, we leave headroom for super-whites and specular values.
There is more, but I can't squeeze a whole book into this post: how humans turn light frequencies into color and images, for example.
With that said, let's look at your settings. Thanks for the file!
In your settings, you have the Project set to sRGB, and the output file is a PNG.
ACES is set in the Redshift 3D Render Settings. My tip: drop that and set ACES in the Project settings instead.
Cinema 4D and Redshift 3D work in float and can therefore handle high-dynamic-range content.
With PNG, that is pretty much all gone: the data gets squeezed into a tiny container. Here is where your trouble starts.
We always need to check whether a value is actually stored in the image or only produced on your screen. With a PNG and no HDR display, both are limited.
To work better, ACES is built as a scene-referred pipeline, meaning the content is no longer squeezed wholesale into a small screen "space" (gamut and dynamic range). The display becomes a branched-off signal, and that signal is no longer the main pipeline, just an observer. The full data stays available until it is saved into a small format.
In that case, tone mapping is applied to avoid clipping values above 1.0. To tone map a larger dynamic range into the small space of an 8-bit/channel PNG, some headroom must exist for values above 1.0 or 100%. With ACES 1.0 SDR Video, the white you would like to see at 100% needs to be around 1630% in the scene, as that is what gets squeezed down to 100%. The white material in your scene would have to face the light perpendicularly to reflect most of it, but it doesn't. The white is then tone-mapped to something that looks grayish.
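To see roughly why a 100% white comes out grayish, here is Krzysztof Narkowicz's well-known analytic approximation of the ACES filmic tonescale. It is an approximation for illustration only, not Redshift's exact transform, so its numbers differ slightly from the real ODT:

```python
def aces_fit(x):
    """Narkowicz (2015) analytic fit of the ACES filmic curve.
    Input: scene-referred linear value; output: display-linear, clamped to 0..1."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return max(0.0, min(y, 1.0))

print(round(aces_fit(1.0), 2))   # ~0.8 display-linear: scene "white" ends up below 1.0
print(round(aces_fit(16.3), 2))  # ~1.0: only far brighter scene values reach display white
```

A scene value of 1.0 lands well below display white, while it takes a value in the region of 16x to actually hit 1.0, which is the same effect as the 1630% figure above.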
You can set Compensate for View Transform to off and the View to Un-tone-mapped in the Redshift settings. This should give you a white that, when lit, reads as 100%.
Here is one part of the problem: logo colors do not survive the export to SDR very well. Again: one part, not all of it. That goes deeper into color science than I would like to go here. I am working on something that will hopefully clear up the false information in so many sources, which miss the components this professional pipeline is built on.
In short, and within the limits of an analogy: ACES works like the raw format in cameras, while producing 8-bit/channel PNGs or JPGs works the way a camera does when it squeezes as much data as possible into a small file. Hence the shifts, and why no working photographer enjoys JPG as a format during the pipeline.
You either have a cut-out of the 0 to 1.0 range, where anything above is clipped and appears white even if it had color before, or you tone map and get a more naturalistic representation of our perception. Both at the same time is not possible.
A word about ACES: it is not a look, nor does it try to be one. The filmic appearance comes from that limited export. But we are long past an SDR-only workflow; most people (AFAIK) already have an HDR TV or are thinking of buying one. Producing in SDR for this target group will look dull at some point, as I have pointed out for over ten years now in the Cineversity Forum.
I hope that helps a little bit.
If not, please have a look here:
All the best