Data copies from the Linux hardened repo to on-prem S3 blow up space consumption on the object storage, and we have no clue why.
In this case, disk consumption on the Linux repo is about 900 TB (after compression and dedup) on XFS. After copying the same data to the object storage scale-out repository (using the 4 MB block size recommended in the best-practices guide), the object storage currently holds 1.66 PB of data and the cluster is almost full (86%).
The disk space was estimated with the Veeam sizing tool, which predicted about 1.1 PB on object storage plus additional room to grow.
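For clarity, here is the gap between the sizing estimate and what we actually see, as a quick sanity check (numbers are the ones from this post; treating TB/PB as decimal units is my assumption):

```python
# Rough overhead check: Linux repo size vs. object storage consumption.
linux_repo_tb = 900        # XFS repo after compression & dedup
object_actual_tb = 1660    # current object storage usage (1.66 PB)
object_expected_tb = 1100  # Veeam sizing tool estimate (1.1 PB)

actual_overhead = object_actual_tb / linux_repo_tb      # ~1.84x
expected_overhead = object_expected_tb / linux_repo_tb  # ~1.22x

print(f"actual overhead:   {actual_overhead:.2f}x")
print(f"expected overhead: {expected_overhead:.2f}x")
```

So the copy consumes roughly 1.84x the source repo size instead of the ~1.22x the sizing tool predicted.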
The second issue is that the object storage receives far fewer S3 object deletions from Veeam than it should. The immutability period (+3 weeks) has already expired, so many more objects should be getting deleted than actually are.