I am already soliciting feedback from the community and the Vanguards in Slack about an issue we are seeing, but I wanted to get input here as well.
Scenario: we have a customer on v12 with the latest patches sending data to one of our DCs, which runs a very large Hitachi HCP Cloud Scale cluster. They are sending over 20 PB of data to us.
Object Lock is turned on for some of the buckets, and we are seeing billions of GET/PUT/LIST requests, which is wreaking havoc on the HCPCS cluster worker nodes.
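In case it helps with triage, here is a minimal boto3 sketch for confirming which buckets actually have Object Lock enabled on the S3 endpoint (the endpoint URL is a placeholder, credentials are assumed to come from the environment, and this is purely illustrative, not how Veeam itself talks to the storage):

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint - substitute the HCP Cloud Scale S3 endpoint for your DC.
s3 = boto3.client("s3", endpoint_url="https://hcpcs.example.com")

for bucket in (b["Name"] for b in s3.list_buckets()["Buckets"]):
    try:
        cfg = s3.get_object_lock_configuration(Bucket=bucket)
        status = cfg["ObjectLockConfiguration"].get("ObjectLockEnabled", "Disabled")
    except ClientError as err:
        # Buckets created without Object Lock return ObjectLockConfigurationNotFoundError.
        if err.response["Error"]["Code"] == "ObjectLockConfigurationNotFoundError":
            status = "not configured"
        else:
            raise
    print(f"{bucket}: Object Lock {status}")
```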
The client has configured their jobs with the default block size of 1 MB, and we were thinking of changing that to 4 MB, but that would not make sense for 106-byte files. That is the size of the files we are seeing the issue with, across three buckets: two of them are Windows servers and one is Linux.
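For the block size question, this is the rough back-of-the-envelope math driving our thinking. It is order-of-magnitude only: it assumes roughly one object per backup block and ignores compression, metadata objects, and the extra per-object retention calls Object Lock adds on top:

```python
# Rough estimate of how many objects ~20 PB of data lands as at different
# Veeam block sizes. Every object then needs its own PUT (plus extra calls
# for Object Lock retention), which is where the billions of requests come from.
TOTAL_BYTES = 20 * 1024**5  # ~20 PB

for block_mb in (1, 4, 8):
    block_bytes = block_mb * 1024**2
    objects = TOTAL_BYTES / block_bytes
    print(f"{block_mb} MB blocks -> ~{objects:,.0f} objects")

# 1 MB blocks -> ~21,474,836,480 objects
# 4 MB blocks -> ~5,368,709,120 objects
# 8 MB blocks -> ~2,684,354,560 objects
```

Even at 4 MB that is still billions of objects, so the block size change alone may not bring the request volume down all that much.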
Any thoughts, or has anyone run into this before? I am working with Veeam and Hitachi together on it as well, but if this rings a bell for anyone, your input would be great.
Thanks.