Solved

S3 repository overflow


Userlevel 2
Badge

Hi

We are running VBR 11.0.1.1261, using a SOBR with a QNAP as the capacity tier. 

QNAP is a supported S3 repository for Veeam, and Veeam is partnering with QNAP. 

The immutability option should also work, as stated in https://www.veeam.com/sys469.

After running this setup for a while, we started getting strange errors:  

  1. Error deleting multiple items from SOBR (invalid URI), which refers to the cleanup operations on the S3 repository based on the retention policy. VBR runs these operations at regular intervals. 
  2. In Veeam ONE - Data Protection View we get an error of type [v] (never seen before) and a Knowledge note that states: S3 object storage added to VBR as a repository may face overflow with small metadata files of 0 KB size. Cause: malfunction in the object delete algorithm. Resolution: contact support to get a private fix. (A rough way to spot such objects is sketched below.) 
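
For reference, here is a minimal sketch (not something we have run in production) of how such 0 KB objects could be spotted directly on an S3-compatible endpoint like QNAP QuObjects, assuming boto3 is available. The endpoint URL, bucket name, prefix and credentials below are placeholders, not our real values.

```python
# Minimal sketch: list objects under the Veeam prefix on an S3-compatible
# endpoint (e.g. QNAP QuObjects) and flag the 0-byte objects the Veeam ONE
# note describes. All connection details below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://qnap.example.local",   # placeholder QuObjects endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder
    aws_secret_access_key="SECRET_KEY",          # placeholder
)

zero_byte_keys = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="veeam-sobr", Prefix="Veeam/"):  # placeholders
    for obj in page.get("Contents", []):
        if obj["Size"] == 0:
            zero_byte_keys.append(obj["Key"])

print(f"Found {len(zero_byte_keys)} zero-byte objects")
for key in zero_byte_keys[:20]:  # print only the first few
    print(key)
```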

To get backups to the SOBR working again, we had to delete objects from the QNAP using QNAP utilities. 

Now we are trying to synchronise the SOBR with the VBR database, but it takes forever. 

Has anybody come across this situation with AWS S3 or S3-compatible storage, and how did they resolve the issue and make sure it does not come back again? 

All long-term archives are in the S3 capacity tier and we are afraid of losing them. 

Kind Regards, 

Boris 


Best answer by jbwoodoo 13 May 2023, 11:58


5 comments

Userlevel 7
Badge +20

Morning! They’ve only performed testing on the specific hardware in that article, so how closely does your specification align with it? If you’ve got a weaker CPU, less RAM or a lack of flash storage, then performance will be greatly impacted. What’s your block size and the amount of data within your object storage repository?

Userlevel 2
Badge

Hi MicoolPaul

Thank you very much for your reply. 

We are running a TS-1283XU-RP. The CPU is an E-2124 @ 4.3 GHz with 8 GB RAM. 

I do not doubt the testing that has been carried out, and in general our setup had been working since December 2022. 

As recommended by Veeam, we have configured immediate copy to the capacity tier (in our case the QNAP), and we have only filled the NAS up to 7 TB. Our capacity is 32 TB. 

The maximum block size we could configure for the volume is 64 KB. 
Of course, our QNAP unit has a dual 10 Gb/s network adapter. 

What about QNAP snapshots on this S3 volume? Should I activate them?   

As recommended, I upgraded to VBR 11.0.1.1261 P20230227. This should overcome the problem of incorrectly deleting objects from the capacity tier. 

But I am still getting errors when running the SOBRStore01 offload:

Error: DeleteMultipleObjects request failed to delete object [Veeam/Arquive/PRD01…. Invalid URI
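
For what it is worth, this is roughly how I would try deleting one of the affected keys directly against the QNAP endpoint to see whether the key itself is the problem. The endpoint, bucket, credentials and key below are placeholders (the real key is truncated in the log above), and I would only try this with support’s blessing.

```python
# Sketch: issue the same kind of batch delete for a single suspect key
# directly against the S3-compatible endpoint. All values are placeholders.
import boto3
from urllib.parse import quote

s3 = boto3.client(
    "s3",
    endpoint_url="https://qnap.example.local",   # placeholder
    aws_access_key_id="ACCESS_KEY",              # placeholder
    aws_secret_access_key="SECRET_KEY",          # placeholder
)

suspect_key = "Veeam/Arquive/PRD01/placeholder-object"  # placeholder key

# Keys with characters that do not survive URL encoding cleanly can be one
# cause of "Invalid URI"-style failures, so print the encoded form to inspect.
print("URL-encoded key:", quote(suspect_key))

response = s3.delete_objects(
    Bucket="veeam-sobr",  # placeholder bucket
    Delete={"Objects": [{"Key": suspect_key}], "Quiet": False},
)
print("Deleted:", response.get("Deleted", []))
print("Errors:", response.get("Errors", []))
```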

How can I overcome these errors? 

Do I need to create a new S3 store and remove the current one? 
Should I remove the objects that run into errors? 

Kindly asking if somebody has a good suggestion.

Thanks.
Boris 
 

Userlevel 7
Badge +20

We’d need to see more logs to get an idea of what’s going on. But I would 100% get a support case open as this could be a bug.

 

If you have available space on your performance tier, I would try getting Veeam to download the backups from the capacity tier back to your performance tier. That will enable you to create a new S3 capacity tier if support determines any corruption or issues that require the creation of a new container. It should also give you an idea of the data integrity within the capacity tier in case you’ve already lost data, but just don’t know it yet!
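
As a rough first sanity check (not a substitute for Veeam’s own health check or a support case), you could also total up what is actually sitting in the bucket and compare it with the roughly 7 TB you expect in the capacity tier. A minimal sketch, assuming boto3 and placeholder connection details:

```python
# Rough sanity check: count objects and total bytes under the Veeam prefix,
# then compare against what the capacity tier is expected to hold.
# Endpoint, bucket, prefix and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://qnap.example.local",   # placeholder
    aws_access_key_id="ACCESS_KEY",              # placeholder
    aws_secret_access_key="SECRET_KEY",          # placeholder
)

count = 0
total_bytes = 0
for page in s3.get_paginator("list_objects_v2").paginate(
    Bucket="veeam-sobr", Prefix="Veeam/"         # placeholders
):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]

print(f"{count} objects, {total_bytes / 1024**4:.2f} TiB under the prefix")
```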

Userlevel 2
Badge

Hi MicoolPaul

Thank You very much for your comment on this matter.

I appreciate your suggestions for overcoming this S3 issue.

So I copied the important backups (PRD only) to external disks (2x 4 TB) and started a new S3 store on the QNAP.

It is confirmed that this behaviour is a bug, but it is resolved in patch P20230227 for VBR version 11.

I still have not moved to VBR 12.

It has also been advised that the QNAP OS has to be patched to the latest version, as some issues were identified with QuObjects.

During the last week I did not get any errors related to the SOBR or S3.

Thank You.

Boris

Userlevel 7
Badge +20

That’s fantastic news! Glad you’ve got an answer and thanks for posting what that was for future people with the same problems!
