
We currently have Veeam for M365 v7 running in Community Edition mode since we are not using the product anymore. However, we have plenty of data from 2020 to 2022 that was backed up to an S3 Standard bucket. We rarely (read: twice a year) need to query or restore anything from it.

Our Veeam rep is happy to provide a temporary license to reactivate the product for any job requirements, so the question is: what is the best option to move this data from S3 Standard to a different tier, and what are the implications? My thought process was the following:

  • Add a new S3 repo that is on a Glacier Instant Retrieval / Glacier Deep Archive tier bucket
  • Create a backup copy job to copy the current data in the S3 Standard bucket to the newly created bucket above
  • Delete the original backup / S3 Standard bucket holding the original data

Is this a supported / reasonable strategy? I'm assuming the Archiver appliance would help save on API/data transfer costs here?
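Before committing to either target tier, it may be worth taking a quick inventory of the existing bucket, since the number of objects (not just the total size) is what drives any per-request API charges. A minimal sketch, assuming boto3 and a hypothetical bucket name:

```python
# Quick inventory of the existing repository bucket: object count and
# total size per storage class. Bucket name is a placeholder.
import boto3
from collections import defaultdict

s3 = boto3.client("s3")
count, size = defaultdict(int), defaultdict(int)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="veeam-m365-standard"):  # hypothetical
    for obj in page.get("Contents", []):
        sc = obj.get("StorageClass", "STANDARD")
        count[sc] += 1
        size[sc] += obj["Size"]

for sc in count:
    print(f"{sc}: {count[sc]:,} objects, {size[sc] / 1024**4:.2f} TiB")
```

Veeam object storage repositories typically consist of a very large number of relatively small objects, so the object count is often the figure that matters most for per-request pricing.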

Much appreciated.

The problem with that process is that the backup copy job will only copy the current restore point. When you do a retrieval and restore, you won't be able to pick a restore point (e.g. from 2021); you will only be able to explore the data as it was at the time of your backup copy.

A “reasonable” but not “supported” method would be to move all your data to Glacier and then retrieve it when you need to do a restore. This would retain all your restore points at a lower cost. You would have to maintain the server and repositories so that the metadata is intact.
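To make the "retrieve it when you need to do a restore" step concrete: objects sitting in Glacier Flexible Retrieval or Deep Archive have to be temporarily rehydrated before the repository can read them again (Glacier Instant Retrieval objects do not). A minimal sketch of the per-object request, assuming boto3, a hypothetical bucket and prefix, and the Bulk retrieval tier:

```python
# Rehydrate archived objects under a prefix ahead of a Veeam restore.
# Bucket and prefix are placeholders; no error handling (a second call
# for an object already being restored will raise an exception).
import boto3

s3 = boto3.client("s3")
ARCHIVED = {"GLACIER", "DEEP_ARCHIVE"}  # GLACIER_IR needs no rehydration

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="veeam-m365-archive", Prefix="Veeam/"):
    for obj in page.get("Contents", []):
        if obj.get("StorageClass") in ARCHIVED:
            s3.restore_object(
                Bucket="veeam-m365-archive",
                Key=obj["Key"],
                RestoreRequest={
                    "Days": 14,  # how long the restored copy stays readable
                    "GlacierJobParameters": {"Tier": "Bulk"},  # cheapest, slowest
                },
            )
```

Bulk retrievals take hours (roughly 5-12 for Flexible Retrieval, up to about 48 for Deep Archive), so a restore would need to be planned rather than done ad hoc.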


Thanks HangTen, appreciate your feedback. "Reasonable" works for me here, and maintaining the server shouldn't be a problem. However, do you mean performing a bucket copy task or setting up lifecycle rules? My understanding is that Veeam doesn't like AWS managing storage tiers (which aligns with your "not supported" comment as well).
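To make the lifecycle-rule half of that question concrete, this is roughly what "AWS manages the storage tier" looks like. The bucket name and target class are placeholders, and per the comment above this is exactly the kind of tiering Veeam does not support on a repository bucket, so it is shown only to illustrate the option:

```python
# Lifecycle rule that transitions all objects in the bucket to Glacier
# Deep Archive. Bucket name is a placeholder; this is the "AWS manages
# the tier" approach, not something Veeam supports on its repositories.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="veeam-m365-standard",  # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-everything",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # match every object
                "Transitions": [
                    # Objects older than 1 day move at the next lifecycle run.
                    {"Days": 1, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```

The alternative "bucket copy" route would be re-writing each object into the archive class yourself (for example with copy_object and a StorageClass override), which keeps AWS lifecycle automation out of the picture but incurs a COPY/PUT request per object.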

Also, given we’re talking about approx. 280 TB, I’m afraid LIST/COPY/PUT costs would be prohibitive for us if this means moving data back and forth between tiers, even if restore operations won’t happen very often.

Edit - maybe not; API call costs are actually quite reasonable, but Glacier retrieval costs are what really sting. Is there any other tier where retrieval costs aren't as steep?
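For a rough sense of scale, retrieval charges are linear in the amount of data pulled back, so what matters most is whether a restore needs the whole archive or just a slice of it. A back-of-the-envelope sketch; the per-GB rates are placeholder figures rather than quoted AWS prices, so substitute current pricing for your region and retrieval tier:

```python
# Ballpark retrieval cost comparison. The per-GB rates are PLACEHOLDERS,
# not quoted AWS prices -- look up current pricing before deciding.
GB_PER_TB = 1000  # decimal TB is close enough for a ballpark

retrieval_rate = {                      # hypothetical $/GB retrieved
    "Glacier Instant Retrieval": 0.03,
    "Glacier Flexible (Bulk)": 0.00,
    "Glacier Deep Archive (Bulk)": 0.0025,
}

scenarios = {                           # hypothetical restore sizes
    "single mailbox (~50 GB)": 50,
    "full archive (280 TB)": 280 * GB_PER_TB,
}

for tier, rate in retrieval_rate.items():
    for label, gb in scenarios.items():
        print(f"{tier:30s} {label:25s} ~${gb * rate:>10,.2f}")
```

In practice the Bulk retrieval tier is the lever here: it is the slowest but cheapest option, so for a twice-a-year restore the cost is driven almost entirely by how much data has to be rehydrated, not how often.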

