Solved

Copying on-prem data to Azure Archive tier


Hi, we have TBs of data in our on-prem Veeam repositories that we would like to move to Azure (Archive tier). We added an object storage repo and chose Archive Tier, but when we try to use the copy/move data option to migrate this data off our on-prem repos, the archive repo doesn't show up in the available target locations. We would like some advice on the options available for moving our data to Azure. We will hardly ever need to restore this data, but as a business requirement we need to keep it for a few years. Thanks

10 comments

Userlevel 7
Badge +22

Hi,

Data that is going to the Azure Archive tier must exist within Azure storage first. In your scenario, you need to ensure your on-prem repositories are inside a Scale-Out Backup Repository (SOBR) as the Performance tier, then add an Azure storage account as a Capacity tier extent, and then add Azure Archive as the Archive tier.
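If you want to sanity-check the storage account and container before adding them as extents, here is a minimal sketch using the azure-storage-blob Python SDK; the account URL, key, and container name are placeholders for your environment:

```python
# Minimal reachability check for the Azure Blob container that will back
# the Capacity/Archive tier extents. All names/keys below are placeholders.
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://<storageaccount>.blob.core.windows.net"  # placeholder
ACCOUNT_KEY = "<account-key>"                                   # placeholder
CONTAINER = "veeam-capacity-tier"                               # placeholder

service = BlobServiceClient(account_url=ACCOUNT_URL, credential=ACCOUNT_KEY)
container = service.get_container_client(CONTAINER)

# Raises ResourceNotFoundError (or an auth error) if anything is wrong.
props = container.get_container_properties()
print(f"Container '{props.name}' reachable, last modified {props.last_modified}")
```

Veeam does the actual offloading; this only confirms the container is reachable with the credentials you plan to use.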

Userlevel 7
Badge +19

Hi @gabbas -

Are you running into the limitations of Archive Tier movement? For example, to move backups to the Archive Tier from either the Performance or Capacity Tier, the source storage must be Azure. For more info, see the Limitations page in the Guide:

https://helpcenter.veeam.com/docs/backup/vsphere/limitations_archive_tier.html?ver=120

Userlevel 4


Thanks Michael, this means we will need to add our on-prem Veeam repos as the Performance tier, and then we can choose the one in Azure as the Capacity tier. We have some old data whose backup jobs we removed some time ago; it now shows under Disk (Orphaned). Can we not use the copy or move option to select this data, move it to Azure Blob storage, and then change the migrated data from cold to the Archive tier?
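For reference, changing a blob's access tier on the Azure side is simple enough; a minimal sketch with the azure-storage-blob Python SDK is below (the connection string and container name are placeholders). But re-tiering blobs behind Veeam's back is not a supported way to manage backup data, and Archive-tier blobs must be rehydrated before they can be read at all, which any future restore would have to deal with:

```python
# Illustration only: bulk re-tier every blob in a container to Archive.
# NOTE: doing this to Veeam-managed object storage is not a supported
# workflow; Veeam expects to control the tiering of data it writes.
from azure.storage.blob import ContainerClient

CONN_STR = "<connection-string>"   # placeholder
CONTAINER = "migrated-data"        # placeholder

container = ContainerClient.from_connection_string(CONN_STR, CONTAINER)

for blob in container.list_blobs():
    if str(blob.blob_tier) != "Archive":
        container.get_blob_client(blob.name).set_standard_blob_tier("Archive")
        print(f"Archived: {blob.name}")
```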

 

Userlevel 7
Badge +22

Another way you could do it is to create a SOBR with Azure storage as the Performance tier and Azure Archive as the Archive tier, then use backup copy jobs to get that old data into the Archive tier via the Performance tier, with a very aggressive Archive tier offload configuration.

Userlevel 3
Badge +1

I'm also looking to do this. I have a SOBR with 3 Windows ReFS extents that need to be phased out to Azure to keep an archive for a while. It's exclusively copy jobs that land on the SOBR currently, but the repo is also not used anymore (no new data is ingested).

I'm torn between “backup moves”, thereby creating a new repo with Azure as a performance extent (and archive extent), or somehow adding Azure as a capacity extent to the existing SOBR with the Windows extents currently present and offloading the data that way.

It's 500 TB of data.

The end goal needs to be the complete decommissioning of the three Windows extents and the underlying hardware.

Anyone got some pointers to help make a good decision? I'm leaning a bit more towards the “move backups” approach with a new repository, or perhaps even extra copy jobs to copy the data to a new repository.

I've read about an important limitation of the backup move functionality: it seems it can't be throttled using traffic rules. That would be a plus for copy jobs, I think.

 

Userlevel 3
Badge +1

The keep-it-simple approach would probably be to add the capacity & archive extents and let the SOBR move as much as possible to the capacity & archive tiers. After that, just put the performance extents in maintenance mode and (physically) bring them down & decommission them. Restores will be done directly from the capacity tier in that case. The performance extents will just sit there, in maintenance mode, till retention expires. No new data will be ingested into this SOBR, so that's OK.

Something like that. But my OCD will kick in badly seeing those unavailable repositories with performance extents for years to come.
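Before physically pulling the plug, you can at least verify what actually landed in each Azure tier; a quick sketch with the azure-storage-blob Python SDK (connection string and container name are placeholders):

```python
# Count offloaded blobs per access tier (and total size) to confirm the
# SOBR offload finished before decommissioning the performance extents.
from collections import Counter
from azure.storage.blob import ContainerClient

CONN_STR = "<connection-string>"     # placeholder
CONTAINER = "veeam-capacity-tier"    # placeholder

container = ContainerClient.from_connection_string(CONN_STR, CONTAINER)

tiers = Counter()
total_bytes = 0
for blob in container.list_blobs():
    tiers[str(blob.blob_tier)] += 1
    total_bytes += blob.size or 0

for tier, count in tiers.items():
    print(f"{tier}: {count} blobs")
print(f"Total: {total_bytes / 1024**4:.2f} TiB")
```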

Userlevel 3
Badge +1

Oh boy, I had a lot of “fun” figuring out how best to do all this offloading to Azure in a situation where all jobs (copy jobs and backup jobs) have been deleted/disabled. In this case, everything is being phased out, but the data needs to be kept somehow.

I had issues with legacy copy jobs that would not upgrade the chain because of absent primary jobs (weird “OIB missing” messages), so I could not do the “copy backup” for those copy jobs to copy them to Azure.

Offloading to the capacity tier raised its own questions: the offload would probably run only once (putting everything in the capacity tier) and would never see new data. Letting background retention do its thing in that case seems like a challenge to manage.

Moving data from the capacity tier to the archive tier only moves GFS backups, so quite a lot is left behind on the capacity tier, increasing costs. It's difficult to manage this when the environment is kind of frozen: no running job sessions, nothing kicking off basic retention, etc.

It's not looking too good as far as Azure being a good fit for offloading in this case.
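To put very rough numbers on the cost of what stays behind on the capacity tier, here is a back-of-the-envelope sketch; the per-GB monthly rates and the GFS fraction are placeholders, not current Azure list prices, so plug in your region's actual rates:

```python
# Back-of-the-envelope monthly storage cost: data stuck in the capacity
# tier (Cool) vs. data moved to the archive tier. All rates and the
# archive fraction are PLACEHOLDER assumptions.
TOTAL_TB = 500
ARCHIVE_FRACTION = 0.6          # assumed share that is GFS and can move

COOL_PER_GB_MONTH = 0.01        # placeholder rate, USD
ARCHIVE_PER_GB_MONTH = 0.002    # placeholder rate, USD

total_gb = TOTAL_TB * 1024
archive_gb = total_gb * ARCHIVE_FRACTION
cool_gb = total_gb - archive_gb

cost = cool_gb * COOL_PER_GB_MONTH + archive_gb * ARCHIVE_PER_GB_MONTH
print(f"Cool:    {cool_gb:,.0f} GB")
print(f"Archive: {archive_gb:,.0f} GB")
print(f"Estimated monthly storage cost: ${cost:,.2f}")
```

Even with placeholder rates, the gap shows why GFS-only archive movement matters at this scale.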

Userlevel 7
Badge +19

Those are good points @JayST. Maybe it would be beneficial to raise a feature request and/or suggestion over on the Forums for the PMs to give their take on it?

Userlevel 3
Badge +1

@coolsport00 Yeah, I think I'll take it to the forums as well. Let's see. However, I do think I've got a bit of a corner case, perhaps.

Userlevel 7
Badge +19

Understood. 😊
