Solved

Best route to go for a 65TB+ backup


Userlevel 7
Badge +3

Hi, everyone!

 

We have a single VM, over 65TB in size, that we are backing up, and we have run into a snag: VMware maxes out at 65TB, so we are running out of space for this backup and we don’t know which route to go in terms of setup.

 

Is SOBR the preferred backup infrastructure for a VM of this size?


Best answer by MicoolPaul 27 April 2022, 18:18


19 comments

Userlevel 7
Badge +20

Hi, even if you’re using per-VM backups, the backup must sit entirely in one repo, so SOBR won’t help there. But if your problem is the datastore running out of space due to snapshots, you could configure a different VMware datastore to host your snapshots for the duration of the job processing.

 

does that help? 🙂

Userlevel 7
Badge +3

Hi, even if you’re using per-VM backups, the backup must sit entirely in one repo, so SOBR won’t help there. But if your problem is the datastore running out of space due to snapshots, you could configure a different VMware datastore to host your snapshots for the duration of the job processing.

 

does that help? 🙂

If I am interpreting the email from my boss correctly, we are just outright running out of space on the datastore, and we can’t make a datastore bigger than 65TB in VMware.

Userlevel 7
Badge +20

@bp4JC check out this article: https://kb.vmware.com/s/article/1002929

 

It’s designed for your use case. You can redirect the snapshots to a different datastore (such as a dedicated snapshot datastore) within vSphere.
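For reference, that KB boils down to a small .vmx edit made while the VM is powered off. The datastore and folder names below are placeholders, so treat this as a sketch rather than exact config:

```ini
# In the VM's .vmx file (VM powered off), point the snapshot
# working directory at a datastore with free space:
workingDir = "/vmfs/volumes/snapshot-datastore/huge-vm"

# On newer ESXi versions, also force the snapshot delta disks into
# workingDir instead of alongside each parent VMDK:
snapshot.redoNotWithParent = "TRUE"
```

After editing, the VMX needs to be reloaded (or the VM re-registered) for the change to take effect.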

Userlevel 7
Badge +7

 

@MicoolPaul’s solution is a good way out of the problem.
To avoid such a massive snapshot, you could use a Veeam agent if the OS is supported.

Userlevel 7
Badge +20

 

@MicoolPaul’s solution is a good way out of the problem.
To avoid such a massive snapshot, you could use a Veeam agent if the OS is supported.

Great shout! Depending on the data change rate, the backup cache option on the agent could cause problems here, though assuming this is a traditional server and the VBR instance isn’t remote, it would likely be unnecessary to enable anyway. More details:

https://helpcenter.veeam.com/docs/agentforwindows/userguide/backup_job_cache.html?ver=50

https://helpcenter.veeam.com/docs/agentforwindows/userguide/backup_cache.html?ver=50

Hi there, 
@MicoolPaul is right about the Snapshot datastore,
 

The VM is huge! But maybe you can move the virtual disks to different datastores, so you can have multiple datastores presented to the hosts and place the disks so everything fits.

If you have shared storage, vMotion should work fine.
I can’t imagine having to restore such a massive VM from scratch! OMG!

Userlevel 7
Badge +20

Hi there, 
@MicoolPaul is right about the Snapshot datastore,
 

The VM is huge! But maybe you can move the virtual disks to different datastores, so you can have multiple datastores presented to the hosts and place the disks so everything fits.

If you have shared storage, vMotion should work fine.
I can’t imagine having to restore such a massive VM from scratch! OMG!

An alternative would be, and I hate that I’m saying it… VVols. You’d still hit the same maximum disk size, but the datastore could be larger, depending on the vendor implementation (which is why I dislike VVols; it feels like a loose standard).

Userlevel 7
Badge +3

This particular VM is tied to an imaging server, which is why it’s so large.

Userlevel 7
Badge +3

Another update for you all. More specifically, this server spans 3 different datastores. We have the Veeam repository on a VMDK, and the max size is 65TB. We need to be able to potentially split this backup over multiple repositories, or something similar. That being the case, would SOBR be the way to go?

Userlevel 7
Badge +20

Thanks for the extra detail @bp4JC.

 

Aside from the obvious point that I really wouldn’t recommend Veeam having a repo on a VMDK…

You could create a backup job per VMDK, as multiple jobs; that would work. You’d just use an exclusion to process the VM but only one disk, for example.

 

Depending on how many disks there are and their sizes, you could maybe set up a job that processes all disks apart from your huge VMDK, then a separate job that only processes that disk. As you’ve said, it’s an imaging server, so it’s not like you need application-aware processing for MSSQL recovery or anything else that specifically requires a different solution.

 

Is your VMDK backed by a SAN? If so, can you just present an iSCSI LUN directly to the Veeam repo instead?

I assume that you are presenting the storage to the hosts that run the Veeam VM over SAN, iSCSI, NFS, or whatever… right? It isn’t local disk on the host.
Why don’t you present that storage (as CIFS, for example) as repositories, and then create a SOBR with all of them?
Depending on the max capacity of the storage you are using to present the repos, you will have to size them to be compatible: for example, 5 CIFS shares of 20TB each, or NFS repos of 20TB each.
Hope this helps.
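To make that sizing concrete, here is a quick sketch of the arithmetic; the 20% free-space headroom per extent is my assumption, not something stated in this thread:

```python
import math

def plan_extents(vm_tb: float, extent_tb: float, headroom: float = 0.2) -> int:
    """How many equally sized repository extents are needed to hold
    vm_tb of backup data, keeping `headroom` fraction free on each."""
    usable_per_extent = extent_tb * (1 - headroom)
    return math.ceil(vm_tb / usable_per_extent)

# A 65TB VM across 20TB shares, keeping 20% free on each extent:
print(plan_extents(65, 20))  # -> 5, matching "5 CIFS shares of 20TB each"
```

The actual backup footprint will usually be smaller after compression and deduplication, so this is a worst-case extent count.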

Userlevel 7
Badge +17

Distributing the VM over several datastores is probably a good idea.

In a rather big environment you should think about a VBR server outside of the vSphere cluster, with physically independent storage. I know, this is easy to say and hard to accomplish if you don't have the equipment. But as a vision for the future, I would go in this direction.

Userlevel 7
Badge +3

Thanks for the extra detail @bp4JC.

 

Aside from the obvious point that I really wouldn’t recommend Veeam having a repo on a VMDK…

You could create a backup job per VMDK, as multiple jobs; that would work. You’d just use an exclusion to process the VM but only one disk, for example.

 

Depending on how many disks there are and their sizes, you could maybe set up a job that processes all disks apart from your huge VMDK, then a separate job that only processes that disk. As you’ve said, it’s an imaging server, so it’s not like you need application-aware processing for MSSQL recovery or anything else that specifically requires a different solution.

 

Is your VMDK backed by a SAN? If so, can you just present an iSCSI LUN directly to the Veeam repo instead?

This is exactly what I was thinking: just setting it up using the initiator in the OS. I think that is going to be the best course of action, because this server is only going to get bigger. It is SAN.

Userlevel 7
Badge +3

I assume that you are presenting the storage to the hosts that run the Veeam VM over SAN, iSCSI, NFS, or whatever… right? It isn’t local disk on the host.
Why don’t you present that storage (as CIFS, for example) as repositories, and then create a SOBR with all of them?
Depending on the max capacity of the storage you are using to present the repos, you will have to size them to be compatible: for example, 5 CIFS shares of 20TB each, or NFS repos of 20TB each.
Hope this helps.

It’s set up as a VMDK currently. I was not aware of that. I think we might end up going the route that @MicoolPaul suggested; I had the same idea, and my boss liked it. What you’re saying would probably work as well. The direct iSCSI connection would be the easiest, I think?

Userlevel 7
Badge +20

I assume that you are presenting the storage to the hosts that run the Veeam VM over SAN, iSCSI, NFS, or whatever… right? It isn’t local disk on the host.
Why don’t you present that storage (as CIFS, for example) as repositories, and then create a SOBR with all of them?
Depending on the max capacity of the storage you are using to present the repos, you will have to size them to be compatible: for example, 5 CIFS shares of 20TB each, or NFS repos of 20TB each.
Hope this helps.

It’s set up as a VMDK currently. I was not aware of that. I think we might end up going the route that @MicoolPaul suggested; I had the same idea, and my boss liked it. What you’re saying would probably work as well. The direct iSCSI connection would be the easiest, I think?

Yep, use iSCSI with MPIO configured so you can benefit from multiple NICs. You may need to adjust your networking to allow the VM access to the iSCSI network if it’s separate, but even if you have dedicated NICs on VMware that are the only ones able to connect to the iSCSI network, you can still create a VM port group on the vSwitch to achieve the connectivity.
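On a Windows repo VM, the initiator/MPIO side of that typically looks something like the following sketch; the portal address is a placeholder, and your SAN vendor may require its own DSM or setup tool instead of the in-box one:

```powershell
# Enable the in-box MPIO feature and let it claim iSCSI devices
# (a reboot is required after enabling MPIO):
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Start the Microsoft iSCSI initiator service and keep it running:
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi

# Register the SAN portal (placeholder address) and log in with
# multipathing, persisting the session across reboots:
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.1"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

Repeat the portal registration for each iSCSI NIC/subnet so MPIO actually has multiple paths to balance across.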

 

This could also help you in any longer-term migration: you could then stand up a physical Veeam repository if you wanted, leveraging iSCSI to migrate the LUN from the VM repo to a physical one.

 

I don’t know your environment well enough to tailor these recommendations to be 100% optimised for your current and long-term needs and IT strategy, but this should help you moving forwards.

Userlevel 4
Badge

hi,

I’d like to bring up the downsides of migrating from VMDK disks/VMFS datastores (or several of them) to an iSCSI SAN disk or an NFS mount, etc.: migrating to new hardware, for example, will be a big topic in the future, and you are much more limited without the hypervisor layer.

I would recommend you stay with VMDKs/VMFS datastores, but use more of them and split the data. Also follow @MicoolPaul’s guidance and redirect the snapshots to another datastore (but please use the same disk tier, like SSD, to guarantee performance).
Besides that, think about “Backup from Storage Snapshots” for processing; of course, you have to have a supported storage box and a proxy capable of direct SAN mode, but I can tell you this will shrink the lifetime and size of the VMware snapshots enormously.

Agents would also be a solution (and if the system is a Windows server, there is the option of “Backup from Storage Snapshots” there too), and it’s easy to look into this option if your network can handle it.

Best regards
Daniel

Userlevel 7
Badge +20

Agree @ger.itpro, and to clarify I’m only suggesting moving Veeam Backup repositories off of VMDK, not the production VMs 🙂

Userlevel 7
Badge +7

Agree @ger.itpro, and to clarify I’m only suggesting moving Veeam Backup repositories off of VMDK, not the production VMs 🙂

 

Exactly, @MicoolPaul 

I strongly discourage storing backups on a VMDK virtual disk.
Adding an additional layer of virtualization to the backup data may become counterproductive in case of VMFS datastore corruption.
 

Userlevel 7
Badge +7

edit:

Wrong post LOL
