
Hi guys, 

I know that in v11 it is not recommended to use a dedupe repo as primary backup storage; is there any news in v12? The customer wants to buy 2 x Quantum DXi dedupe hardware appliances (one per site) and a Scalar i3 tape library. From my point of view, best practice is: back up to an NTFS/ReFS/XFS repo as the primary, then create two backup copy jobs (one to each DXi appliance), and then create a tape backup job.

BUT.

The customer wants to send the primary backups to the DXi and then to tape. Is that possible, or is it a bad plan?

Thx.

Tom. 

It’s a bad plan, purely because the customer won’t be able to restore within a swift time frame. If they’re shopping on the basis of v12 and ReFS/XFS isn’t meeting their needs for whatever reason, they’d be better off looking at object storage (but they need v12 for that to be a primary repository).

 

I can’t stress enough how painfully slow the restores are going to be; since they’re looking at dedupe appliances, they must have a lot of data!


It is definitely possible, but the performance will not be the same as sending to ReFS/XFS or even NTFS. The recommendation is to send backups to block storage first and use dedupe appliances for long-term retention (LTR) rather than as the primary repo. Even with DDBoost for Data Domain the speeds are still not the greatest, and synthetic full backups in particular are painful.

If this is what the budget allows, then you need to set the expectation that backups may not complete within the window. Copy jobs and tape jobs can run on their own schedules, so no issues there.


Agree with these guys... deduplicating appliances, while great for storage efficiency, are just plain slow. It’s best to run backups to local/block storage as a landing zone and then offload to the dedupe appliance and tape drive. If they don’t think they would need to restore from a very old backup, it may be possible to keep the initial repo relatively small with a shorter retention policy and keep more of the long-term restore points on the DXi. That said, you still need to set the expectation that restoring from one of the older backups in that architecture is going to be slow, while more recent backups at least would be acceptable.


I agree; the magic of deduplication comes at the price of performance.

From my limited experience, when you set up those appliances you can choose whether to enable deduplication on each repository you create.

So for me, a good approach would be to set up the primary DXi with a non-deduplicated repo for the first/initial copy, and then a second, deduplicated repo for the secondary copy to get better retention / space optimization.

The second DXi could then either be replicated natively from the primary one using Quantum functionality, or serve as a deduplicated backup copy job destination for a very long-term archive / secondary copy with full dedup capabilities.

Just a quick and simple idea.

cheers.


Hello @TKA, I was running DXi at large scale before XFS repos. It will work to copy to tape, but it could be slow because these appliances are not optimized for reads and the dedup adds its own penalty. You will generate a lot of I/O running backup copy jobs against the DXi, so consider scheduling them outside the backup window.

Please use the native replication jobs on the DXi rather than backup copy jobs, because the DXi will send dehydrated (deduplicated) blocks to the other device. A backup copy job would instead force the DXi to rehydrate the blocks before sending them and will generate needless load.

You can schedule a basic PoSH script to rescan the remote repo that the DXi is replicating blocks to. Always use “Enable Directory/File Based Replication to Target” for the replication on the DXi; snapshot-based replication is a very bad idea.
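As an illustration, here is a minimal sketch of such a rescan script. It assumes Veeam v11+ with the Veeam.Backup.PowerShell module and a repository named “DXi-Remote” (a hypothetical name) that points at the replica DXi share; check the cmdlet names against your Veeam version before relying on it.

    # Minimal sketch: re-import backups that the DXi replicated to the remote site.
    # Assumes the Veeam.Backup.PowerShell module (v11+) and a repository named
    # "DXi-Remote" (hypothetical) backed by the replica DXi share.
    Import-Module Veeam.Backup.PowerShell

    $repo = Get-VBRBackupRepository -Name "DXi-Remote"
    if ($repo) {
        # Rescan so Veeam picks up the backup files dropped there by DXi replication
        Rescan-VBREntity -Entity $repo -Wait
    }
    else {
        Write-Warning "Repository 'DXi-Remote' not found."
    }

Scheduled through Windows Task Scheduler to run after the DXi replication window, this keeps the imported restore points on the remote repository up to date.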


Agree with my colleagues, it is always a bad idea to use deduplication devices as primary repositories… I have never recommended using them like this: the performance is bad.

