Solved

DataDomain: Convert non-Immutable to Immutable MTrees without pain


Hi,

Is anybody aware of a migration path to shift from a DataDomain without immutability to immutable MTrees on the same box?

 

I guess the only way is to create a new repository and chain… and do some careful calculations on free space to keep everything on the same box.

7 comments

Userlevel 7
Badge +19

Hi @kristofpoppe -

As far as I know, yes - that is about the best way to go, as you've shared. Pretty much any storage solution would work the same way, I'm thinking.

Userlevel 7
Badge +21

I am not aware of any migration paths for this and would recommend starting a new repo for this purpose. That is probably the easiest way forward.

Userlevel 6
Badge +6


Hi all and @kristofpoppe. At a customer site, I switched from a regular DataDomain without Retention Lock/immutability to Retention Lock Compliance enabled. This was with the release of VBR 12.1 last December, as I remember; since that version it is officially supported.

 

Beware: Retention Lock Compliance is the only mode supported with VBR on a DD. Automatic Retention Lock on the DD should not be used either.
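For reference, enabling Compliance mode on the MTree itself is a single command. The syntax below is from memory of the DD OS CLI, so verify it against the admin guide for your release, and note that Retention Lock Compliance first has to be configured system-wide, which already involves the security officer:

  mtree retention-lock enable mode compliance mtree /data/col1/VeeamBackup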

I used the existing MTree, which already held quite a lot of data, and started from then with the following settings for this customer (a short verification sketch follows the list):

  • Immutability set to 56 days on the DataDomain repository on the VBR side (we only do 56 days of normal retention; everything else is GFS data, kept up to 6 months, and those points are immutable for the entire duration of their retention)
  • mtree retention-lock set min-retention-period 56day mtree /data/col1/VeeamBackup
  • mtree retention-lock set max-retention-period 6mo mtree /data/col1/VeeamBackup
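To double-check what the MTree ends up with, show commands along these lines should work. They are from memory of the DD OS CLI, so treat them as a sketch and confirm the exact syntax against the command reference for your DD OS release:

  mtree retention-lock status mtree /data/col1/VeeamBackup
  mtree retention-lock show min-retention-period mtree /data/col1/VeeamBackup
  mtree retention-lock show max-retention-period mtree /data/col1/VeeamBackup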

As I remember, only newly written data gets set with Retention Lock, but this way you don't have to create a new chain or even a new MTree.

DataDomains with RL Compliance enabled are not easy to manage; it's also a little tricky to increase capacity if you have to (many steps, and you have to move from the DD OS WebUI to the CLI). You also need an additional security account to authorize many of the steps.
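If memory serves, that additional account is a DD OS user with the security role, and the runtime authorization policy has to be switched on. Take the following as a hedged sketch (the account name is just an example) and verify the commands against the DD OS security documentation before using them:

  user add secoff role security
  authorization policy set security-officer enabled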

 

Hope I could answer your question. Best regards, Markus

Userlevel 7
Badge +9

@Dynamic Markus, thank you very much for sharing your experience with this. We are in exactly the same situation. If I understand your explanation correctly, only new data gets the Retention Lock, which implies we would have to start a new full chain to have every restore point immutable.

 

Userlevel 6
Badge +6

Hi @kristofpoppe, yes - only new data.
Let's assume you write a synthetic full on the weekend: from that point on, your new restore points and the chain are, in my opinion, good to go.

The old restore points will simply age out over time. In my opinion that's easier than starting a complete active full, but it depends on your amount of data and your backup window (see the capacity-check sketch below).
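Before deciding between letting the old points age out and forcing an active full, it helps to check how much headroom the box actually has. These are standard DD OS show commands, but as with everything here, verify them on your release:

  filesys show space      (overall capacity, used space and cleanable space)
  mtree list              (logical size per MTree)
  filesys clean status    (whether garbage collection is currently running)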
Greetings, Markus

Userlevel 7
Badge +9

Thanks @Dynamic Markus, it's indeed sometimes like playing Tetris with the free space left on the device. All clear!

Userlevel 6
Badge +6

You're welcome. But keep in mind that if you have capacity problems, they get worse once you enable RL Compliance… you no longer have any option to delete the newly locked files (while they are under retention).

I would recommend starting with small steps regarding mtree retention-lock set max-retention-period on the DD - see the example below.
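As an illustration of those small steps (the values are only examples): start with a modest maximum and raise it later once you trust your capacity planning. Keep in mind that on a Compliance MTree such changes typically need security-officer authorization, and reducing the periods again may not be possible, so check the documentation before committing:

  mtree retention-lock set max-retention-period 70day mtree /data/col1/VeeamBackup
  mtree retention-lock set max-retention-period 6mo mtree /data/col1/VeeamBackup   (later, once capacity is under control)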

 

Another hint: when we implemented RL Compliance with 12.1, there was a warning within VBR after hitting the max-retention period (a false positive; the retention was actually correct). I had an open case (#7151045) about this and also discussed it with Fabian/Mildur at a German VUG event.
Shortly after that, I received a registry value to hide the warning. I was told this issue should be addressed in the next patch, so it should be fixed with 12.1.2.

 
