Solved

Microsoft 365 backup repository blocksize recommendation


Userlevel 2

Hi all,

we are running a Veeam Backup for Microsoft 365 environment. We are using NTFS volumes formatted with a 4KB cluster size, according to Veeam best practices (see https://bp.veeam.com/vbo/guide/buildconfig/proxy-repo.html). Every job has its own repository, and multiple repositories reside on the same NTFS LUN.

Now we are running into the problem that some LUNs are reaching 16TB in size. Due to the 4KB cluster size we are not able to expand them any further.

We now need to migrate some of the repositories and I have two questions about that:

  1. Should we migrate to LUNs with a bigger cluster size? What are the side effects of that?
  2. What’s your preferred way to migrate? The best way I know is to copy the repo contents with robocopy (roughly as in the sketch below), create a new repo pointing to the new location and then change the backup job’s target to the new repo. Unfortunately that means stopping the Veeam services while copying, because otherwise all of the Jet DBs are locked…
    Is there a better way? Or will there be one in v6?
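
For reference, this is roughly what I mean by the robocopy approach (the paths are placeholders for our environment):

```powershell
# Stop the Veeam Backup for Microsoft 365 services so the Jet databases are released
Get-Service -DisplayName "Veeam*" | Stop-Service

# Mirror the repository folder to the new location, retrying briefly on errors
robocopy "D:\Repo01" "E:\Repo01" /MIR /COPYALL /R:1 /W:5 /LOG:"C:\Temp\repo01-move.log"

# Start the services again once the copy has finished
Get-Service -DisplayName "Veeam*" | Start-Service
```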

Thank you for your input!

Best regards

Christoph


Best answer by Chris.Childerhose 17 February 2022, 14:30


22 comments

Userlevel 7
Badge +20

Hi!

 

You’ve got multiple ways forward. You could look at offloading to object storage, and speaking honestly, object storage is the best approach for VBO365 anyway.

 

I’ve used ReFS 64K without issues, but you do need to disable the integrity feature [UPDATE: not to avoid potential corruption, as I originally wrote, but to mitigate a potential performance impact /UPDATE].

 

As for migrating data, there’s a KB that shows how to use a cmdlet to achieve this (https://www.veeam.com/kb3067)

Userlevel 2

Hi Michael,

 

thank you for your answer!

 

Then I would assume using NTFS with a 64KB cluster size would be the best option, at least as long as object storage isn’t an option at the moment. ReFS doesn’t give us any advantage over NTFS in this scenario, correct?

 

What is your experience with Move-VBOEntityData? We did some migrations with it in the past, but there were always some users, sites, etc. that couldn’t be migrated. That’s why I’m not sure if we should use it here; I don’t want to end up with a job’s data spread over multiple repos… ;-)

 

Best regards

Christoph

Userlevel 7
Badge +20

Hi Christoph,

 

Can I check: are you using separate repositories for each workload (Exchange, SharePoint, OneDrive, Teams)? I find this beneficial in preventing the need for volumes larger than 16TB for some organisations (not all of them, of course!).

 

The cmdlet is great, but it does have some key limitations:

  • You can’t move data with object storage as the source, but IIRC you can move from local storage to object storage as a destination, and local to local is fine.
  • You won’t see space recovered on the source, as the database will contain whitespace instead (VBO365 uses the Jet database, so if you’ve ever dealt with this in the Microsoft Exchange world, this will be bringing back screaming nightmares to you right about… now!)

Full Cmdlet details available in the VBO365 PowerShell reference doc here: https://helpcenter.veeam.com/docs/vbo365/powershell/move-vboentitydata.html?ver=50
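
The basic pattern from that KB looks something like this (repository names and the user account are placeholders):

```powershell
# Resolve the source and target repositories by name
$source = Get-VBORepository -Name "Repo-Old"
$target = Get-VBORepository -Name "Repo-New"

# Select the entity (here: a user) whose backed-up data should be moved
$user = Get-VBOEntityData -Type User -Repository $source -Name "user@example.com"

# Move that user's data from the source repository to the target
Move-VBOEntityData -From $source -To $target -User $user
```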

Userlevel 7
Badge +20

Hi!

 

You’ve got multiple ways forward. You could look at offloading to object storage, and speaking honestly, object storage is the best approach for VBO365 anyway.

 

I’ve used ReFS 64K without issues but you do need to disable the integrity feature to avoid potential corruption.

 

As for migrating data, there’s a KB that shows how to use a cmdlet to achieve this (https://www.veeam.com/kb3067)

Offloading is probably the best option, but if you cannot, then NTFS with 64K block sizing will work. I have been told to always use NTFS due to the Jet DB, as ReFS is no good there. It does work, yes, as we have some ReFS volumes, but we are migrating to either NTFS or object storage, which is going to be the way forward with VBO.

Userlevel 2

Thank you all for your input!

We are migrating to 64KB NTFS LUNs now and will consider adding object storage in the future.
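
For anyone doing the same, formatting the new volumes looks roughly like this (drive letter and label are placeholders):

```powershell
# Format the new LUN as NTFS with a 64KB allocation unit size,
# which raises the maximum volume size from 16TB to 256TB
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "VBO-Repo"
```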

Userlevel 7
Badge +20

Thank you all for your input!

We are migrating to 64KB NTFS LUNs now and will consider adding object storage in the future.

Thanks for the update! Don’t forget to mark this question as answered to help others in the future! 🙂

Userlevel 5
Badge +2

Hello,

@MicoolPaul where is this coming from?

I’ve used ReFS 64K without issues but you do need to disable the integrity feature to avoid potential corruption.

 

It confused a customer and I cannot find a source for that.

 

Thanks,
Hannes

Userlevel 7
Badge +20

Hello,

@MicoolPaul where is this coming from?

I’ve used ReFS 64K without issues but you do need to disable the integrity feature to avoid potential corruption.

 

It confused a customer and I cannot find a source for that.

 

Thanks,
Hannes

Hi @HannesK, it comes from https://bp.veeam.com/vbo/guide/buildconfig/proxy-repo.html

 

Unless this source is out of date?

Userlevel 5
Badge +2

Hmm, where does it say anything about “corruption”?

I agree that my colleagues should avoid adjectives like “error-prone”. The “error-prone” refers to “reconfiguration” and not to ReFS, as far as I can see.

Userlevel 7
Badge +20

Hmm, where does it say anything about “corruption”?

I agree that my colleagues should avoid adjectives like “error-prone”. The “error-prone” refers to “reconfiguration” and not to ReFS, as far as I can see.

Agreed, and I reviewed the thread you linked in your previous post (I didn’t spot the link initially). I clearly misremembered the reason why we disable integrity streams, probably in part because integrity streams are focused on corruption prevention! Thank you for highlighting this so I can advise better going forward.

 

I’ll edit the previous post, and to clarify for everyone here: it’s not for corruption reasons but for performance reasons, as per @HannesK’s linked thread, that you would disable ReFS Integrity Streams when working with VBM365.

 

To quote @Mike Resseler’s comment from the thread:

Disabling integrity streams is recommended, but the reason for the recommendation is performance. 

If integrity streams are enabled, all write operations become allocate-on-write operations. This avoids any read-modify-write bottlenecks since ReFS doesn’t need to read or modify any existing data, but file data frequently becomes fragmented, which delays reading.

Depending on the workload and underlying storage of the system, the computational cost of computing and validating the checksum can cause IO latency to increase, which will be the case with a running database. So I would advise turning it off.
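
For anyone who wants to act on this, a minimal sketch (drive letter and path are placeholders): either format the ReFS volume with integrity streams off from the start, or disable them on the repository folder so new files inherit the setting.

```powershell
# Option 1: format the new ReFS volume with 64K clusters and integrity streams disabled
Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 65536 -SetIntegrityStreams $false

# Option 2: disable integrity streams on an existing repository folder;
# files created underneath it will inherit the setting
Set-FileIntegrity -FileName "R:\Repo01" -Enable $false

# Verify the result
Get-FileIntegrity -FileName "R:\Repo01"
```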

Userlevel 5
Badge +2

Perfect, thanks. I also asked my colleagues to clarify and simplify the wording.

Userlevel 7
Badge +10

Hey, Veeam Community. I just found this thread and I wanted to ask if there’s any reason why it’s always either a 4K or a 64K cluster size. Smaller cluster sizes help to minimize wasted space when storing smaller files. An 8K cluster size would allow a 32TB limit, and 16K would allow 64TB. There would be use cases out there for tweaking the cluster size to balance maximum volume size with storage efficiency. What do you all think?
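
For context, these limits come from NTFS addressing at most 2^32 clusters per volume, so the maximum volume size scales linearly with the cluster size; roughly:

```powershell
# NTFS can address at most 2^32 clusters per volume,
# so max volume size = cluster size * 2^32
foreach ($kb in 4, 8, 16, 32, 64) {
    $maxTB = ($kb * 1KB) * [math]::Pow(2, 32) / 1TB
    "{0,2}KB cluster -> {1,3:N0}TB max volume" -f $kb, $maxTB
}
# 4KB -> 16TB, 8KB -> 32TB, 16KB -> 64TB, 32KB -> 128TB, 64KB -> 256TB
```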

Userlevel 7
Badge +20

Hey, Veeam Community. I just found this thread and I wanted to ask if there’s any reason why it’s always either a 4K or a 64K cluster size. Smaller cluster sizes help to minimize wasted space when storing smaller files. An 8K cluster size would allow a 32TB limit, and 16K would allow 64TB. There would be use cases out there for tweaking the cluster size to balance maximum volume size with storage efficiency. What do you all think?

I agree with this for sure, but it also depends on whether you are using block or object storage. If going direct to object, then 4K is fine for the metadata stored locally, but if you are using block, that is when you need to look at things more diligently and test.
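
A quick way to check what an existing volume is using before you commit to a migration (drive letter is a placeholder):

```powershell
# Show the file system and allocation unit (cluster) size of a volume
Get-Volume -DriveLetter D | Select-Object FileSystemLabel, FileSystem, AllocationUnitSize

# Alternatively, for NTFS, look for "Bytes Per Cluster" in the output of:
# fsutil fsinfo ntfsinfo D:
```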

Userlevel 7
Badge +8

Well, if you think about the “average” size of files in Veeam repos, the amount of wasted space is going to be so minimal, and the performance gains so large, that it’s a no-brainer to go 64K ReFS.

Userlevel 7
Badge +20

Well, if you think about the “average” size of files in Veeam repos, the amount of wasted space is going to be so minimal, and the performance gains so large, that it’s a no-brainer to go 64K ReFS.

But not with VBM365, as Veeam recommends NTFS rather than ReFS due to the Jet DB. That is what I think Hin was referring to.

Userlevel 7
Badge +8

Well, if you think about the “average” size of files in Veeam repos, the amount of wasted space is going to be so minimal, and the performance gains so large, that it’s a no-brainer to go 64K ReFS.

But not with VBM365, as Veeam recommends NTFS rather than ReFS due to the Jet DB. That is what I think Hin was referring to.

Hin is correct; I was responding to HangTen’s more generic question about block size and saving space.

 

It goes back to “this is the best practice, except when…”, or “always do this, unless…”

 

Sure, if you have a ton of super-small files and a ton of I/O, smaller block sizes make sense.

 

I’d also add, though, that a 16TB cap on your volumes is going to get old quickly when we are dealing with backups. As things grow, I can’t tell you how annoyed I get when a coworker sets the block size to 4K on a file server, lol.

EVERY TIME:

  • Customer says they’re running out of space, or the volume is offline because it’s out of space.
  • Expand the drive in VMware because the customer wants space.
  • Expand the drive in Windows. UGGGHHH!!!!!
  • Robocopy 16TB and DFS to the new location on properly sized blocks.

I check now, obviously, after this got me a few times. Haha.

Userlevel 7
Badge +10

I think there are too many “H” names in this thread. To clarify: I am both Hin and HangTen416. Thanks for the replies. I thought it was worth discussing the possible use of 8K and 16K (and 32K) as options for NTFS repositories.

Userlevel 7
Badge +20

Again, to echo an earlier point: NTFS is only recommended because you don’t need to disable integrity streams there. They simply don’t exist on NTFS, whereas on ReFS you have to disable them, and if you forget, it’s only a potential performance penalty that you’ll encounter.

 

With that in mind, I see no harm in going ReFS, and I’d do ReFS 64K to maximise partition size.

 

A single Jet DB file can grow to 64TB, and Veeam recommends volumes no larger than 200-300TB in size, so you’d be maxing out way too prematurely by going 4K.

 

I’ve yet to find any documentation from Microsoft that discourages ReFS or suggests using smaller block sizes. If you’re going to work at the scales that warrant it, it’s better to just ensure you’ve got sufficient IO performance to handle any IO amplification.

Userlevel 7
Badge +8

I think I'm over 300TB. I’m a rebel  :)

Userlevel 2

I know this is an old-ish thread… and this video is also a little bit on the old side, but as it pertains to the original question: according to this video on VBO365 best practices (at about 58 minutes in), the recommendation is to go with 4K NTFS.

[embedded video: VBO365 best practices]

Based on their reasoning, and as has been noted several times in this thread, it’s all centered around performance. And while that video doesn’t discuss the 16TB size limit associated with 4K, they do mention several times to break up your data into multiple jobs based on data types (SharePoint, Exchange, etc.) and, if needed, even further by company characteristics (regional offices, company departments, etc.). There are also maximum recommended numbers of objects and other job characteristics listed in that video which help answer “how big is too big for one job.” That said, it’s reasonable to say that if you are slicing up your environment wisely and using an individual repo per backup job, you might be hard pressed to hit the 16TB limit.

There are mentions of corruption issues, but those are due to manually purging files or using object storage tiering schemes that VBO365 doesn’t work with. It’s never said to avoid ReFS or anything else because of corruption, just performance. Also, throughout the video they mention a few limitations that VBO365 had at the time which are now (mostly) resolved, but I suspect the 4K NTFS recommendation would still stand as of version 6, since it centers on performance rather than functionality. Happy to be proven wrong though. 😄 And it is just a recommendation; one size doesn’t fit all, so do as you please, as long as you understand what you are potentially sacrificing by deviating from best practice.

Userlevel 7
Badge +20

That video I have watched a few times, as it is very good 👍

Userlevel 7
Badge +8

Really fantastic vid!
