Does the system have a RAID controller that the drives are connected to? That would be the first question, but if you want RAID 10 for the repo then this setup would be hard to accomplish. You would be better suited to install another drive for the OS and use the four 1 TB drives for the RAID 10, if that is what you want. RAID 10 requires at least four drives, and an even number of them, to work.
Hi,
This is definitely outside of normal Veeam conversation but happy to jump in and help.
You haven’t mentioned if you have a RAID controller or not, are you using a hardware RAID or software RAID?
If hardware, you can create your RAID 10 and then present two partitions within it; you would then create a mount point for the second partition and you’d be good to go.
If software, you need to create the partitions in advance. The problem you’ll have is that the software RAID needs to be created prior to installation if you wish to put your OS on it.
The tool you’ll need to create software RAIDs is mdadm.
If you’re using software RAID you’ll be out of luck getting any official support for RAID 10 as /boot can’t use any software RAID other than RAID1. (Installation/SoftwareRAID - Community Help Wiki (ubuntu.com))
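For reference, here is a minimal sketch of the software RAID route with mdadm, assuming the OS ends up on a separate disk and the four 1 TB drives appear as /dev/sdb–/dev/sde (hypothetical device names and mount path, adjust to your system):

```bash
# Create a 4-disk RAID 10 array for the repository (this wipes the disks!)
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync
cat /proc/mdstat

# Filesystem, mount point and mount for the repository volume
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/repo
sudo mount /dev/md0 /mnt/repo

# Make the array and the mount survive reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /mnt/repo xfs defaults 0 0' | sudo tee -a /etc/fstab
```

Putting the OS itself on the array is the part that has to happen in the installer (with /boot on its own RAID 1), so the sketch above only covers the data-only case.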
Let us know if we’re talking software/hardware RAID and we can help further hopefully.
Some really good points here as well. Thanks @MicoolPaul
There is no hardware RAID controller; just the 4 SATA ports of the mainboard (P8H6). So there is no additional port left for the OS.
As this is only a test environment, the lack of official support is fine. The use case afterwards will definitely include a hardware RAID controller. Whether it will be RAID 10, 5 or 6 will then have to be decided based on cost/performance etc.
For the test here I don’t require RAID 10 for the OS itself. I’m even fine with spending one HDD for the OS alone. But with only 3 HDDs left, RAID 10 is not possible. I might test RAID 5 or just put all 3 HDDs into a single volume.
Hi @omfk ,
In which case, if it’s just a test (and a device loss doesn’t mean you’ll have problems), you could put two disks in a RAID 0 to simulate write performance; your read performance should then be better than that. Though a hardware RAID controller would skew those numbers anyway. RAID 5 would then give you read performance nearer to your RAID 10 idea.
It could be handy if you could tell us more about the purpose of this test: do you just want to get an immutable backup test in place, or are you trying to get any performance characteristics from the RAID array?
Hi,
Apart from a short bad feeling, a device loss will result in zero problems, as the data used for this test is already backed up by a different VBR server. Also, performance is no issue at all. As you stated, it is just a test for immutable backups.
The only goal to be achieved is a repository > 1 TB, so I have to “combine” at least 2 HDDs.
Use one disk for the OS and JBOD/RAID 0 the others; mdadm is the tool for this!
Or of course RAID 5; I’m just thinking of getting your test data written fast.
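To make the options concrete, here is a rough mdadm sketch, assuming the OS lives on /dev/sda and the three remaining 1 TB disks are /dev/sdb–/dev/sdd (hypothetical device names):

```bash
# Option 1: RAID 0 (striping) across the three data disks - max space and speed, no redundancy
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Option 2: simple concatenation ("JBOD"-style) instead of striping
# sudo mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Option 3: RAID 5 - roughly 2 TB usable out of 3 x 1 TB, survives one disk failure
# sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
```

Whichever level you pick, the resulting /dev/md0 is what you would format and hand to the repository setup.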
Hi all,
After some non-Veeam-related difficulties the test system works. All configuration could be done with the initial Ubuntu installer and the veeamhubrepo script.
At the moment there is only one issue which puzzles me. I have three 1 TB drives in a software RAID 5. The overview of the repository shows me:
Capacity: 1.8 TB
Free: 706 GB
Used space: 2.2 TB
There are 3 files on the repository:
a .vbk from 13.08.: 1.08 TB data size, 838 GB backup size, retention R
a .vib from 14.08.: 825 GB data size, 300 GB backup size
a .vbk from 15.08.: 1.81 TB data size, 1.05 TB backup size, retention W (weekly GFS)
Why do I have more used space than the given capacity?
I assume you are using XFS as the filesystem; in that case the full backups (.vbk) use block cloning and linking. With this, these files consume much less physical storage, and your repository can contain a lot more data than its physical capacity.
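To illustrate the mechanism (a sketch, not the exact commands the veeamhubrepo script runs): with XFS created the way Veeam’s fast clone expects, a cloned file shares its data blocks with the original, so the sum of the logical file sizes can exceed the filesystem’s capacity. The file names and mount path below are placeholders:

```bash
# XFS with reflink and CRC enabled, 4 KiB blocks - the layout fast clone relies on
sudo mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/md0

# A reflink copy shares blocks instead of duplicating them
cp --reflink=always full1.vbk full2.vbk
du -h full1.vbk full2.vbk   # each file reports its full logical size...
df -h /mnt/repo             # ...while the actual used space barely increases
```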
In addition to the post from @JMeixner you can check the following blog post on ReFS or watch the explanation on YouTube:
https://www.veeam.com/blog/advanced-refs-integration-coming-veeam-availability-suite.html
ReFS and XFS both have the same capabilities in regards to Veeam.
Yes, check out both of these. Veeam does not report the true on-disk numbers in the console when using these file systems.
All of the above comments from @JMeixner, @Chris.Childerhose and @regnor are spot on. I just want to add that it’s advised to ignore the benefits of ReFS/XFS when you size your production repository. You want to be sure you’re not relying solely on the space savings and then finding you can’t create a new active full chain when you need one, especially if the data is immutable.
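As a back-of-the-envelope illustration of that advice (purely made-up numbers, not an official Veeam sizing formula): size for the worst case as if every full occupied its complete logical size.

```bash
# Hypothetical retention: room for two full chains (so an active full always fits)
# plus six incrementals, taking no credit for XFS/ReFS block-clone savings.
FULL_TB=1.1
INC_TB=0.3
echo "Plan for at least $(echo "2 * $FULL_TB + 6 * $INC_TB" | bc) TB"   # -> 4.0 TB
```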
Definitely another key thing to add for sure. Otherwise you can run into trouble.
That’s a good point. The space savings are a nice benefit and you can achieve higher retention, but I also wouldn’t calculate it into the sizing.