- Are you using HDD or SSD?
For the direct attached storage, HDDs. - How many disks do you use?
I use 6 x 6 TB disks. - One big RAID or multiples?
One big RAID.
Finally, why did you choose this RAID level?
I chose RAID 5 for its resiliency and efficiency, and in the worst-case scenario where two disks fail and I lose the array, I still have other copies of the backup: this is the local repo, then the backup copy and the cloud, following the 3-2-1 rule.
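For reference, a quick back-of-the-envelope on the capacity side of that trade-off (plain N-minus-parity arithmetic; real arrays lose a little more to metadata and filesystem overhead):

```python
# Rough usable-capacity comparison for 6 x 6 TB disks.
# Simple N-minus-parity arithmetic only.
disks, size_tb = 6, 6

raid5_usable = (disks - 1) * size_tb   # 1 parity disk -> 30 TB, survives 1 failure
raid6_usable = (disks - 2) * size_tb   # 2 parity disks -> 24 TB, survives 2 failures

print(f"RAID 5: {raid5_usable} TB usable, RAID 6: {raid6_usable} TB usable")
```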
cheers.
Hey Rasmus -
On one of my servers that I keep in “isolation”, I use 8 local 15 TB SAS HDDs and perform daily immutable backups to it. I created one RAID 60 across all disks in this host for the performance as well as the dual redundancy.
Best
For any physical boxes we use, like Veeam appliances or VHRs, we use HDDs in various sizes with RAID 6 for dual redundancy. That is the way the company has done it for years.
For physical servers we use RAID 6, usually with 10 or 12 x HDDs, and a separate RAID 1 boot volume.
The storage volume is provisioned as a single logical drive.
Hello,
HDD here for all of my customers.
RAID 6 99% of the time.
If there are more than 12-14 disks, I create 2 RAID 6 volumes.
The last customer has 2 x RAID 6 with 14 x 16 TB disks, and I created a SOBR in Veeam.
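Roughly what that looks like space-wise if you treat each RAID 6 volume as one SOBR extent; this is just plain parity arithmetic with made-up extent names, not Veeam output:

```python
# Two RAID 6 extents of 14 x 16 TB each, pooled into one SOBR.
# Each RAID 6 set gives (disks - 2) data disks of usable space.
extents = [
    {"name": "extent-01", "disks": 14, "disk_tb": 16},
    {"name": "extent-02", "disks": 14, "disk_tb": 16},
]

for e in extents:
    e["usable_tb"] = (e["disks"] - 2) * e["disk_tb"]
    print(f'{e["name"]}: {e["usable_tb"]} TB usable')

print("SOBR total:", sum(e["usable_tb"] for e in extents), "TB")
```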
- Are you using HDD or SSD? - HDD, for density
- How many disks do you use? 50-60 per box
- One big RAID or multiples? Multiple. Guidance from hardware vendors has always been 2x RAID controllers and a max of 30 spindles per RAID controller, as the controller ends up being the bottleneck (quick sketch of that split below).
This is for DAS boxes like HPE Apollos. I like having a flash landing zone where possible, but where workloads don’t require block storage, since v12 we’ve started to talk about the performance tier being object storage. That way we don’t have the issues of multiple SOBR extents per box and carving up resources; we can just use proxies as gateways direct to object.
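A quick sketch of that spindle split, assuming the ~30-spindle-per-controller guidance above (the cap is vendor advice, not a hard limit):

```python
# Sketch: spread a box's spindles across RAID controllers so no
# controller exceeds the recommended cap (~30 spindles here).
import math

def split_spindles(total_disks: int, max_per_controller: int = 30) -> list[int]:
    controllers = math.ceil(total_disks / max_per_controller)
    base, extra = divmod(total_disks, controllers)
    # Distribute any remainder so the groups stay as even as possible.
    return [base + (1 if i < extra else 0) for i in range(controllers)]

print(split_spindles(60))  # [30, 30] -> 2 controllers
print(split_spindles(50))  # [25, 25] -> 2 controllers
```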
DRAID6 on our SANs. With the size of disks and FlashCore modules these days, fast rebuild is a pretty big requirement. I don’t want a failure during a rebuild.
As someone who worked in the storage industry for many years, I can tell you disk rebuilds would often show you if another drive was on the verge of dying. That, or a power outage.
For smaller boxes, RAID 6 is pretty common. A lot depends on how many disks I have and the need for redundancy/performance. Even for SANs, it’s more NVMe and SSD these days, so performance is becoming less of a concern.
“It depends”.
I don’t generally use DAS with a purpose-built server and am instead using local disks, but I don’t think it terribly matters. In most cases I’m using a RAID 5, but for those clients that want extra redundancy or I have extra disks to spare, RAID 6 is utilized. In the past I had used RAID 10, but I found that I didn’t really need the extra performance and capacity was a higher priority. I haven’t really looked into utilizing RAID 50 or 60 but I suppose that could be an option as well. Generally with my smaller boxes (up to 40ish TB), RAID 5 or 6 suits just fine.
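To put rough numbers on that capacity-versus-redundancy trade-off, here is a small sketch comparing the levels mentioned above; the 12 x 4 TB example is arbitrary and the arithmetic is the simple parity/mirror kind, ignoring controller overhead:

```python
# Usable capacity and worst-case fault tolerance for the RAID levels
# discussed above, using simple parity/mirror arithmetic.
def raid_summary(level: str, disks: int, disk_tb: float, spans: int = 2):
    if level == "raid5":
        return (disks - 1) * disk_tb, 1              # 1 parity disk
    if level == "raid6":
        return (disks - 2) * disk_tb, 2              # 2 parity disks
    if level == "raid10":
        return disks // 2 * disk_tb, 1               # mirrors; worst case 1 failure
    if level == "raid50":
        return (disks - spans) * disk_tb, 1          # 1 parity disk per span
    if level == "raid60":
        return (disks - 2 * spans) * disk_tb, 2      # 2 parity disks per span
    raise ValueError(level)

for lvl in ("raid5", "raid6", "raid10", "raid50", "raid60"):
    usable, tolerated = raid_summary(lvl, disks=12, disk_tb=4)
    print(f"{lvl}: {usable:.0f} TB usable, survives {tolerated} failure(s) worst case")
```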
Hi @haslund,
- Are you using HDD or SSD? I would recommend 15K RPM HDDs.
- How many disks do you use? Up to 16 HDDs, including a hot spare disk per set or a global hot spare.
- One big RAID or multiples? If the RAID controller allows it, multiple RAID sets.
Finally, why did you choose this RAID level?
RAID 6 with a hot spare, because rebuilding a RAID 5 is too time consuming and sometimes takes days. Using RAID 6 avoids the situation where you lose another disk during the RAID 5 rebuild process.
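A rough way to see why those rebuilds stretch into days; the rebuild rates below are assumed figures, since real controllers throttle rebuilds heavily under production load:

```python
# Very rough rebuild-time estimate: the controller has to re-create one
# full disk's worth of data, so time scales with disk size / rebuild rate.
def rebuild_hours(disk_tb: float, rebuild_mb_s: float) -> float:
    return disk_tb * 1_000_000 / rebuild_mb_s / 3600

# Assumed rates: rebuilds under production load often crawl.
for rate in (50, 100, 200):  # MB/s
    print(f"16 TB disk at {rate} MB/s: ~{rebuild_hours(16, rate):.0f} h")
```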
I’ll also consider block size, stripe size, BBWC/FBWC, and 90% to 95% of the controller cache configured for write operations to get the most performance.
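On the block size and stripe point, the number I usually work out is the full-stripe-write size, since writes that are a multiple of it avoid the read-modify-write penalty; the 14-disk / 256 KB strip values below are just example numbers:

```python
# Full-stripe-write size for a parity RAID set:
# (data disks) x (strip size per disk). Writes sized in multiples of
# this avoid partial-stripe (read-modify-write) penalties.
def full_stripe_kb(disks: int, parity_disks: int, strip_kb: int) -> int:
    return (disks - parity_disks) * strip_kb

# Example: 14-disk RAID 6 with a 256 KB strip per disk.
print(full_stripe_kb(disks=14, parity_disks=2, strip_kb=256), "KB")  # 3072 KB
```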