
I am often tasked with building storage-heavy systems, in terms of both throughput and I/O.

Some of the proprietary video systems generate a lot of I/O, and their users require a lot of space.

 

We all know M.2 NVMe drives are perfect for the task, but what happens when you need larger volumes?

 

Software RAID or Storage Spaces is easy to set up, but the performance penalties are not small. Hardware RAID performs much better, but what about when you want to take it to the next level?

 

I purchased a GRAID SupremeRAID SR-1001 and a Liqid LQD4500 (Honey Badger) to do some testing.

The SR-1001 is a GPU-accelerated RAID card, and the LQD4500 packs 8 M.2 drives onto a single PCIe card.

 

You can see that after installing the LQD4500, Device Manager shows 8 Samsung disks added to the 3 I already had in the system.

 

 

 

For the first test, I created a 28TB volume using Storage Spaces with default settings and ran a default test in CrystalDiskMark.
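For anyone who prefers scripting that setup, the PowerShell below is a rough equivalent. The pool and space names, the Samsung filter, and the Simple (striped) layout are my assumptions based on the 28TB capacity, so adjust to taste.

  # Rough PowerShell equivalent of the Storage Spaces setup (names and filter are illustrative).
  $disks = Get-PhysicalDisk -CanPool $true | Where-Object FriendlyName -like "*Samsung*"

  # Create a pool from the eight LQD4500 drives.
  New-StoragePool -FriendlyName "HoneyBadgerPool" `
      -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

  # A Simple (striped, no resiliency) space using all capacity; initialize and format it afterwards as usual.
  New-VirtualDisk -StoragePoolFriendlyName "HoneyBadgerPool" -FriendlyName "StripeSpace" `
      -ResiliencySettingName Simple -UseMaximumSize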

 

Having an idea of the performance you should be expecting can really help you tune things or choose the appropriate setup when designing systems. I know that these drives can perform significantly better than the numbers above.

 

Originally I started testing on my personal PC and had poor performance. Without getting too far down the rabbit hole: I am putting 8 drives into a single PCIe slot. The Liqid card is a PCIe Gen4 card; the GRAID SR-1001 is a PCIe Gen3 card. PCIe Gen4 moves roughly 2GB/s per lane, so based on the chart below, a Gen4 card needs a full x16 link to reach about 32GB/s; if it only negotiates x8, your max speed will be around 16GB/s.

I was able to get over 28GB/s using individual drives, but as soon as I used the GPU RAID card, my performance would drop. This was due to the Intel processor not having enough PCIe lanes to perform the task.

 

 

 

HWMonitor is a great application for looking at the devices in your system and seeing how many lanes, and at what PCIe generation, each one is running. Many motherboards will have 1 or 2 Gen5 PCIe slots and add Gen3 or Gen4 slots for cost savings. If you are heavy into storage, or utilize many PCIe cards, remember that they share lanes and buses.

 

For the following tests, I am using a Lenovo ThinkStation P8 with a Threadripper PRO 7955WX. Because it has 128 PCIe Gen5 lanes, I can use many cards and drives (GPU, RAID card, Liqid NVMe card, 25Gb PCIe network card) without the CPU or motherboard causing a bottleneck.

 

The first step was sorting out the drivers. If you are using an RTX GPU, make sure to use the RTX drivers and install them before the GRAID driver. Seeing as this card should live in a server, having a gaming GPU alongside it is usually not an issue. GRAID support was unbelievable, helping me test drivers and working with their engineering department to make sure we got it going. There were also a few Nvidia driver versions that were not compatible, as the SR-1001 is a Quadro card and my GPU is an RTX 3070.

 

SETUP

Here is a quick view of the disks from the Liqid card.

 

Using the graidctl command, you can generate a list of the options available for the SupremeRAID adapter.
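For reference, these are the main verbs I'll be using. Exact syntax and output vary between SupremeRAID driver releases, so double-check against the built-in help or the user guide for your build.

  # The create/list/delete verbs pair with three object types, used in this order below:
  #   physical_drive -> drive_group -> virtual_drive
  graidctl list physical_drive   # NVMe devices currently handed to the SR-1001
  graidctl list drive_group      # RAID sets built from those physical drives
  graidctl list virtual_drive    # arrays exposed back to Windows as disks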

 

First, we want to create a “physical_drive” and select the disks we want the SR-1001 to control.
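A sketch of that step is below. How you identify the disks (disk number, device path, or NQN) depends on the platform and driver version, and the IDs here are hypothetical, so list your drives first and substitute your own.

  # Hand the eight Liqid-hosted NVMe disks over to the SR-1001 (disk IDs are hypothetical).
  graidctl create physical_drive 3,4,5,6,7,8,9,10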

 

The drives will completely disappear from Device Manager and Disk Management in Windows after you create the physical drives. The GPU takes total control of them and prevents you from doing anything to them that could cause an issue.

 

 

The following disks were already in my system before starting the test. 

 

 

Let’s list the drives and see how they show up in the graidctl app.
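Something along these lines; the exact columns depend on the driver release, but each drive should get a physical drive (PD) ID that you reference in the next step.

  graidctl list physical_drive   # each disk gets a PD ID and an online/unused state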

 

 

Next, we will create a drive group, selecting the type of RAID to use and the drives to add to the array.
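A hedged example of that command is below. The RAID-level keyword and the PD ID range follow the SupremeRAID documentation I've seen, but your PD IDs will differ, and raid0 is shown purely as an example; substitute raid5, raid6, or raid10 if you want redundancy at some capacity cost.

  # Combine the eight physical drives (PD IDs 0-7 assumed) into one drive group.
  graidctl create drive_group raid0 0-7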

 

I created 1 large drive using all of the disks.

 

Next, we will create the virtual drive. From here, you select the drive group (in our case there is only one).
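Something like this, again with the drive group ID assumed to be 0:

  # Create one virtual drive on drive group 0, using all of its capacity.
  graidctl create virtual_drive 0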

 

 

Windows will now see it as a single 28TB disk attached to the system.

 

 

After formatting it, it appears as a regular hard drive.
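If you prefer PowerShell over Disk Management, something like this brings the new disk online; the GPT partition style, 64K allocation unit, and volume label are my choices, not necessarily what was used here.

  # Find the new RAW disk presented by the SR-1001, then initialize, partition, and format it.
  # (Assumes the GRAID virtual drive is the only RAW disk in the system.)
  Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
      Initialize-Disk -PartitionStyle GPT -PassThru |
      New-Partition -AssignDriveLetter -UseMaximumSize |
      Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "GRAID"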

 

 

Running a basic benchmark, you can see the IOPS are much higher than with Storage Spaces.

 

Without doing anything at all, that is a significant increase in I/O and throughput, but let's run some other tests and push these drives.

 

For the following test, I increased the block size to 8 MiB with queue depths ranging from 8 to 64 to generate “hero” type numbers. 27.8GB/s is some serious transfer speed.
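CrystalDiskMark drives DiskSpd under the hood, so a roughly equivalent standalone run looks like the line below; the drive letter, file size, duration, and thread count are my own picks rather than the exact profile used here.

  # Sequential 8 MiB reads: 8 threads x QD32, caching disabled, 60s run against a 64 GiB test file.
  .\diskspd.exe -b8M -d60 -o32 -t8 -w0 -W5 -Sh -L -c64G S:\diskspd-test.dat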

 

Running the Peak Performance test, I was able to generate 1.3 million IOPS for reads and 700K for writes in the random 4K test.
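For the 4K random numbers, the DiskSpd equivalents look roughly like this (reusing the test file from the previous run, with illustrative thread and queue-depth settings):

  # Random 4 KiB reads, then writes; IOPS scale with threads x queue depth until the drives or CPU saturate.
  .\diskspd.exe -b4K -r -d60 -o32 -t16 -w0 -Sh -L S:\diskspd-test.dat
  .\diskspd.exe -b4K -r -d60 -o32 -t16 -w100 -Sh -L S:\diskspd-test.dat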

 

 

 

 

 

Conclusion

 

The Liqid cards get hot. They are not meant for a workstation; if you plan to get one, I recommend a large server with fans at max speed. Hitting 28GB/s is easy (especially if you are using it as independent disks), but you will need plenty of airflow to keep it cool. If you can keep them cool, the density of 28TB per PCIe slot could create a large, fast storage array while using just one more slot for the RAID card. The performance was as advertised, and it is a beast of a card.

 

The GRAID adapter can work with any physical disk but shines with M.2 NVMe drives. It’s rated for up to 80GB/s of 1M sequential reads and 6 million IOPS of 4K random reads. Their flagship can hit 260GB/s of throughput and 28 million IOPS!

 

There is no doubt that with GPU-accelerated RAID you can increase RAID performance and maximize your storage throughput and IOPS. The main issue will be finding workloads that require it. I plan to do some additional tests in my Veeam lab for fun to see the speed differences on some of the tasks and functions.

 

Storage bottlenecks are a thing of the past with a properly designed system. Just make sure you have the workload before you overshoot your requirements and budget. 

 

 

Some very interesting testing there Scott.  Love to see this kind of thing within the community as it helps to know what is out there.  Definitely some fast speeds there. 😎

I got deep into the weeds on this one using multiple benchmarking tools, with/without write cache, and many other tests. This was a slightly condensed version, but either way, the GRAID card rocks. The Liqid card is a beast too, but I wouldn’t want it to sustain some of the temps I hit in this workstation for too long. It’ll end up in a lab server shortly with more fans.


Good work, thanks for sharing. Wow, awesome speed 😍

When I had some of the caching enabled, I was able to get even more!! I just didn’t include it in the write-up, as it’s not a “real” workload. I didn’t push past this because it didn’t seem necessary.

 

I’m excited to add additional drives. 80GB/s here we come lol. 

 

