You can also try IOMeter for testing I/O - Iometer project
Pretty useful tool and does a good job of testing.
I’ve used that same KB and can confirm it’s the best way to go about it. I recommend the Veeam KB for Windows as well, since it gives you an accurate representation of how your backups will perform.
There are many other benchmark tools out there, and you can tweak settings to show your max IOPS or throughput, but those are hero numbers, not what your backup jobs are actually going to do; the sketch below shows the kind of pattern a job actually drives.
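For example, here is a rough fio sketch of a backup-style pattern (large sequential writes, a couple of streams) as opposed to the usual 4k hero run. The 512k block size is an assumption about the job’s write pattern, and /mnt/repo is a placeholder for your repository path:

```bash
# Rough fio sketch of a backup-style workload: large sequential writes,
# a couple of concurrent streams. Closer to what a backup job drives
# than a 4k random hero run. Block size and job count are assumptions;
# adjust them to match your actual job settings.
fio --name=backup-like --rw=write --bs=512k \
    --ioengine=libaio --direct=1 --iodepth=8 --numjobs=2 \
    --size=10G --runtime=120 --time_based --group_reporting \
    --directory=/mnt/repo   # placeholder repository mount point
```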
To go one step further, networking, CPU, source storage, VMware, and other limitations also add bottlenecks during jobs.
I have a few flash SANs in my Veeam environment and was able to get some crazy numbers during benchmarking. In reality, my backup jobs run quickly, but I’m hitting the limits of networking and fiber long before the storage even starts to break a sweat.
That said, running backups has made my production storage work several times harder than it ever has.
If you want to test max IOPS, try a really low block size and maximize the threads/tasks (something like the fio sketch below). What type of storage are you using?
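Here’s one way to do that with fio on Linux. The depth/job counts are just a starting point, not gospel, and /mnt/repo is a placeholder:

```bash
# Push for max IOPS: small blocks and lots of parallelism.
# Keep raising --iodepth / --numjobs until IOPS stops climbing.
fio --name=max-iops --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=8 \
    --size=4G --runtime=60 --time_based --group_reporting \
    --directory=/mnt/repo   # placeholder repository mount point
```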
I’ll add: if you are using a SAN, even spinning disk, one server isn’t enough to push it in most cases, even with 32Gb fiber and several ports. I split my flash SANs up to have several repository servers so I can add proxies as I evergreen other servers. Running simultaneous benchmarks on both servers still didn’t make them break a sweat. At some point you have to choose performance vs. ongoing cost though.
The Max Fragmentation Read Test is good because it will show you the worst-case numbers you will ACTUALLY get doing restores from a fragmented backup.
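The KB’s test itself is diskspd on Windows; on Linux, a rough fio analog (not the KB’s exact command, just my approximation of jumping around a fragmented backup file) could look like this, again with /mnt/repo as a placeholder:

```bash
# Approximation of a fragmented-restore read: random reads at a large
# block size, single stream, which mimics hopping around a fragmented
# backup file rather than streaming it sequentially.
fio --name=frag-read --rw=randread --bs=512k \
    --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1 \
    --size=10G --runtime=120 --time_based --group_reporting \
    --directory=/mnt/repo   # placeholder repository mount point
```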
IOmeter is another great tool that @Chris.Childerhose recommended and one of my other “go-tos” for Windows.
I should also ask: are you just doing this out of curiosity, or are you having performance issues?
First of all, thanks for your answer @Scott.
I’m having some trouble with restore tasks and now I’m trying to investigate the possible causes.
This is a Linux server running on a 10Gb network. It’s a virtual machine (yeah, I know) with some mount points coming from the VMware infrastructure. These LUNs in the VMware environment come from a storage array whose full specs I don’t know yet.
At the moment I’m doing some fio tests and thinking about the I/O statistics…
What can I consider good and bad numbers?
Depends on what you are using for storage.
So the repo is a Linux VM with a 10Gb NIC, but what is the storage? Is it a SAN? Is it iSCSI or Fibre Channel? Is it local to the ESXi host?
How is it presented? How many disks, and are they spinning disks, SSDs, or NVMe?
Good/bad numbers don’t exist; each disk has a specific amount of possible IOPS, and more disks = more IOPS. Keep in mind, the numbers on your storage don’t actually matter if it isn’t causing issues.
If you have some of the above information, what results are you seeing?
On the other side of things, what is the transport mode being used? What type of restore are you doing? Is it a full VM, file-level, or VMDK-level?
More often than not, if it’s restores that are slow and not backups, it’s going to be something like the transport mode sending traffic over the VMware management network instead of the 10Gb, or a config issue. A quick check like the one below can rule the network path in or out.
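For instance, a simple iperf3 pair between the proxy and the repository will show whether the 10Gb path is actually carrying the traffic. The address here is a placeholder; use the IPs on the interfaces you expect the restore traffic to take:

```bash
# On the repository (target) side, start a listener:
iperf3 -s

# On the proxy (source) side, test against the repo's 10Gb address
# with 4 parallel streams for 30 seconds:
iperf3 -c 10.10.10.20 -P 4 -t 30

# If this shows ~9+ Gbit/s but restores still crawl, look at the
# transport mode and which network Veeam is selecting, not the storage.
```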