My advice is to look at the entire architecture and design a plan before picking specific storage.
Step 1 - Find out how much space you need.
This is the most critical step. Factor in how many sites you plan on using as well. Is the second location going to be another datacenter or the cloud?
Many places have different policies, such as 30 days at the main site and then GFS at a secondary site. If you go immutable (which everyone should), keep in mind the additional restore points that may be kept to stay within policy.
After using the Veeam calculators, make sure to add some room for growth, and keep in mind that an 80% max is usually recommended for storage due to overhead, metadata, swing space, etc.
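As a back-of-napkin illustration (every number below is a made-up assumption; use the Veeam calculators for real sizing), the headroom math looks roughly like this:

```python
# Back-of-napkin repository sizing sketch.
# All inputs are illustrative assumptions, not recommendations.

source_tb = 100          # protected data at the primary site (assumed)
daily_change = 0.05      # assumed 5% daily change rate
retention_days = 30      # primary-site retention policy
growth_rate = 0.20       # assumed 20% annual growth
years = 3                # planned life of the hardware

full_tb = source_tb                                  # one full backup
incrementals_tb = source_tb * daily_change * retention_days
raw_need_tb = full_tb + incrementals_tb

# Size for growth over the life of the hardware.
grown_tb = raw_need_tb * (1 + growth_rate) ** years

# Keep the repository at or below ~80% utilization to leave room
# for overhead, metadata, and swing space.
buy_tb = grown_tb / 0.80

print(f"Estimated need today: {raw_need_tb:.1f} TB")
print(f"With {years}y growth:   {grown_tb:.1f} TB")
print(f"Capacity to buy:      {buy_tb:.1f} TB (at 80% max utilization)")
```

This ignores compression, dedupe, and the extra GFS/immutable restore points mentioned above, which is exactly why the real calculators are worth running first.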
Step 2 - What are your performance requirements?
As stated above, a dedupe appliance shouldn't be your primary backup target, and restores from one can take a long time. If you plan on doing Instant Recovery, SureBackup at scale, or other tasks that generate a lot of IO, you may hit performance issues. Dedupe appliances are great for saving space and long-term retention, which makes them a great secondary option.
I have NVMe storage at my primary site, and you can even put that in a SOBR with some slower disk if you size your pools correctly. This is great when I want to provide SureBackup labs for our apps teams to test updates on their systems. I have slower tiered disk at another site on immutable storage holding GFS copies for long-term retention.
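To make the tradeoff concrete, here's a rough sketch of the restore-time math. The throughput figures are hypothetical placeholders, not benchmarks of any particular appliance:

```python
# Rough restore-time comparison across storage tiers.
# Throughput numbers are hypothetical placeholders, not vendor benchmarks.

vm_size_gb = 2000  # size of the VM to restore (assumed)

tiers_mb_s = {
    "NVMe all-flash":       2000,  # assumed sequential read MB/s
    "Tiered spinning disk":  600,
    "Dedupe appliance":      150,  # rehydration cost dominates
}

for tier, mb_s in tiers_mb_s.items():
    hours = (vm_size_gb * 1024) / mb_s / 3600
    print(f"{tier:22s} ~{hours:.1f} h to restore {vm_size_gb} GB")
```

In practice the gap can be even wider than this simple math suggests, since Instant Recovery and SureBackup lean heavily on random IO, which dedupe appliances handle worse than sequential reads.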
Step 3 - Figure out your budget.
It’s easy to get swept away by a good sales pitch. Know your budget up front so you don’t burn time in a ton of meetings trying to find the right solution; you can often rule out a few vendors as out of your price range before even asking. Also, if you give vendors a price in mind, they will show you the solutions that fit it.
Step 4 - Features.
Think about everything from firmware updates to MFA on the storage, and any other features you may need.
Veeam works with anything from servers full of disks to large NVMe all-flash arrays. My workload involves many tape drives streaming at full tilt; repositories and proxies with multiple 32Gb Fibre Channel and 25Gb network links in LACP were required to keep from saturating the links. There is always a bottleneck somewhere, but looking at your disk latency, networking, and FC ports helps you see what your requirements are as well.
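As a quick sanity check on link sizing, the math is simple. The drive counts and per-drive speeds below are placeholders; plug in your own hardware:

```python
# Quick link-saturation check: aggregate tape throughput vs. link capacity.
# Drive counts and speeds are placeholders; substitute your own hardware.

drives = 12
drive_mb_s = 300          # assumed native streaming speed per drive (MB/s)
tape_demand_gbit = drives * drive_mb_s * 8 / 1000

links = {
    "1x 25GbE":         25,
    "2x 25GbE (LACP)":  50,
    "2x 32Gb FC":       64,
}

print(f"Aggregate tape demand: ~{tape_demand_gbit:.0f} Gbit/s")
for link, gbit in links.items():
    status = "OK" if gbit > tape_demand_gbit else "SATURATED"
    print(f"{link:18s} {gbit:3d} Gbit/s -> {status}")
```

One caveat worth remembering: LACP balances traffic per flow, so a single backup stream still tops out at one member link's speed; you need multiple concurrent streams to actually use the aggregate.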
At the end of the day, disk is disk, but keep those things in mind while choosing. I use a combination: the fast stuff for things that require performance, and the slower stuff for long-term retention and capacity.