Hi Yamaris,
So DAS is an umbrella term for storage that is ‘directly attached’ to a server; this could be via SATA/SAS, or you could find it’s via iSCSI or Fibre Channel. The bottom line is that your operating system sees it as a local drive, rather than connecting to an NFS/SMB endpoint served over the network.
Veeam installs a component onto that server to interface directly with the storage as opposed to communicating with the storage via a network endpoint.
This doesn’t necessarily equate to a performance improvement by itself. But there are reasons why it can:
- Consolidation of server roles: If you’re using the same server as a proxy & repository, you can avoid additional network strain by committing the data to disk on the same server that has read the data from the production environment.
- High-bandwidth storage connectivity: Direct Attached Storage can use interfaces such as 6Gbps SATA, 12Gbps SAS, or 8/16/32Gbps Fibre Channel, all of which exceed a standard 1Gbps production network, and some of which exceed even a 10Gbps production network. It’s also not uncommon to see a slower production network paired with a faster storage network backend, such as a 1Gbps production network with iSCSI on a 10Gbps network, or a 10Gbps production network on a 25/40Gbps iSCSI storage network (see the sketch after this list).
- Better storage hardware: NFS/SMB-accessed storage is commonly delivered by a lower-end NAS, whereas DAS is typically either a SAN or a server with a much faster RAID controller with battery-backed write-cache. The battery-backed write-cache is important for keeping the storage fast & responsive, whilst also protecting against data loss should power fail.
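To put the high-bandwidth point above into rough numbers, here’s a small sketch (Python, purely illustrative) that converts the raw line rates of the interfaces mentioned into MB/s. The figures ignore encoding and protocol overhead (8b/10b, 64b/66b, TCP/IP for iSCSI, etc.), so treat them as ballpark values only; real backup throughput also depends on the disks, RAID level and compression.

```python
# Back-of-envelope comparison of the raw line rates mentioned above.
# These ignore encoding and protocol overhead, so real throughput is lower.
interfaces_gbps = {
    "SATA (6 Gbps)": 6,
    "SAS (12 Gbps, per lane)": 12,
    "Fibre Channel (32 Gbps, top option)": 32,
    "1 GbE production network": 1,
    "10 GbE production network": 10,
    "25/40 GbE iSCSI network (top option)": 40,
}

for name, gbps in sorted(interfaces_gbps.items(), key=lambda kv: -kv[1]):
    mb_per_s = gbps * 1000 / 8  # Gbps -> MB/s (decimal units)
    print(f"{name:<38} ~{mb_per_s:>5.0f} MB/s raw")
```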
To make this more relevant to the storage you’re looking at, you need to connect the storage to an HBA (a RAID controller that supports external storage). You’ll want as powerful an HBA as you can get, to ensure it can saturate the IO capabilities of your purchased storage. When expanding out DAS, you have two options: you can daisy chain identical/similar DAS storage devices, subject to vendor compatibility, or you can connect different DAS devices directly to separate RAID controllers.
Pros for daisy chaining:
- A single, larger pool of storage, making it easier to take a set & forget approach.
- Handy for fewer PCI-E cards, especially when servers are constrained for space when retrofitting to existing infrastructure.
Cons for daisy chaining:
- A single RAID controller will increasingly become a bottleneck as more disks are attached to it.
- (Assuming dual SAS connections) As you add disks, the SAS connectors might reach their maximum throughput before you reach maximum disk IO utilisation (see the sketch after this list).
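To illustrate that last con, here’s a rough sketch of when the daisy-chained SAS link, rather than the disks, becomes the limit. The 4-lane 12Gbps wide-port figure and the per-disk throughput are assumptions for illustration only; substitute the specs of your actual disks and HBA.

```python
# Rough estimate of when a daisy-chained SAS connection, not the disks,
# becomes the bottleneck. All figures are illustrative assumptions.
SAS_WIDE_PORT_MBPS = 4 * 12_000 / 8   # 4 lanes x 12 Gbps, ~6000 MB/s raw
PER_DISK_SEQ_MBPS = 250               # assumed sequential MB/s per NL-SAS disk

for disks in (12, 24, 36, 48, 60):
    aggregate = disks * PER_DISK_SEQ_MBPS
    limit = "SAS link" if aggregate > SAS_WIDE_PORT_MBPS else "disks"
    print(f"{disks:>2} disks: ~{aggregate:>5.0f} MB/s aggregate -> limited by the {limit}")
```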
Pros for multiple HBAs:
- Increased performance potential vs a single HBA
Cons for multiple HBAs:
- Additional HBAs mean more investment & more PCI-E slots consumed, which might require larger servers.
- Storage will appear as multiple drives instead of a single storage unit, which increases the complexity of storage management.
- If you reach the limit of your server’s capacity (CPU, RAM, network, etc.), you won’t receive any extra benefit that relies upon the constrained resource.
To underline all this, what does your Veeam topology look like currently?
I recall a time when I simplified a customer’s architecture dramatically because they wanted to create a virtual proxy per ESXi host for hot-add mode, and then add a bunch of gateway roles to these proxies so they could also connect via dedicated ESXi NICs to a bunch of NAS devices.
I proposed 2x Proxy/Repository servers with decent RAID controllers, 80% populated with disks in a RAID 60 configuration, and upgraded NICs to ensure high-speed connectivity to the hosts & back-end storage. It worked out cheaper to do this than to purchase all the extra NAS devices that were planned, and because we could then leverage ReFS/XFS storage efficiency (‘Fast Clone’), the amount of storage required dropped dramatically.
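To give a feel for the sizing maths behind that kind of design, here’s a small illustrative sketch: RAID 60 usable capacity for an assumed disk layout, plus a highly simplified comparison of on-disk usage for synthetic fulls with and without Fast Clone. Every number below is hypothetical rather than the customer’s actual figures.

```python
# Illustrative sizing maths only -- disk counts, sizes and backup figures
# below are hypothetical, not the actual customer's numbers.
RAID6_GROUPS = 2       # RAID 60 = striping across RAID 6 groups
DISKS_PER_GROUP = 12   # assumed disks per RAID 6 group
DISK_TB = 8            # assumed disk size

usable_tb = RAID6_GROUPS * (DISKS_PER_GROUP - 2) * DISK_TB
print(f"RAID 60 usable capacity: ~{usable_tb} TB "
      f"({RAID6_GROUPS} groups x ({DISKS_PER_GROUP} - 2 parity disks) x {DISK_TB} TB)")

# Simplified view of 4 retained weekly synthetic fulls: assumed 10 TB full,
# 0.5 TB daily incrementals, 6 incrementals per week.
full_tb, incr_tb, weeks, incrs_per_week = 10, 0.5, 4, 6
without_fast_clone = weeks * full_tb + weeks * incrs_per_week * incr_tb
with_fast_clone = full_tb + weeks * incrs_per_week * incr_tb  # fulls share existing blocks
print(f"Without Fast Clone: ~{without_fast_clone:.0f} TB on disk")
print(f"With Fast Clone:    ~{with_fast_clone:.0f} TB on disk (synthetic fulls reference existing blocks)")
```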
Wow! I think Michael has covered this one, not much else to add.