Solved

Get maximum restore performance from Azure Blob Storage to Azure VM


Userlevel 3

I’m testing the restore process from Azure Blob Storage to an Azure VM. Here is my current configuration:

 

VBR11 installed from the Azure Marketplace: Standard_F4s_v2

Azure proxy VM: Standard_F4s_v2

Azure Blob Storage account where the VM backup sits: Premium

I restore the VM to another Standard_F4s_v2, and the maximum speed I get is… 150 MB/s

 

Am I missing some configuration, or is it an Azure limitation?


Best answer by MicoolPaul 2 October 2022, 10:10


6 comments

Userlevel 5
Badge +2

I guess it’s the Azure limitation for the selected VM SKU: On this page, “Standard_F4s_v2” is listed with a disk throughput between 95 and 200 MB/s.
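If you want to confirm that cap for yourself rather than read it off the docs page, a rough Python sketch along these lines could pull the values from the Resource SKUs API. Treat it as a sketch only: the azure-identity / azure-mgmt-compute packages, the capability names and the placeholder subscription ID are all assumptions on my part.

```python
# Minimal sketch for checking the per-VM disk throughput limits via the
# Resource SKUs API. Assumes the azure-identity and azure-mgmt-compute
# packages, and that the limits are exposed as the UncachedDiskIOPS /
# UncachedDiskBytesPerSecond capabilities; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for sku in client.resource_skus.list(filter="location eq 'eastus'"):
    if sku.resource_type == "virtualMachines" and sku.name == "Standard_F4s_v2":
        for cap in sku.capabilities or []:
            if cap.name in ("UncachedDiskIOPS", "UncachedDiskBytesPerSecond"):
                print(f"{cap.name}: {cap.value}")
```

For the Standard_F4s_v2 that band tops out around 200 MB/s, which lines up with the ~150 MB/s you’re seeing.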

Userlevel 7
Badge +20

I guess it’s the Azure limitation for the selected VM SKU: On this page, “Standard_F4s_v2” is listed with a disk throughput between 95 and 200 MB/s.

This is more than likely the reason.

Userlevel 3

I’m still testing and the results are bad.

 

Standard_F64s_v2 - VBR11
Standard_F48s_v2 - VM

With this configuration I can only get 250-280 MB/s. Which VM size should I choose to double the performance?

Userlevel 7
Badge +20

Hi,

 

We’ve got a lot to unpack here.

 

• What region are these resources based in? Going cross-region can add cost and throughput restrictions.

• What redundancy option have you got selected for the storage account(s), both source and destination? Different redundancy options offer different throughputs, so this could impact the write speed of the restore.

• Are you trying to create the Azure VM in the same storage account or another one? Separate storage accounts should be used, otherwise you’ll hit storage-account-level limits.

• What block size was used for your backup? (As I’m reading it, this is a VBR backup you’re attempting to restore to Azure.) I’ve got two comments on this. Firstly, premium block blob storage is optimised for smaller, kilobyte-level transactions (https://learn.microsoft.com/en-us/azure/storage/blobs/scalability-targets-premium-block-blobs); I wouldn’t suggest premium in general unless you’ve validated that your workload needs it and will benefit from it. Secondly, with standard storage you’re capped by default at 20,000 requests per second per storage account. If your blocks were 1 MB, you’ve got a very high throughput ceiling to reach; if you used 256 KB, your ceiling will be lower, although you can request an increase via Azure support. Details here: https://learn.microsoft.com/en-us/azure/storage/common/scalability-targets-standard-account. A rough ceiling calculation is sketched below this list.

Let’s start with this and work back. There might be further questions based on your responses.
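A back-of-the-envelope sketch of that ceiling, using the default 20,000 requests per second per standard storage account. Treat the result as an upper bound only; it ignores latency, parallelism and any VM-side limits.

```python
# Rough throughput ceiling for a standard storage account:
# ceiling ~= request-rate cap x block size.
REQUESTS_PER_SECOND = 20_000  # default cap per standard storage account

for label, block_bytes in (("1 MB blocks", 1024 * 1024),
                           ("256 KB blocks", 256 * 1024)):
    ceiling_mb_per_s = REQUESTS_PER_SECOND * block_bytes / (1024 * 1024)
    print(f"{label}: ~{ceiling_mb_per_s:,.0f} MB/s theoretical ceiling")
# 1 MB blocks   -> ~20,000 MB/s
# 256 KB blocks -> ~5,000 MB/s
```

With 1 MB blocks the request-rate cap sits far above anything a single VM can push, whereas 256 KB blocks quarter that ceiling.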

Userlevel 3

The separate source and destination storage accounts are locally-redundant storage (LRS).

The block size is 1 MB.

I’ve tried restoring to both managed and unmanaged premium SSD disks.

 

I’ve just run an azcopy benchmark from the VBR11 server in Azure to that container, and the result is 2500 MB/s… so I’m a bit confused as to why VBR only gets 250-300 MB/s.
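For anyone wanting to repeat that comparison, a rough sketch of such a benchmark run is below. The container URL/SAS is a placeholder and the flags are assumptions taken from the azcopy bench documentation, so verify them against your AzCopy version first.

```python
# Sketch of repeating the comparison with AzCopy's built-in benchmark,
# assuming AzCopy v10 is on PATH. URL/SAS and flags are placeholders/assumptions.
import subprocess

CONTAINER_URL = "https://<account>.blob.core.windows.net/<container>?<sas-token>"

subprocess.run(
    ["azcopy", "bench", CONTAINER_URL,
     "--mode", "Download",       # read test, closer to what a restore does
     "--file-count", "64",
     "--size-per-file", "1G"],
    check=True,
)
```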

Userlevel 3

I tried another approach with the configuration below:

VBR11 in Azure : Standard_F4s_v2

Azure Proxy : Standard_F48s_v2

VM size to recover : Standard_F48s_v2

2 separate “standard” LRS storage accounts in the same region (East US)

1 MB block size

Tested with both unmanaged disks and managed premium SSD disks

 

The VM to restore consists of 2 disks:

  • Ide0-0\APP01.vhdx (250 GB)
  • Scsi0-0\DATA1.vhdx (500 GB)

The highest speeds I got are:

Restoring Ide0-0\APP01.vhdx (250 GB) : 79.4 GB restored at 257 MB/s
Restoring Scsi0-0\DATA1.vhdx (500 GB) : 125.1 GB restored at 324 MB/s

 

This is really slow compared to the capabilities of a Standard_F48s_v2.

 

Is there anything else I can change and test? I think I’ve checked all of the scenarios.
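Putting those per-disk numbers together as a quick back-of-the-envelope check (assuming the two disks were restored in parallel, which may not match how VBR actually scheduled them):

```python
# Sanity check on the figures above: per-disk restore duration and the
# aggregate rate, assuming both disks restored in parallel (an assumption).
disks = {
    "Ide0-0\\APP01.vhdx":  (79.4 * 1024, 257),   # (MB restored, MB/s)
    "Scsi0-0\\DATA1.vhdx": (125.1 * 1024, 324),
}

for name, (restored_mb, rate_mb_s) in disks.items():
    print(f"{name}: {restored_mb / rate_mb_s / 60:.1f} min at {rate_mb_s} MB/s")

print(f"Aggregate if parallel: ~{sum(r for _, r in disks.values())} MB/s")
```

Even the combined rate of roughly 580 MB/s stays well below what the Standard_F48s_v2 should be capable of, which is the gap in question here.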
