
As a service provider, we regularly have customers with unstable internet connections for some or all of their devices: some on slow DSL connections, some on cellular hotspots, and some with high-speed connections that still fail from time to time.

In my experience, this is one major area where Veeam falls short compared to other software: it doesn't handle poor connections very well. I figure this is mostly a leftover from Veeam's original "in-house" design for backing up VMware systems locally; now that it's widely used by service providers backing up over the internet, that design doesn't seem to hold up.

I'm curious whether other people have had similar issues and what you may have done to resolve them. I'll admit I'm really tired of implementing more tweaks, workarounds, and registry and configuration file changes to make Veeam work when I know I could just be using other software that would "just work", but that's not my decision to make. So I thought I'd ask whether anyone has any general or specific advice on the matter.

Before someone asks "Have you contacted support about it?": I have, and Veeam's support was their typically unhelpful selves on the matter, suggesting I try a different router or a different ISP. I'm not even going to consider telling my customers to do that when, supposedly (and as I would expect of backup software), there are no specific infrastructure requirements to use Veeam.

Thanks for sharing your opinion; it's a nice change from what I usually hear from Veeam, which is that I'm the only customer who thinks this is a problem and that no one at Veeam thinks it is either. (I assume either Veeam gives that answer because they don't want to fix things, or other people do what I would have done if I were in a position to make the decision and simply switch to other software that doesn't have these sorts of issues.)

As far as Object Storage versus standard VCC repository storage goes, I'm assuming the only reason it would behave differently at all is the presence or absence of a block-by-block catalog of the data, which I imagine exists for Object Storage backups, though I don't know exactly how it works on the back end. I would think, if anything, it would be easy enough to add for VCC repositories, since Veeam controls the server-side software (the VCC server), so the server could handle things like indexing and cataloging the data without the customer's infrastructure (agents, VBR servers, anything) necessarily needing to be involved.
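To illustrate what I mean by a block-by-block catalog (just a sketch of the general idea with made-up names; I have no idea how Veeam actually implements this on the back end): if the server keeps an index of which blocks it has already received, an interrupted upload can simply skip those blocks on the next attempt instead of starting over.

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks, arbitrary for the example


class BlockCatalog:
    """Hypothetical server-side index of blocks already received."""

    def __init__(self):
        self._stored = set()  # hashes of blocks the repository already holds

    def has(self, digest: str) -> bool:
        return digest in self._stored

    def add(self, digest: str) -> None:
        self._stored.add(digest)


def upload_with_resume(path: str, catalog: BlockCatalog, send_block) -> None:
    """Send only blocks the catalog doesn't already have.

    If a previous transfer died halfway through, the blocks that made it
    are still in the catalog, so they get skipped this time around.
    send_block is a placeholder for whatever the real transport would be.
    """
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if catalog.has(digest):
                continue  # already on the server from the interrupted attempt
            send_block(digest, block)
            catalog.add(digest)
```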

Just my thought, though; if it works, that would be good enough for me. I do think Veeam's dev team has a tendency to badly overcomplicate basic functionality (which in turn leads to things breaking more often and taking more effort to fix or extend), but as long as it works I don't care too much how it's done.


There are some very valid points here. Personally, while the Veeam Agent was great on-premises and works okay for cloud, I think there's a lot of room for improvement in the agent now with Cloud Connect backups or direct-to-object-storage in general. My recommendation for now is to submit this in the R&D forums as a feature request, but it's something I can bring up as well at the next Veeam 100 Summit in October. As to your points, really it's both: there needs to be a decent internet connection if you're backing up over the internet, but the recovery/resume process needs to be better as well. I think with object storage it would maybe be easier to look at the most recent blocks that were uploaded and resume from there (I could be wrong), but I suspect that's harder when using a VCC repository. Either way, I agree there's certainly some room for improvement here.


I have considered splitting some large jobs into multiple smaller ones, and for some larger servers I've already done that, but every time I do, it's another tweak/workaround to make Veeam work. Similarly, it's been suggested to me multiple times that I can resolve the "Veeam doesn't retry a job if the computer restarts" issue by using a Windows Scheduled Task to run a script that starts the job after the computer boots. However, that's another highly inconvenient thing to set up and manage, and it's not technically a "retry" as Veeam sees it, so it won't "resume" the last backup; it deletes the last backup and starts over, because Veeam treats partial backups as useless.
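For anyone wondering, that Scheduled Task workaround looks roughly like this (a minimal sketch; schtasks and its switches are standard Windows, but the backup command it points at is a placeholder, since how you actually kick off the agent job on a given machine is its own question):

```python
import subprocess

# Placeholder: whatever command actually starts the agent job on this machine.
# The path here is an assumption for illustration, not a documented CLI.
START_BACKUP_CMD = r'"C:\Scripts\start-veeam-job.cmd"'


def register_boot_retry_task(task_name: str = "RetryBackupAfterBoot") -> None:
    """Register a Windows Scheduled Task that runs the backup script at boot."""
    subprocess.run(
        [
            "schtasks", "/Create",
            "/TN", task_name,          # task name
            "/TR", START_BACKUP_CMD,   # command to run
            "/SC", "ONSTART",          # trigger: at system startup
            "/RU", "SYSTEM",           # run as SYSTEM so no user logon is needed
            "/F",                      # overwrite if the task already exists
        ],
        check=True,
    )


if __name__ == "__main__":
    register_boot_retry_task()
```

And even once that's in place, all it does is start the job again from scratch; it doesn't make Veeam resume the partial backup.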

Ultimately, I suppose the main thing is that the number of tweaks, workarounds, and unusual settings I need to make Veeam work is growing significantly, and I know from experience with other applications that this just isn't necessary for typical use. Acronis, for instance, doesn't have the poor design of deleting partial backups; it recognizes that some data is more useful than no data. It also has more complete remote management capabilities that would let me manage things like Scheduled Tasks if I wanted to, though that wouldn't even be necessary, since the problem the Scheduled Task is meant to solve wouldn't have been a problem in the first place.

Just a couple of examples.

And for anyone saying "the customer's internet just needs to be better if they want to back up to the cloud": that goes directly against Veeam's entire marketing of "just works with whatever hardware you have". I'm not saying Veeam should somehow never lose the connection; that's going to happen in any scenario. I would never assume in any environment, no matter how expensive, premium, or well set up, that it will never fail. The issue comes down to how the software handles connection problems.

As for the "transport 1,000,000 gallons of water through a busted pipe" analogy, my thinking is: yes, you can. It will take longer than with a good pipe, but you can still get 1,000,000 gallons of water to the destination eventually. This assumes a "busted pipe" isn't the same as a "completely disconnected pipe"; obviously if the connection were completely non-existent, that would be a whole other scenario.

With Veeam, it seems to just give up, at least under some circumstances. To make that worse, the next time it runs it decides whatever it transferred before is now useless, deletes it, and transfers the same unmodified data all over again, which drastically worsens the problem, because the computer keeps trying again and again to transfer the same data and never finishes anything. It's the equivalent of: we tried to transport 1,000,000 gallons of water and only half made it, so let's dump out that half and try again from the beginning.

For direct comparison with my preferred software (from past experience), Acronis: it simply handles failures better, retries more consistently when there's a connection problem, and doesn't delete a partial backup and transfer the same data over and over. Acronis is more the equivalent of: we tried to transport 1,000,000 gallons of water, only half made it, so let's send the remaining 500,000 gallons. It comes down to how the software handles it, and Veeam doesn't handle it well in my experience.
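To put the difference in concrete terms, the behavior I'm describing is basically a resume-from-last-confirmed-offset loop. Here's a generic sketch of the idea (not how either product is actually implemented; the transport calls are placeholders):

```python
import time

CHUNK = 4 * 1024 * 1024  # 4 MiB per send, arbitrary for the example


def transfer_with_resume(path: str, send_chunk, get_remote_size, retries: int = 10) -> None:
    """Keep retrying from wherever the destination confirms it got to,
    instead of throwing away the partial transfer after every failure.

    send_chunk(offset, data) and get_remote_size() stand in for whatever
    transport the real product uses; they are placeholders.
    """
    for attempt in range(retries):
        try:
            offset = get_remote_size()  # ask how much already arrived
            with open(path, "rb") as f:
                f.seek(offset)
                while True:
                    data = f.read(CHUNK)
                    if not data:
                        return  # everything made it
                    send_chunk(offset, data)
                    offset += len(data)
        except ConnectionError:
            # back off, then pick up where the last confirmed chunk left off
            time.sleep(min(2 ** attempt, 300))
    raise RuntimeError("transfer did not complete within the retry budget")
```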


I have a new client I'm STRUGGLING with, trying to get VCC backups direct to Wasabi. Their internet connection is only a 25 Mbps upload, and they have employees logging into their VPN/terminal server from around the world. We've set up QoS policies so that backups at least don't affect the VPN, but to top it off, their file server is 1.4 TB. Backups are MISERABLY slow, when I can even get the initial level-0 seed to complete; I've twice tried to push the file server through without a successful completion. I originally advised the client to have on-premises storage to get backups started, but they wanted no CapEx, so it's BaaS only.

Fortunately, I found out this morning that they are looking to upgrade to a fiber line with much more throughput, because their business cable modem connection just isn't cutting it. I've been trying to get a good backup for at least three weeks. I'm now trying to break up the file server backup into separate jobs, split by volume or even by folder, so that I can at least get some of the backups out to object storage and completed. But I've quickly learned my lesson: make sure the client has a sufficient internet connection to support backups direct to object storage.

I do feel like there are a few deficiencies in the Veeam Agent for Windows when used in this manner, as you noted, mainly around setting throttling schedules and dealing with poor connections. It works great to local repositories, or to the cloud with a sufficient internet connection, but it struggles when the connection is poor.

In all, I've encountered the same issues as you, and for now the only recommendation I have is to upgrade the ISP if possible and look at using an on-premises VBR server rather than just the Veeam Agent. In theory you can get creative with the backup jobs, but I haven't had enough success yet to state that as fact, though I am hopeful. Even with some great design changes in the Agent, poor internet connections are going to cause issues; VBR just handles them better than the Veeam Agent can.


As a service provider, we regularly have customers with unstable internet connections for some or all of their devices: some on slow DSL connections, some on cellular hotspots, and some with high-speed connections that still fail from time to time.

 

Hi @BackupBytesTim

Please forgive me if this explanation comes across as rude, but the quoted paragraph reads to me like asking: "Can I transport 1,000,000 litres of water through a busted pipe?"

When clients want to use a cloud service, they need a stable connection. During my designs, I collect information that lets me calculate the internet bandwidth required according to the RPO.
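As a rough example of that sizing math (my own simplification, not a Veeam formula): the uplink you need is roughly the changed data per backup cycle divided by the window available to move it, plus some overhead.

```python
def required_mbps(changed_gb_per_cycle: float, window_hours: float, overhead: float = 1.3) -> float:
    """Rough uplink needed to move one backup cycle's changed data in time.

    overhead pads for protocol overhead and retries; 1.3 is an arbitrary
    assumption on my part, not a vendor figure.
    """
    bits = changed_gb_per_cycle * 8 * 1000 ** 3   # GB -> bits (decimal GB)
    seconds = window_hours * 3600
    return bits * overhead / seconds / 1_000_000  # -> megabits per second


# e.g. 60 GB of daily change and an 8-hour overnight window:
print(f"{required_mbps(60, 8):.1f} Mbps")  # ~22 Mbps of sustained upload
```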

In my view, your approach to troubleshooting this is very wrong.

My 2c

 


Are your questions around Veeam Cloud Connect? There is a way to set up the gateways using DNS so they load-balance and reconnect customers. We don't typically have this issue, as our clients tend to have fairly decent ISP connections.
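The general pattern is roughly this (a generic sketch of DNS-based failover, not Veeam's actual Cloud Connect logic; the hostname and port are made-up examples): publish one gateway name with multiple A records and have the client walk through the resolved addresses until one accepts the connection.

```python
import socket

GATEWAY_HOST = "cc-gw.example-provider.com"  # hypothetical name with several A records
GATEWAY_PORT = 6180                          # example port, assumption only


def connect_to_any_gateway(host: str, port: int, timeout: float = 10.0) -> socket.socket:
    """Resolve every address behind the hostname and try them in turn."""
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(sockaddr)   # first gateway that answers wins
            return sock
        except OSError as err:
            last_error = err         # that gateway is down, try the next record
    raise ConnectionError(f"no gateway behind {host} accepted a connection") from last_error
```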

