
Howdy, 

 

I am trying to figure out why my application-aware backups aren't working as expected. I am backing up MS SQL Server, including transaction logs. In the statistics → details overview, there are plenty of these errors:

12/5/2024 2:10:27 PM :: Failed to connect to guest agent via GIP, failing over to guest runtime components through VIX via GIP
12/5/2024 2:10:51 PM :: Failed to inject guest runtime components via GIP, failing over to guest agent connection via GIP

For the backup job, on the application-aware processing screen, "Test credentials" succeeds for both of my database servers (i.e. ends up with a green circle with a white check mark), however the T-log backup is incredibly slow.

The environment is as follows: 

  • The Veeam server isn't domain-joined; the database servers are.
  • The databases in question run in Basic Always On availability groups.
  • The account used to connect to the servers is both a local admin and a sysadmin on the SQL Servers.
  • The domain account used by Veeam has a user profile created on both database servers (I logged on with this account once).

Any hint where to look for clues is appreciated 👍

Check this post on the forums, as it has some information that may help - Failed to inject guest runtime components - R&D Forums. It covers things like:

  • VMware Tools not updated
  • GIP (Guest Interaction Proxy) connection to the server being backed up
  • Etc.


Another KB on troubleshooting VIX - KB1788: Credentials Test or Job Fails when attempting to use VIX


Thanks Chris, both articles surfaced in my search. After allowing TCP 2500-3300 from the backup server (the GIP and the backup server are the same machine) on the SQL servers, the GIP failures are now gone; the problem has moved to a different phase, though.

Most of the time, the log backup job is stuck on "waiting for log shipping server". I have also allowed ports 2500-3300 in the opposite direction (from the MS SQL servers to Veeam) on the network firewall. Yet "waiting for log shipping server" remains by far the longest operation in the backup:
 


The backup server and the guest VMs are in different subnets, but still in one LAN. 
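For anyone hitting the same thing: a quick way to confirm the firewall rules actually took effect is to probe the TCP 2500-3300 range from each side before re-running the job. This is a minimal sketch in Python using only the standard library; the hostname is a placeholder, not from the thread, and the port range is Veeam's default dynamic range for data transfer mentioned above.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False

def scan_range(host: str, first: int, last: int) -> list[int]:
    """List the ports in [first, last] that accepted a TCP connection."""
    return [p for p in range(first, last + 1) if port_open(host, p)]

# Example (replace the hostname with your own SQL guest):
# open_ports = scan_range("sqlserver01.example.local", 2500, 3300)
```

Note that a port only shows as open while something is actually listening on it, so an empty result during idle time is normal; the useful signal is "connection refused" (firewall passes traffic, nothing listening) versus a timeout (firewall likely still blocking).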


Great to hear you got part of this working with the firewall ports. Not great to hear about the other issue with the log shipping server. This is from the help documents about log shipping servers; not sure if it will help - https://helpcenter.veeam.com/docs/backup/vsphere/sql_backup_log_shipping.html?ver=120

Thank you for another interesting link; I had checked this as well before posting to the forum. What seems to have fixed the issue I originally wrote about is allowing TCP ports 2500-3300 on the SQL server guests. A few hours after that, without any other change, I saw Veeam operations improve: no long hangs on log shipping or any other activity, and all servers in the log backup job are now mostly in the "Pending" state instead of "In Progress" as they were yesterday.

Therefore - problem solved. 

Thank you!


Great to hear that it is now solved with the help of the articles and opening the ports on the firewall. Always a nice thing on a Friday. 😋🤣

