
Stuck jobs with Veeam Agent for Solaris


rfsantos

Good afternoon, everyone. How's it going? I hope all is well.

 

I'm sharing a case that gave us quite a headache and that we finally managed to resolve.

 

When running a backup job for some Solaris servers configured via Policy, the processed data counters would increase, but at a certain point no more data was copied and the session would get "stuck".

On the VBR side, there were no signs of failures or alerts.

 

In the log written to /var/log/veeam/Backup/Session_, we identified the following behavior:

 

[08.01.2025 14:02:20.184] <      22> vmb    |  pex: 3229026850046/3229026850046/1499056647566/1729970202480/399583889261 / 44/18/5/0/36/72
[08.01.2025 14:02:20.184] <      22> vmb    | Session progress: 82%; processed: [3229026850046/3923059916566] read: [1729970202480], transfer: [399583889261] speed: 138368138 bottleneck: 44/18/0/72
[08.01.2025 14:02:30.202] <      22> vmb    |  pex: 3230803529109/3230803529109/1500833326629/1729970202480/399583889261 / 44/18/5/0/36/72
[08.01.2025 14:02:30.203] <      22> vmb    | Session progress: 82%; processed: [3230803529109/3923059916566] read: [1729970202480], transfer: [399583889261] speed: 138257356 bottleneck: 44/18/0/72
[08.01.2025 14:02:40.241] <      22> vmb    |  pex: 3232706744930/3232706744930/1502736540906/1729970204024/399810716974 / 44/17/5/0/36/72
[08.01.2025 14:02:40.241] <      22> vmb    | Session progress: 82%; processed: [3232706744930/3923059916566] read: [1729970204024], transfer: [399810716974] speed: 138146521 bottleneck: 44/17/0/72
[08.01.2025 14:02:50.381] <      22> vmb    |  pex: 3235304790100/3235304790100/1505332415044/1729972375056/399920152196 / 44/17/5/0/36/72
[08.01.2025 14:02:50.381] <      22> vmb    | Session progress: 82%; processed: [3235304790100/3923059916566] read: [1729972375056], transfer: [399920152196] speed: 138034934 bottleneck: 44/17/0/72
[08.01.2025 14:03:00.600] <      22> vmb    |  pex: 3235890469660/3235890469660/1505905841602/1729984628058/399986116939 / 44/17/5/0/36/72
[08.01.2025 14:03:00.600] <      22> vmb    | Session progress: 82%; processed: [3235890469660/3923059916566] read: [1729984628058], transfer: [399986116939] speed: 137923452 bottleneck: 44/17/0/72
[08.01.2025 14:03:02.797] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 14:10:32.817] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 14:18:02.844] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 14:25:32.872] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 14:33:02.901] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 14:40:32.930] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 14:48:02.956] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 14:55:32.988] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 15:03:03.017] <       5> vmb    | Lease keeper: sending keep-alive request.
[08.01.2025 15:10:33.058] <       5> vmb    | Lease keeper: sending keep-alive request.

Everything indicates that, at some point, communication between the servers was lost, and the "sending keep-alive request" messages were keeping the backup session alive even though no more data was flowing.
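
If you want to spot this state quickly, the pattern above can be checked from the shell: a session that keeps logging keep-alives but stops logging progress is the symptom. A minimal sketch, assuming the session logs live under /var/log/veeam/Backup/ with the Session_ prefix shown above (the full file name is truncated in this post, so the glob is an assumption):

# Pick the most recent session log (Session_* glob is an assumption)
LOG=$(ls -t /var/log/veeam/Backup/Session_* | head -1)
# A stuck session shows keep-alive lines but no new "Session progress" lines
tail -200 "$LOG" | grep -c "Session progress"
tail -200 "$LOG" | grep -c "sending keep-alive request"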

 

The fix was to set the following parameters in /etc/veeam/veeam.ini on the Solaris server (these parameters are to be ADDED, KEEPING the entries already registered there):

 

[filelevel]
StorageDeviceSize = 3145728

[backup]
useAcceleratedBackupAlgorithm = false

[connectionSecurity]
vcfgConnectionAttempts = 60
vcfgReconnectInterval = 1

[job]
retriesCount = 3
retriesInterval = 600

[reconnects]
enabled = true
overallTimeout = 1800000
 

After saving, restart the agent service:

svcadm restart veeamsvc
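
To confirm the service came back online after the restart, the standard Solaris SMF tools can be used (this check is my addition, not part of the original fix):

# Check the service state; "online" means the restart succeeded
svcs veeamsvc
# If it is not online, show diagnostics for the failure
svcs -x veeamsvc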

 

Keep in mind that these settings were tested and validated on Veeam Agent for Solaris version 4.5.0.1616, the latest available at the time of writing for Veeam Backup & Replication 12.2.
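
If you need to confirm which agent version you are running, the agent's veeamconfig CLI can report it (assuming the Solaris agent exposes the same -v flag as the Linux agent; treat this as an assumption):

# Print the installed agent version (flag assumed to match the Linux agent CLI)
veeamconfig -v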

 

I hope this helps!
