
VBR v13 PowerShell + Tape: Why Your Reports Suddenly Fail (and How to Fix Them)

  • January 27, 2026
  • 1 comment
  • 50 views

Michael Melter

TL;DR

With Veeam Backup & Replication v13, every PowerShell cmdlet now goes through the same service layer used by the REST API and Web UI. That layer enforces request throttling. The tape cmdlets are extremely chatty, so when scripts call them in quick succession (as most reporting scripts do), you’ll hit TooManyRequests.


Fix summary: Wrap tape calls with a retry/backoff helper, prefetch once (no calls inside loops), and serialize tape inventory access (don’t overlap reports, rescans, and tape jobs).

What Changed in v13?

Veeam v13 moved PowerShell off the old .NET Framework RPC channel and onto modern .NET with a service backend shared with the REST API and Web UI. The upside: one consistent platform. The downside: you now inherit API-style throttling that didn’t exist in v12.

  • Symptom: Import-Module works, RBAC is fine, the first few tape calls may succeed, but bulk/sequential tape calls fail with TooManyRequests.
  • Important: This is rate limiting, not permissions. Being Local Admin + Veeam Admin doesn’t bypass it because throttling happens after successful authentication.

Who’s affected?

  • Anyone running tape inventory/reporting at scale
  • Larger environments with many pools/vaults/tapes/drives
  • Popular monitoring/reporting scripts — e.g., Marco Horstmann’s MyVeeamReport — may fail out of the box on v13 because of these new limits

Why tape cmdlets fail first

The following are the worst offenders:

  • Get-VBRTapeMediaPool
  • Get-VBRTapeVault
  • Get-VBRTapeMedium
  • Get-VBRTapeLibrary
  • Get-VBRTapeDrive

Each of these tends to expand objects, hit multiple inventory endpoints, and recursively pull state/relationships. Calling them back-to-back—as most reports do—trips the throttle almost instantly.

Reproducing the Problem (Quick)

Even a simple loop will often trigger it in v13:

1..20 | ForEach-Object { Get-VBRTapeMedium | Out-Null }
# Sooner than you expect: TooManyRequests

You’ll see exceptions mentioning 429 or TooManyRequests. That’s the service throttle doing its job.

The Fix (Required in v13)

1. Add throttling / backoff around tape calls

Drop-in helper function:

function Invoke-VeeamSafe {
    param (
        [Parameter(Mandatory=$true)]
        [scriptblock]$Command,
        [int]$Retry = 5,
        [int]$DelaySeconds = 2
    )

    for ($i = 1; $i -le $Retry; $i++) {
        try {
            return & $Command
        } catch {
            $msg = $_.Exception.Message
            if ($msg -match 'TooManyRequests|429') {
                # Linear backoff; you may switch to exponential if needed
                Start-Sleep -Seconds ($DelaySeconds * $i)
            } else {
                throw
            }
        }
    }

    throw "Veeam API throttling not cleared after $Retry attempts."
}

Then replace direct tape calls:

$mediaPools  = Invoke-VeeamSafe { Get-VBRTapeMediaPool }
$mediaVaults = Invoke-VeeamSafe { Get-VBRTapeVault }
$mediaTapes  = Invoke-VeeamSafe { Get-VBRTapeMedium }
$mediaLibs   = Invoke-VeeamSafe { Get-VBRTapeLibrary }
$mediaDrives = Invoke-VeeamSafe { Get-VBRTapeDrive }

This simple wrapper with backoff stops the flapping in v13.

Tip: If you still hit limits in very large environments, increase -Retry and -DelaySeconds or switch to exponential backoff: Start-Sleep -Seconds ([int][math]::Pow(2,$i))
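For very large environments, an exponential variant with a small random jitter spreads retries out even further. This is a sketch building on the helper above; the function name, the 60-second cap, and the jitter range are my own choices, not anything Veeam prescribes:

```powershell
function Invoke-VeeamSafeExp {
    param (
        [Parameter(Mandatory=$true)]
        [scriptblock]$Command,
        [int]$Retry = 8
    )

    for ($i = 1; $i -le $Retry; $i++) {
        try {
            return & $Command
        } catch {
            if ($_.Exception.Message -match 'TooManyRequests|429') {
                # 2, 4, 8, 16... seconds, capped at 60, plus up to 1s of jitter
                # so parallel scripts don't all retry at the same moment
                $delay = [math]::Min([math]::Pow(2, $i), 60) +
                         ((Get-Random -Minimum 0 -Maximum 1000) / 1000)
                Write-Warning "Throttled (attempt $i/$Retry); backing off $([math]::Round($delay,1))s"
                Start-Sleep -Milliseconds ([int]($delay * 1000))
            } else {
                throw
            }
        }
    }

    throw "Veeam API throttling not cleared after $Retry attempts."
}
```

Usage is identical to the linear helper: `$Tapes = Invoke-VeeamSafeExp { Get-VBRTapeMedium }`.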

 

2. Avoid repeated calls inside loops (critical)

Bad (v12 style):

foreach ($pool in Get-VBRTapeMediaPool) {
    $tapes = Get-VBRTapeMedium | Where-Object { $_.MediaPoolId -eq $pool.Id }
}

Good (v13-safe):

$allPools = Invoke-VeeamSafe { Get-VBRTapeMediaPool }
$allTapes = Invoke-VeeamSafe { Get-VBRTapeMedium }

foreach ($pool in $allPools) {
    $tapes = $allTapes | Where-Object { $_.MediaPoolId -eq $pool.Id }
}
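With thousands of tapes, the Where-Object scan inside the loop is also O(pools × tapes). Grouping the prefetched tapes by pool once makes each lookup cheap; this is a standard Group-Object pattern, nothing Veeam-specific:

```powershell
$allPools = Invoke-VeeamSafe { Get-VBRTapeMediaPool }
$allTapes = Invoke-VeeamSafe { Get-VBRTapeMedium }

# Build a pool-id -> tapes lookup ONCE instead of rescanning $allTapes per pool
$tapesByPool = $allTapes | Group-Object -Property MediaPoolId -AsHashTable -AsString

foreach ($pool in $allPools) {
    # $null when the pool has no tapes
    $tapes = $tapesByPool["$($pool.Id)"]
}
```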

 

3. Serialize tape inventory access

Don’t run the following at the same time:

  • Tape reports / exports
  • Tape jobs
  • Inventory rescans

v13 throttling is enforced per service, not per user session. Parallelizing tape-heavy tasks won’t go faster; it will trigger throttling sooner.
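If reports can be kicked off from more than one place (scheduled task plus ad-hoc runs, for example), a named system-wide mutex is one way to guarantee only one tape-heavy script talks to the service at a time. A minimal sketch; the mutex name and the 30-minute wait are arbitrary choices:

```powershell
# 'Global\' prefix makes the mutex visible to all sessions on this host
$mutex = New-Object System.Threading.Mutex($false, 'Global\VeeamTapeReport')
$owned = $false
try {
    # Wait up to 30 minutes for any other tape-heavy script to finish
    $owned = $mutex.WaitOne([TimeSpan]::FromMinutes(30))
    if (-not $owned) {
        throw 'Another tape report is still running; aborting.'
    }

    # ... gather tape inventory here ...
} finally {
    if ($owned) { $mutex.ReleaseMutex() }
    $mutex.Dispose()
}
```

Note this only serializes your own scripts; tape jobs and rescans started from the console still need to be scheduled around.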

 

 

Quick Patch Example for Reporting Scripts

If your script (e.g., MyVeeamReport) gathers tape details, add the helper function near the top and swap direct calls. For example:

Before:

$Pools  = Get-VBRTapeMediaPool
$Vaults = Get-VBRTapeVault
$Tapes  = Get-VBRTapeMedium
$Libs   = Get-VBRTapeLibrary
$Drives = Get-VBRTapeDrive

After (v13-safe):

# Add Invoke-VeeamSafe helper first (see above)

$Pools  = Invoke-VeeamSafe { Get-VBRTapeMediaPool }
$Vaults = Invoke-VeeamSafe { Get-VBRTapeVault }
$Tapes  = Invoke-VeeamSafe { Get-VBRTapeMedium }
$Libs   = Invoke-VeeamSafe { Get-VBRTapeLibrary }
$Drives = Invoke-VeeamSafe { Get-VBRTapeDrive }

And if you filter or correlate, do it in-memory — don’t call the cmdlet again inside loops.

FAQ

“We’re Local Admins and Veeam Admins—why do we still get 429?”
Because RBAC is evaluated before the rate limiter, and the throttle is applied after successful authentication. This is not a permissions problem; it’s service-side rate limiting.

“Can I disable throttling?”
Not that I’m aware of. The supported approach is to implement backoff/retry and reduce unnecessary calls by prefetching.

“Does switching to PowerShell 5 vs 7 change this?”
No. The behavior comes from the service backend, not the host PowerShell version.

“Why did this work in v12?”
v12 PowerShell used a different tech path and did not enforce the same service-layer throttling that v13 now applies.

Operational Tips

  • Schedule tape reports outside of tape jobs / rescans.
  • Batch your inventory requests — don’t interleave with live tape operations.
  • If your environment is very large, start with -Retry 8 -DelaySeconds 3 on the helper.
  • Log when you hit throttling (e.g., write a warning) so you can tune backoff if needed.
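For the last point, a one-line warning inside the helper’s catch block is usually enough. This is a hypothetical log line for Invoke-VeeamSafe, placed just before the Start-Sleep call:

```powershell
# Emits e.g. "WARNING: 2026-01-27T09:15:02  throttled on attempt 2/5; backing off 4s"
Write-Warning ("{0:s}  throttled on attempt {1}/{2}; backing off {3}s" -f `
    (Get-Date), $i, $Retry, ($DelaySeconds * $i))
```

Grepping your report logs for these warnings tells you quickly whether the defaults are enough or whether you need a longer backoff.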

 

 

Impact on Community Scripts (and Thanks)

A lot of us, myself included, rely on community tools to keep tape estates sane. In v13, scripts like Marco Horstmann’s excellent MyVeeamReport can run into TooManyRequests purely because v13 now enforces service-layer throttling.

Adding the backoff helper and prefetch pattern restores stability without rewriting the world. I recently hit exactly this situation in a large customer environment with heavy tape usage and resolved it there in the way described above.

Closing

v13’s unified service layer is a good architectural move — but it means we must code like API citizens: retry with backoff, fetch once, and avoid noisy loops. Make those small changes, and your tape reports will be boring again—in the best possible way. 😉

1 comment

Chris.Childerhose
  • Veeam Legend, Veeam Vanguard
  • January 27, 2026

Excellent article and script Michael.  Tape is such a weird thing but anything that makes it easier is great.