“The requested operation could not be completed due to a file system limitation”: what file system is your storage device using? I’m guessing it’s not NTFS or ReFS?
Both source and destination are NTFS.
How big is the allocation unit size on both of them? And how big is the file when it fails?
Are you using Data Deduplication at all @dclaar ?
If so, see this KB: https://www.veeam.com/kb1893
This error can still happen with NTFS if the drive is using a small cluster size, like 4 KB. Veeam recommends 64 KB clusters for better performance with large backup files. You can check the cluster size using fsutil fsinfo ntfsinfo E:. If it shows 4096, consider reformatting the drive with 64 KB clusters. Also make sure there's enough free space, no antivirus interference, and the drive is healthy.
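For example, here is a quick check from PowerShell (with E: standing in for your repository volume; “Bytes Per Cluster” is the allocation unit size, so 4096 means 4 KB and 65536 means 64 KB):
fsutil fsinfo ntfsinfo E: | Select-String "Bytes Per Cluster"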
To Matheus’s point... a good explanation for this was given 10 years ago in a forum post.
https://forums.veeam.com/veeam-backup-replication-f2/2012-r2-dedupe-issue-t23656.html
The .vbk file left behind is 816 GB. Both source (Data) and destination (2025-07) have a 4096-byte allocation unit size.
7/6/2025 3:10:14 PM :: Windows (C:) (476.3 GB) 322.1 GB read at 36 MB/s
7/6/2025 5:45:41 PM :: Data (D:) (1.8 TB) 635.9 GB read at 41 MB/s
Get-Volume | Format-List AllocationUnitSize, FileSystemLabel
AllocationUnitSize : 4096
FileSystemLabel : Data
AllocationUnitSize : 4096
FileSystemLabel : 2025-07
First try also ended at about the same size:
7/5/2025 8:03:13 PM :: Data (D:) (1.8 TB) 636.4 GB read at 43 MB/s
Since both source and destination volumes are NTFS with 4K allocation unit size, the file system limitation error might not be related to block size itself.
In similar cases, I’ve seen issues caused by underlying disk health problems such as bad sectors, by antivirus interference, or by third-party software locking the backup file.
It’s worth checking Windows Event Viewer for disk or system errors around the time of the failure, and trying a test backup to a different volume or external disk to isolate the issue.
Also check the drive’s SMART health.
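If it helps, something like this pulls basic health from PowerShell (Get-StorageReliabilityCounter exposes SMART-style counters, though many USB enclosures don’t pass them through, so blank values there don’t prove anything):
Get-PhysicalDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-PhysicalDisk | Get-StorageReliabilityCounter |
    Select-Object DeviceId, Temperature, ReadErrorsTotal, ReadErrorsUncorrected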
I don’t have dedup on, or even installed:
PS C:\WINDOWS\system32> Get-DedupStatus
Get-DedupStatus : The term 'Get-DedupStatus' is not recognized as the name of a cmdlet, function, script file, or operable program.
KB1893 seems to describe a different error, about flushing file buffers.
The 10-year-old post talks about running into issues above 1 TB, and I’m not there. Also, the destination drive is otherwise empty, so I wouldn’t think Windows would necessarily break up the file for the heck of it, but it _is_ Windows...
I’ll try reformatting the backup drive with larger clusters to see if that works. Since the only things there will be huge files, it shouldn’t waste too much space. Obviously, it will be a while before I can report back on those results!
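For anyone following along, this is the kind of one-liner I have in mind (careful: it wipes the volume; the drive letter and label are just mine, and the -UseLargeFRS switch is something I’ve seen suggested for this exact error elsewhere, not something from this thread):
# Destroys all data on E:. 64 KB clusters; -UseLargeFRS enables large file record segments.
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -UseLargeFRS -NewFileSystemLabel "2025-07"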
I guess another option might be to back up the two drives separately?
For reference, on a different 3 TB drive in the same external USB case with the same allocation size, I was able to create a 1.2 TB file:
1,218,392,273,408 D_VOL-b003.spf
But obviously that’s completely different software, so that may not mean anything.
C:\WINDOWS\system32>wmic diskdrive get model,name,serialnumber,status
Model                        Name                SerialNumber  Status
Sabrent Disk Dev USB Device  \\.\PHYSICALDRIVE2  0000WSDN5MV5  OK
Micron_1100_MTFDDAV512TBN    \\.\PHYSICALDRIVE0  170815FFD7BC  OK
ST2000LX001-1RG174           \\.\PHYSICALDRIVE1  WDZDJXY4      OK
Thanks for the help so far!
Sorry, another question from me: how much space is consumed on your source drives? I’m wondering whether VSS has enough space to hold the snapshot.
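You can check how much shadow copy storage is allocated and in use from an elevated prompt:
vssadmin list shadowstorage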
In the event logs, the only two NTFS errors are for drive F:, from the day before: that was the ISO for the Veeam backup product (which I installed before finding out that Agent was easier, and recommended).
In Administrative events, the Veeam Agent error is separated from any other error or warning by at least 30 minutes.
At 22:08:23, Veeam Agent logs an informational event: “'greatful' restore point has been created.”
At 22:08:28, Veeam Agent fails with the write error.
There are no hardware events.
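In case anyone wants to repeat the check, something like this pulls errors and warnings around the failure window (the date here is a placeholder; point it at the night of the failing run):
# Critical/Error/Warning events in the System log for the chosen window.
$start = Get-Date '2025-07-06 21:30'
$end   = Get-Date '2025-07-06 23:00'
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 1,2,3; StartTime = $start; EndTime = $end } |
    Select-Object TimeCreated, ProviderName, Id, LevelDisplayName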
They’re not that close to full, I think?
PS C:\WINDOWS\system32> Get-PSDrive

Name   Used (GB)  Free (GB)  Provider     Root  CurrentLocation
----   ---------  ---------  --------     ----  ---------------
Alias                        Alias
C         369.05     107.28  FileSystem   C:\   WINDOWS\system32
Cert                         Certificate  \
D        1361.48     501.41  FileSystem   D:\
E         816.80    1977.72  FileSystem   E:\
Oh. I may be an idiot.
Bringing up the backup job, I see that I have “Include external USB drives” checked.
That’s right in line with finishing D and starting to try and back up E … to E!
(I thought the box said “exclude”, but it says “include”)
Ohhh!!! I think you found your issue?…. 
Thank you to everyone for jumping in and helping!!!
No problem...just talking through it with others helps sometimes. Glad you got it sorted 
It was a great answer!
...just not the right one. With the box unchecked, it actually failed a bit sooner.
I’m going to try a different disk to rule out the media.
7/7/2025 5:31:23 PM :: Data (D:) (1.8 TB) 547.6 GB read at 31 MB/s
7/7/2025 10:31:26 PM :: Error: Shared memory connection was closed. Failed to upload disk. Skipped arguments: [<EmulatedDiskSpec>]; Agent failed to process method {DataTransfer.SyncDisk}. Exception from server: The requested operation could not be completed due to a file system limitation Failed to write data to the file [E:\VeeamBackup\greatful\Job greatful\Job greatful2025-07-07T150232.vbk]. Failed to download disk '251db751-6938-460c-8b6d-c34dc909537e'
It worked!
I changed from a 3 TB Seagate Barracuda ST3000DM001 (date 13075, August 22, 2012) to a 3 TB Seagate Desktop HDD, also a ST3000DM001 (date 16523, June 27, 2016).
7/8/2025 9:39:36 AM :: Windows (C:) (476.3 GB) 316.4 GB read at 225 MB/s
7/8/2025 10:03:42 AM :: Data (D:) (1.8 TB) 1.1 TB read at 96 MB/s
In theory, these are exactly the same disk model (7200 RPM), but recall that the old one was showing massively slower rates:
7/6/2025 3:10:14 PM :: Windows (C:) (476.3 GB) 322.1 GB read at 36 MB/s
7/6/2025 5:45:41 PM :: Data (D:) (1.8 TB) 635.9 GB read at 41 MB/s
So I guess I need to buy a new backup disk! Note that SMART thinks the disk is fine, as does chkdsk, but clearly something is wrong with it.
Well...glad you now finally got it sorted @dclaar
Thanks for sharing what worked for you.