Replies posted by ddomask
Ah, okay, now I do get you; indeed, I was being too literal with the term "Task," as it has a specific internal meaning, and I got hung up on that. And indeed, a search parameter in the section you want would be useful. As for sorting by date, you can actually select all jobs of the same type with Shift+Click, then right-click, and it will produce a date-sorted report of each session. I get that it's some manual clicking, but would it help here? Obviously there's also PowerShell, where it's quite simple, but I'm guessing you want to avoid that.
Well, no, I get you, but right now there isn't a task-specific filter. However, any job with a task in a warning/failed state will also be flagged similarly, so it more or less does the same thing, and you can then order based on status. I realize it's not a _full_ filter, but it's probably the fastest way to sort items. I take your point, though, that you want the UI to reflect tasks, not just job-level items, even if the job level indicates whether specific tasks need action.
@chetantikandar, can you post where you got those limitations from? These are not restrictions for Scale-Out Repository (especially the SMB/NFS part -- you can absolutely mix and match; it's just not recommended). The current limitations are listed here: https://helpcenter.veeam.com/docs/backup/vsphere/limitations-for-sobr.html?ver=110

@StevenMSTech, regrettably a support case is probably the best way to go. I have a strong suspicion it's about the spaces in the name, which are not supported on SOBR; when adding, it will try to correct this automatically, but in various edge cases there's a chance it might not work. It's best to open a support case with the logs: reproduce the issue, note the date and time of the test, and export logs as per https://veeam.com/kb1832. Use the 3rd radio option and select the VBR server itself for export. This should be enough to start the investigation -- depending on the circumstances, some more information might be needed.
Let's start small: the ability to filter in Console, and I mean filter, not search -- like only for errors or warnings. Search the content of task logs in Console, instead of running PowerShell scripts to search my 60+ GB of logs for when error xyz occurred last time.

@Ralf, doesn't Views already do the first one? :) https://helpcenter.veeam.com/docs/backup/vsphere/job_filter.html?ver=110 You can just set a custom view that only shows errors and warnings. As for the log-diving part, I get what you're saying, but I'd guess that knowing the log structure would probably be more useful and make the searching a lot easier. Not sure if a basic log-diving course is something people are interested in, but it's definitely a subject I can talk about at length. (P.S., I hope you're using ripgrep instead of native PowerShell text tools :D PowerShell itself is not built well for text munging, but ripgrep can be installed via Chocolatey and gives you grep superpowers even in PowerShell.)
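To illustrate the log-diving idea, here's a minimal sketch of searching a folder of logs for an error string from a shell. The folder, file name, and log lines below are invented sample data for the demo; the commented `rg` line shows the equivalent ripgrep call.

```shell
# Create a tiny stand-in for a job log directory (sample data only).
mkdir -p /tmp/log_demo
printf '%s\n' \
  '[01.03.2022 02:15:11] <01> Info   Job started' \
  '[01.03.2022 02:17:42] <01> Error  Failed to connect to repository xyz' \
  '[01.03.2022 02:18:03] <01> Info   Job finished' \
  > /tmp/log_demo/Job.Backup.log

# Recursive, case-insensitive search with line numbers.
# With ripgrep the equivalent is: rg -in 'error' /tmp/log_demo
grep -rin 'error' /tmp/log_demo
```

On a real log export you'd point this at the extracted log folder and search for the error code or object name you care about instead.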
FWIW, initially the new naming schema had the random hash salt first and the date last, but this broke sorting on basically every OS, so based on feedback it was switched so that sorting works. If there's a specific use case for the salt being logical in some way, don't hesitate to share it; it will definitely be considered (not necessarily implemented, though!). We just need to know how you're using it -- the reason for checking on the salt isn't immediately clear to me, but maybe I'm just not being creative enough.
@JMeixner Doesn't matter :) A Media Set is purely a _logical construction_. If your goal is to make recovery from your vault as simple as possible, then yes, not having a full would negate this, but in terms of functionality it will work just fine.

To elaborate, let me repeat how I explain it to engineers during training: Think back to when you were a kid; if you were a dork like me, you recorded your Saturday morning cartoons on VHS so you could watch them again in the future when boring afternoon shows were on. Now, you have a bit of an organization problem here, because you're a kid and don't have a lot of money, so you decide to just buy tapes as you need them/can afford them and fill them to the end. This means each tape has multiple Saturdays and really disconnected lists of cartoons. You use your tapes effectively, but it's very hard to find the specific episodes/Saturdays you want to watch again. Or, you somehow have enough money to buy as many tapes as you want, and each Saturday gets it
@Leo A. To answer your question, it's just a quirk of how Windows handles DST and the LastModifiedDate attribute. You can see here that it's not unique to Veeam; it's just a peculiarity of Windows: https://qa.social.msdn.microsoft.com/Forums/vstudio/en-US/333e4402-291b-4dd1-aa9d-2df13cdb50e6/problem-with-file-dates-created-during-daylight-savings-time?forum=netfxbcl (tl;dr: dates and times are hard :) Microsoft agrees.) Long story short, it's solvable, and there are plans to address it, but probably not until at least v12. (Disclaimer: this is not saying it _will_ be fixed in v12; just that, with the major overhaul to the file to tape engine, I don't think there will be a change until then.)
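As a quick, non-Veeam-specific illustration of the underlying quirk, here's a sketch (assuming GNU `date` and an arbitrary DST-observing zone): the same UTC instant maps to different local offsets depending on whether DST was in effect, which is exactly the one-hour drift people notice in file modification times.

```shell
# Render the same style of UTC timestamp in a DST-observing zone,
# once in winter (standard time) and once in summer (daylight time).
winter_offset=$(TZ=America/New_York date -d '2022-01-15 12:00 UTC' +%z)
summer_offset=$(TZ=America/New_York date -d '2022-07-15 12:00 UTC' +%z)

printf '%s\n' "winter: $winter_offset"   # -0500 (EST)
printf '%s\n' "summer: $summer_offset"   # -0400 (EDT)
```

A timestamp recorded under one offset but displayed under the other appears shifted by an hour, even though the underlying instant never changed.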
Hi @SwissAndreas, please contact Veeam Support for the script. Please note that this will not retroactively fix anything; it will just prevent the next run (if it's affected) from doing the undesired behavior. You would need to roll back the database using a Configuration Backup from before the affected job run in order to apply the script in a meaningful way. File to tape is getting a major overhaul in v12, and I'm hopeful the new engine should avoid this, but I'm not sure on the status of the fix. It's not about MSSQL vs. other instances; it's just about the peculiarities of DST. (Lifehack: just "move" your computer to a locale without DST ;) Joking of course, but it would likely work.)
@chinchilla, no, a new media set does not require a full backup. Media sets are a logical way of organizing your tapes; they have little (if anything) to do with the data on the tape. So if you're doing increments, it will continue to do increments, but on the new media set. Now, for planning purposes, if you are going to be making new media sets, there might be some logic in running a full when the media set switches so that recovery is much simpler. With your current media set settings, it shouldn't force a change except in those situations listed above. So it should be fine, but do consider that human error might result in a new media set, and it might be easier, management-wise, to run a full after that so the number of tapes to retrieve from the vault stays controllable. But this is purely for your convenience; it's not a requirement :) So it's your call.
@Chris.Childerhose I can take a crack at this, as I mostly have a report ready for this on a normal VBR server; I just need to see if there are any "considerations" for VCC. Can you share an example of the output you want to see in a report? (E.g., just write it out by hand.) I'm having a bit of trouble imagining what your business rules would be, so before I commit too much time I want to make sure I'm on the same page as you. Most of the data _likely_ is in the CTaskSession objects from Get-VBRTaskSession, but if you can share what you imagine the end result looking like, this shouldn't take too long.
Hi all, I peeked at the backend: we just run updatedb -V and parse the version output for mlocate. That's likely why it's a bit grumpy even if you symlink to plocate, since naturally there's no mlocate string in its output. I'm sorry, but I'm not sure there will be a workaround for this until likely v12 (subject to change, of course; I'm not sure whether this is addressed yet or not).
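To make the detection issue concrete, here's a sketch of that style of check with made-up version banners (the real product parses the actual `updatedb -V` output; the banner strings below are illustrative, not exact):

```shell
# Simulated `updatedb -V` banners; real output varies by distro/version.
mlocate_banner='updatedb (mlocate) 0.26'
plocate_banner='plocate 1.1.15'

has_mlocate() {
  # The check passes only when the version output mentions "mlocate",
  # which is why plocate symlinked as updatedb still fails it.
  printf '%s\n' "$1" | grep -q 'mlocate'
}

has_mlocate "$mlocate_banner" && echo "mlocate detected"
has_mlocate "$plocate_banner" || echo "plocate rejected"
```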
Just some input -- the 100% best improvement you could make would be to put a Gateway/Repository server next to the QNAP on the remote site and connect the QNAP via one of:

- iSCSI mount to the server as a Repository Server. You can even get away with just Windows 10/11 here, as long as it's sized appropriately for the concurrent tasks: https://helpcenter.veeam.com/docs/backup/vsphere/system_requirements.html?ver=110#backup-repository-server
- SMB/NFS share mounted to the gateway server. A cool advantage here is that you can use Linux for an NFS gateway and save a Windows license cost.

The reason having a Repository Server/Gateway on the DR site helps is that the Veeam datamover agents get deployed there, so the topology changes. I'm guessing previously with NFS/SMB the gateway was on the production site, right? So you have the WAN in between, and any sensitivity there breaks the connection because of how these protocols work. With the datamover agent, Veeam has resiliency built in, doubly so for Backup
Hey @Tommy Armando, a few points to help you design your strategy:

- Consider the total volume of data that needs to be moved to tape and your LTO generation -- this will heavily determine your tape usage and tape-out strategy.
- To use tapes most efficiently, it's typically recommended to set Media Set creation to "Do not create, always continue" so that the previous tape is appended. You can force new media set creation in a few ways: https://helpcenter.veeam.com/docs/backup/vsphere/tape_media_sets.html?ver=110
- If you're offlining the previous tape (vaulting it, I suppose), you can manually force media set creation that way to get distinct media sets that correlate to your tape rotation. This might be a great fit for you.

The big question is: do you need the incrementals from your source jobs, or are you fine with Archival Points? If it's the latter, then a GFS media pool ought to suit you pretty nicely, and you can just set Weekly points for your tape-out goals. GFS does have a Daily media set, bu
@VeeamRBL Some notes on Azure File Shares:

- To this date, it's still not clear to me whether Azure File Shares actually "chunk out" the data to Blob or not. Microsoft states that it keeps the data on a storage account, so I assume this is the case, but it's a black-box technology, which makes it hard to say supported vs. unsupported. If the data actually gets chunked to blobs, it's not supported, as the only official way to use Blob storage is via Capacity Tier (in v12, you can go direct to Azure Blob, which is preferable).
- In actual environments I've reviewed with Azure File Shares, the performance has been awful. Check the file limits here: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-scale-targets#file-scale-targets Don't be misled by IOPS -- they are a pure fantasy figure for all intents and purposes, and only real-world baseline tests will help you predict actual performance. Even within the Azure data center, you're restricted by load, and you still might have cross-regi
All, a few notes on Validator versus Health Check:

- Functionally, both work about the same in terms of what they do.
- Health Check is a lot less manual.
- As of v11a, Health Check has a substantial speed boost over Validator, as it received an async engine in v11a (5 streams instead of 1), so it should almost always be more performant. (I've seen cases where the new Health Check was so performant it brought pretty beastly storage systems to their knees while it ran.)

In general, the main difference is what you're trying to actually prove. Running both "once" is not enough, because the window of time you're checking for the stability of the backup is quite short, and the only conclusion you can safely derive from it is: "At the time I ran the validation, it did not detect corruption." Validator has the flexibility of checking specific points at will, while Health Check has the automation and speed advantages. I would put Validator into the category of on-demand checks, while Health
@ddomask just for testing purposes / knowledge. Yeah, what you say is right, but in a disaster recovery case, the piece of hardware I use to start shouldn't matter. If we think about a backup file, it should not be tied to the OS. VBR confirms it works. I even found this thread on the forum: https://forums.veeam.com/veeam-agent-for-windows-f33/restore-from-encrypted-backup-unsupported-vbm-format-t73026.html Hey, it's not a criticism of Veeam, I absolutely love it, but I don't get why this limitation exists.

But it does matter :) As you see, backup data isn't just "dumb" data; there's a lot of OS-specific metadata to tell us how to restore the blocks. That thread doesn't really tell me it works, tbh -- it's just about the same error message. Instant Recovery works because we process the backup metadata and, depending on the OS (and its configuration), we do a lot of black magic to make it work ;) Not taking any of it as criticism, just trying to get you in the right direction :) So y
Hi @marcofabbri, do I understand correctly that you made a Windows-based Recovery Media and are trying to use it to recover a Linux backup? This won't work; the Windows Recovery Media (WRM) is based on the Windows Recovery Environment, and as such it doesn't know what to do with Linux -- it's basically Windows PE (Preinstallation Environment). Similarly, Linux backups use a specific Linux boot appliance (probably the wrong word, but close enough), and it expects to have Linux. As far as I know there's no cross-recovery boot loader, so just create a Linux recovery media from the Veeam server and you'll be good to go. Is the goal just to reduce the number of Recovery Media to maintain?
All, by default both Veeam and all public cloud providers use TLS 1.2 for communication; unless you manually set a lower level on the Azure (or AWS/whatever) side, it will use TLS 1.2 out of the box without any changes needed. I would not recommend changing AzureConcurrentTaskLimit unless directed by Support.

@Anandu, if the issue is intermittent (that is, it runs fine for a while and then randomly throws this), please open a Support case and refer to issue 350539. You must also include a log export from the VBR server: https://veeam.com/kb1832. Use the 3rd option, select the VBR server, and upload the logs to the case for review.
I know of just 2 situations where I needed direct SQL access to get work done: one for getting the absolute path of backup files located in a SOBR, the second for querying backup files on tape. For everything else, I highly recommend not using direct SQL access -- especially for changing something. That is something Veeam support should do.

This one isn't entirely true, but I agree it's not clear if you don't know how SOBR paths work. SOBR uses relative paths, as a given backup file may be on any extent; to avoid constant lookups to the infrastructure/DB, the paths are stored as relative. I'll share what's in my private notes document for engineers, plus an example:

# Storage Paths (Backup paths)
* Scale-out Repositories (SOBR) and non-SOBR repos have different paths
  * Non-SOBR repos use the full path on the system, which can be retrieved with just the path for the Storage Object
  * SOBR uses relative paths for the Storage, which need to be constructed with:
    * Extent.GetPath()
    * MetaPath property
    * Storage Nam
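As a toy illustration of the relative-path idea -- all values below are invented stand-ins, not real property output -- the full path on a SOBR extent is just the extent root joined with the stored relative pieces:

```shell
# Invented example values standing in for the object properties above:
extent_root='E:\Backups'         # e.g. what Extent.GetPath() returns
meta_path='Backup Job 1'         # e.g. the MetaPath property
storage_name='Backup Job 1.vbk'  # the backup file's name

# Join them with Windows path separators to get the absolute path:
full_path="${extent_root}\\${meta_path}\\${storage_name}"
printf '%s\n' "$full_path"   # E:\Backups\Backup Job 1\Backup Job 1.vbk
```

A non-SOBR repository skips this construction entirely, since the full path is stored directly on the Storage object.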
Just my input as a support person... I don't. Seriously, feeds and aggregators just add another task list, which I don't need :) When you understand how technology tends to work and the basics behind the scenes of most tech, it's a lot easier to stay on top of it, because new tech announcements become less about the technology and more about the nuances and how it differs from the original project/RFC/whatever.

We deal with this in Support a ton, as every environment we face is truly unique; in one case the entire environment is a few Windows/Linux servers backed up by agents; in the next case you're dealing with a VMware environment across 4 continents, 2000 VMs in the job, and some custom-built storage array to cram it all into. There's just too much to constantly chase every new article or tech announcement (and in my opinion, most such announcements are too "fluffy" to be of use anyway).

When I was on Support Tier 1, I ingested the information in the following way: First give myself