Join discussions around Veeam community projects, Veeam events, industry and technology news
Hi guys, first of all, I hope this is not a duplicate post. I would like to discuss the best options for DR solutions for VMware on-prem environments. Working for a small-medium SP, I can use our datacenters as the target for Veeam Cloud Connect replicas. Sometimes it is difficult to make a large amount of resources (CPU, RAM, storage, bandwidth...) available to the customer at short notice. Furthermore, in my opinion it is not the best approach in terms of cost optimization and resource efficiency. One solution might be to use public clouds as a target, but with VMware the options are narrow. What do you think about it?
Hi all. As you all know, Microsoft is deprecating basic authentication from the first of October. So if you are using basic authentication in VBO365, it will no longer work. Personally, we never use basic authentication, only modern authentication. Only when the customer is using public folders do we activate the legacy protocols option. What will happen with the backups of public folders after the deprecation of basic authentication when using modern authentication with legacy protocols? I assume that the backups will still work, except for the public folders. Am I correct? If so, what is the solution or alternative? Migrating the public folders to a shared mailbox or SharePoint Online, so a backup is then possible? What if the customer still wants to use public folders? I know that Microsoft wants to get rid of public folders and is now pushing this 🤔. I already read the article on the Veeam forum, but there is no real solution or feedback from Microsoft yet: Public folder backup after October 1st
Perhaps the Service Provider section of the R&D Forums is a better place to ask, but I figured I'd check here to see if anyone has an easy button for resetting MFA on an AD login (specified by AD group) for an Admin in the Service Provider Console. I reloaded my phone and the Duo app didn't import my 3rd-party MFA logins. Fortunately, I have other admin logins I can use. So far what I've gotten is that you apparently have to use the REST API (which seems silly to me if that's the case; feature request coming up…) because the user only exists in an AD group that was specified for access. While I'm not great with APIs, the Swagger UI should make things easier. That said, I haven't yet figured out how to authenticate with Swagger using an account that has MFA enabled. I might be able to get it once I get past that hump. I was hoping that if I disabled the MFA requirements, I could log in and reset MFA from the user access, but it appears to prompt for MFA once enrolled even if the requirement is disabled.
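For anyone hitting the same Swagger authentication hump: a minimal sketch of a two-step token exchange against the VSPC REST API. The endpoint shape and field names here are assumptions based on a typical OAuth2-style flow and should be verified against your own instance's Swagger spec before relying on them.

```python
# Hedged sketch: authenticating to the Veeam Service Provider Console
# REST API with an MFA-enabled account. Endpoint and field names are
# assumptions based on an OAuth2-style token flow -- verify them against
# your instance's Swagger spec before use.
import requests

BASE = "https://vspc.example.com:1280/api/v3"  # hypothetical host/port

# Step 1: exchange username/password. For MFA-enabled accounts the
# response is expected to contain a short-lived mfa_token instead of
# a usable bearer token.
r = requests.post(f"{BASE}/token", data={
    "grant_type": "password",
    "username": "DOMAIN\\admin",
    "password": "secret",
}, verify=False)  # self-signed certs are common on VSPC; pin in production
r.raise_for_status()
body = r.json()

if "mfa_token" in body:
    # Step 2 (assumption): send the TOTP code from the authenticator app
    # together with the mfa_token to finish the exchange.
    r = requests.post(f"{BASE}/token", data={
        "grant_type": "mfa",
        "mfa_token": body["mfa_token"],
        "code": input("TOTP code: "),
    }, verify=False)
    r.raise_for_status()
    body = r.json()

token = body["access_token"]
print("Authenticated; paste this as 'Bearer <token>' in Swagger's Authorize box.")
```

Once you have the bearer token, the Swagger UI's Authorize button should accept it, which gets you past the MFA prompt for the rest of the session.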
After a long time fighting with it, I just disabled the ping test and things worked pretty well. It appears that when the VMs boot, unless you are watching the console and click OK to the network discovery prompt, the test will fail. If you do, they work fine and stay up. They all have static IPs with auto MAC addresses from VMware. I did notice SureBackup gives them a new MAC, which could be causing this, but I'm not about to change hundreds of VMs to static MACs in VMware, as that's not practical even if I wanted to. To skip all the time wasted troubleshooting: it appears that when the VM boots with the new MAC, it was hitting the private network firewall profile instead of the domain network profile. Odd, because it's still "on the domain" with the DCs in my Veeam lab. A GPO allowing inbound traffic from the specified networks of my machines on the private profile as well as the domain profile has solved the issue, and the ping test is now working, along with RDP, without having to click the Windows notification. That was a pain, but I'm happy it is fixed.
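If you'd rather script the same fix per-guest instead of via GPO, here is a minimal sketch that adds the equivalent inbound rules with netsh. The subnet is a hypothetical virtual-lab network, so substitute your own SureBackup/lab range.

```python
# Minimal sketch: allow inbound ICMP (ping) and RDP on the Private and
# Domain firewall profiles, mirroring the GPO described above. The
# 192.168.100.0/24 range is a hypothetical virtual-lab subnet -- replace
# it with yours. Run elevated inside the guest.
import subprocess

LAB_NET = "192.168.100.0/24"  # assumption: your SureBackup/lab subnet

rules = [
    # (rule name, protocol spec, local port or None for ICMP)
    ("Allow_Lab_Ping", "icmpv4:8,any", None),   # ICMP echo request
    ("Allow_Lab_RDP",  "tcp",          "3389"),
]

for name, proto, port in rules:
    cmd = [
        "netsh", "advfirewall", "firewall", "add", "rule",
        f"name={name}", "dir=in", "action=allow",
        f"protocol={proto}", f"remoteip={LAB_NET}",
        "profile=private,domain",
    ]
    if port:
        cmd.append(f"localport={port}")
    subprocess.run(cmd, check=True)
```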
When creating agents/policies in the VCSP console you cannot control aspects of the agents such as logging! It would be wise to allow configuration of the agents, specifically the logging, from a central location; otherwise we constantly need to patch things and run all sorts of automations to make sure the logfiles stay manageable.
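For context, this is the kind of automation we end up running today: a small pruning script on each agent host. The log path and retention window below are assumptions (Veeam Agent for Windows typically logs under C:\ProgramData\Veeam), so adjust both for your estate.

```python
# Sketch of the workaround automation: prune agent log files older than a
# retention window. LOG_ROOT is an assumption -- Veeam Agent for Windows
# typically writes logs under C:\ProgramData\Veeam; verify on your hosts.
import time
from pathlib import Path

LOG_ROOT = Path(r"C:\ProgramData\Veeam")  # assumption: default log root
KEEP_DAYS = 30
cutoff = time.time() - KEEP_DAYS * 86400

for log in LOG_ROOT.rglob("*.log"):
    if log.stat().st_mtime < cutoff:
        print(f"removing {log}")
        log.unlink(missing_ok=True)
```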
Hi everybody, I’ve got a private instance (without support) of a Veeam Agent backup. The thing is that the latest VIB in the chain got deleted, and now the backup starts over with a VBK… I’ve tried deleting the backup job, creating a new one, and mapping the existing backup, but it makes no difference. I’ve also tried removing the VBM file, but it still starts with a VBK. Has anyone got an idea how to continue with that chain?
Anyone have experience with a SOBR using a dedupe device, then offloading to the capacity tier? So local backups land in the performance tier of the SOBR, but are then moved/copied to the capacity tier. Best practice with a dedupe appliance is to disable Veeam inline dedupe, so when moving/copying data to the capacity tier, is dedupe still disabled?
A data integrity checksum error occurred. Data in the file stream is corrupt. - Synology NAS, REFS, RDM
Had a client contact me this morning who is getting corruption errors on their backup copy jobs to the performance tier of their SOBR, but data appears to be making it out to their capacity tier, Wasabi, as far as they know (I didn't check available restore points on that tier, but the data is definitely not sitting on the performance tier when I look at that filesystem). Background configuration: in both their primary and secondary datacenters they have a Synology NAS connected to their Windows repo server as an RDM disk presented via iSCSI to their ESXi hosts; the volume is formatted ReFS, 64K blocks, the usual. I realize that NASes are less than desirable, and my standard procedure now is to no longer use ReFS when using a NAS as the backing array, but this is where we're at. I will point out that another copy job, not going to a SOBR but to the same NAS, is copying data successfully. I can't say for sure off the top of my head if it's the same volume or a different one.
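One quick diagnostic for this class of "data integrity checksum error" on ReFS is to ask the filesystem itself which files have integrity streams enabled. A hedged sketch below shells out to the Windows Storage module's Get-FileIntegrity cmdlet; the repository path is hypothetical.

```python
# Hedged diagnostic sketch: list ReFS integrity-stream status for files on
# the repository volume via the Storage module's Get-FileIntegrity cmdlet.
# The path is hypothetical -- point it at the affected repo. Run elevated.
import subprocess

REPO = r"R:\VeeamBackups"  # assumption: the ReFS repository path

ps = (
    f"Get-ChildItem -Path '{REPO}' -Recurse -File | "
    "Get-FileIntegrity | "
    "Select-Object FileName, Enabled, Enforced | "
    "Format-Table -AutoSize"
)
subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
```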
Veeam has just published a new KB that details how to set up AWS PrivateLink for moving data to Capacity/Archive tiers. Check it out - KB4226: How to offload backup files to Capacity and Archive Tiers via AWS PrivateLink (veeam.com)
As we’re getting close to the end of the year, I’m curious if anyone has heard anything about a new VMCE exam. I passed the 2021 exam in December last year. From my recollection, I thought the exam only lasts one year. But if there’s no new exam, or one is released very close to the end of the year, isn’t that cutting it a bit too close for those of us who need/want to renew? Thanks.
Hello, our data backup policy is as follows:
- incremental backup from Monday to Friday to a disk repository
- full backup every week to the same disk repository
- a second copy to tape is performed for all full data stored on the disk repository
Now we only want to start a duplication to tape of the full backup data generated by the jobs started on the 1st-2nd of each month. How can I achieve this data backup policy? Thanks for your help!
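One low-tech approach, sketched below: run a small gate script daily (e.g. from Task Scheduler) that only kicks off the tape job during the month-start window. The job name is hypothetical, and the assumption that the tape job can be fetched with Get-VBRTapeJob and started with Start-VBRJob should be checked against your VBR PowerShell version.

```python
# Sketch: gate the backup-to-tape job on the day of the month, so only the
# fulls produced by jobs started on the 1st-2nd get duplicated to tape.
# The job name is hypothetical; Get-VBRTapeJob / Start-VBRJob are standard
# Veeam PowerShell cmdlets, but verify the pipeline on your VBR version.
import datetime
import subprocess

TAPE_JOB = "Monthly Fulls to Tape"  # assumption: your backup-to-tape job

today = datetime.date.today()
# Fulls run weekly, so a 1st-3rd window catches the full from jobs started
# on the 1st or 2nd even if the tape copy lags by a day.
if today.day in (1, 2, 3):
    ps = f"Get-VBRTapeJob -Name '{TAPE_JOB}' | Start-VBRJob"
    subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
else:
    print(f"{today}: not a month-start window, skipping tape duplication")
```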
Hi all, we are running a Veeam Backup for Microsoft 365 environment. We are using NTFS volumes formatted with a 4KB cluster size, per Veeam best practices (see https://bp.veeam.com/vbo/guide/buildconfig/proxy-repo.html). Every job has its own repository, and multiple repositories reside on the same NTFS LUN. Now we are running into the problem that some LUNs are reaching 16TB; due to the 4KB cluster size we are not able to expand further. We now need to migrate some of the repositories, and I have two questions about that:
- Should we migrate to LUNs with a bigger cluster size? What are the side effects of that?
- What's your preferred way to migrate? The best way I know is to copy the repo contents with robocopy (see the sketch below), create a new repo pointing to the new location, and then change the backup job target to the new repo. Unfortunately that implies stopping the Veeam services while copying, because otherwise all of the Jet DBs are locked… Is there a better way? Or will there be in v6?
Thank you for your input!
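A hedged sketch of that robocopy migration, for reference: stop the VB365 services so the Jet databases are released, mirror the repository to the new LUN, then restart. The paths are hypothetical, and the service name "Veeam.Archiver.Service" is an assumption for the main VB365 service — verify with Get-Service, and stop the proxy service too if it runs on the same box.

```python
# Hedged sketch of the repo migration: stop VB365 services, mirror the
# repository folder to the new LUN with robocopy, restart. Paths are
# hypothetical; the service name is an assumption -- verify locally.
import subprocess

SRC = r"D:\VBO-Repo01"      # hypothetical: old 4KB-cluster LUN
DST = r"E:\VBO-Repo01"      # hypothetical: new LUN (e.g. larger clusters)
SERVICE = "Veeam.Archiver.Service"  # assumption: main VB365 service name

subprocess.run(["net", "stop", SERVICE], check=True)
try:
    # /MIR mirrors the tree; /COPY:DAT keeps data, attributes, timestamps;
    # /R:1 /W:1 avoids long stalls on locked files (there should be none
    # while the services are down).
    subprocess.run([
        "robocopy", SRC, DST,
        "/MIR", "/COPY:DAT", "/DCOPY:T", "/R:1", "/W:1", "/NP",
    ], check=False)  # robocopy exit codes below 8 mean success
finally:
    subprocess.run(["net", "start", SERVICE], check=True)
print("Now create a repo at the new path and retarget the job in VB365.")
```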
We are around two weeks out from me starting the long journey to Prague for my first #veeam100summit. Covid spoiled the last couple of chances, so I am super excited my flights are booked! Trying to be brave, and I will likely regret the amount of work this is going to be, but I thought for those not going, or who aspire to apply for the Vanguards or push to become a Legend, this might give you an idea of what it is all about. So, what do you want to see? I would love your ideas for things you may want to see, people you may want to hear from, places or landmarks in Prague, food, etc. Ping them below; I'll try to get as many covered as possible. See you for episode #1 in a wee while.
For those of you that were looking for ways to convert physical to virtual or V2V, the new VMware vCenter Converter Standalone 6.3 went GA today along with the initial release of vSphere 8. You can find the release notes here - VMware vCenter Converter Standalone 6.3 Release Notes
How can I identify VMs that were never properly configured for backups, or that somehow aren't being backed up at the frequency intended?
Create a knowledge graph with data from your Veeam backup servers in order to verify that backups were configured and running for the intended VMs. For example, the data could be compared via query against data in your IT asset management system that defines which machines are supposed to be protected. You may find it helpful to also have your VMware data within your graph.
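Short of a full knowledge graph, a flat-file version of the same comparison works too: diff the VM inventory against the VMs Veeam has restore points for. The file names and column headers below are hypothetical — export them however you like, e.g. with PowerCLI's Get-VM and Veeam's Get-VBRRestorePoint, or from your CMDB.

```python
# Sketch: report VMs that exist in inventory but have no Veeam restore
# points (unprotected), and restore points for VMs that no longer exist
# (stale). CSV names and columns are hypothetical exports.
import csv

def names(path, column):
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

inventory = names("vcenter_vms.csv", "Name")            # all VMs that exist
protected = names("veeam_restorepoints.csv", "VmName")  # VMs with backups

for vm in sorted(inventory - protected):
    print(f"UNPROTECTED: {vm}")
for vm in sorted(protected - inventory):
    print(f"STALE BACKUP (VM no longer in inventory): {vm}")
```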
Here is something that I don't see enough of. First, what is the fastest or average job speed you have? If you could follow it up with some hardware specs or config design, that would be super helpful too. Also: what is the single best thing you have done to improve your Veeam performance? I average around 1GB/s on most jobs. I have hit 2GB/s, and the smaller incrementals sometimes hit between 100-300MB/s, but only for a few minutes. I feel I get decent performance, but was wondering if anyone is hitting significantly higher speeds on their backups? My source SANs are good: IBM V7000s and FS7200 FlashSystems. My Veeam SAN is an older IBM V7000 with all spinning disk, which could probably use an upgrade, but performance doesn't change much backing up to SSD for me. I have all fiber, 16Gb for everything but 8Gb for tape, as that is the LTO8 max. I created about 8 volumes on the SAN as per IBM best practices to use more of the cores on the controllers, but it's doubtful it makes a difference.
I recently ran into an issue where a few of our SOBRs offloading to OBS (added using a gateway server for access) reported that we were out of disk space, which couldn't be right, as we have petabytes of object storage. I investigated the logs and worked with support, and we found the issue was on the GW server, which by default uses the C:\Windows\Temp folder to stage data before offloading to the object store. It is nice to see there is now a KB article that shows how this can be fixed in three ways. If you have this issue, check it out here - KB4283: Scale-Out Backup Repository Offload task fails with "There is not enough space on the disk" (veeam.com). The options are:
- Increase the space on the GW server - so your C: drive
- Move to another GW server with more space on its C: drive
- Change the VeeamBackupTemp location to another drive with lots of space
I have two data center sites, one in Indianapolis and the other in Olympia. We have Dell Data Domains at both sites as our backup storage. I am currently in the process of moving our Cloud Connect storage from an old HYDRAstor to the Dell Data Domain. With the HYDRAstor I was able to seed backup copy data from a portable NAS directly onto the Cloud Connect repository, as it was presented as a CIFS share. The Data Domain uses something called DD Boost, and I'm not sure how to run a copy job to import that initial backup copy data onto it. Has anyone else been able to successfully seed backup data onto a Data Domain? I need to:
1. run the initial backup copy to a local/portable NAS on the customer's site
2. disable the job
3. ship the portable NAS to our data center
4. copy the backup files onto the Data Domain configured with DD Boost
5. update the backup copy job to use the cloud repository
6. enable the job to run
7. confirm the job is running successfully and call it done
Step 4 is where I'm stuck.
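Not an answer to the DD Boost transport itself, but once the seed files do land on the Data Domain repository (e.g. via a BoostFS mount or another supported copy path), a verification pass before rescanning the repo and remapping the copy job can save a failed first run. Both paths in this sketch are hypothetical.

```python
# Sketch for step 4's sanity check: confirm the seeded files on the Data
# Domain match the portable NAS byte-for-byte before rescanning the repo
# and mapping the backup copy job. Both paths are hypothetical; DST
# assumes something like a BoostFS mount exposing the storage unit.
import hashlib
from pathlib import Path

SRC = Path(r"\\portable-nas\seed")   # hypothetical: seed from customer site
DST = Path(r"X:\dd-cloudconnect")    # hypothetical: mounted DD storage unit

def sha256(p, chunk=1 << 20):
    h = hashlib.sha256()
    with open(p, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

for src_file in (p for p in SRC.rglob("*") if p.is_file()):
    dst_file = DST / src_file.relative_to(SRC)
    ok = dst_file.exists() and sha256(src_file) == sha256(dst_file)
    print(f"{'OK ' if ok else 'BAD'} {src_file.name}")
```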
I was thinking about how to collect those "epic phrases" you have heard regarding backup justifications. The worst ones in my personal experience were:
- We don't need that, I usually create a copy on my desktop!
- Backups are for weak IT, we use snapshots…
Please share your best phrases to have fun, and share those "crazy moments" with a bit of humor! 😂
I wish to build a new HOMELAB. Here is a guide, one of many I found on Reddit: https://www.altaro.com/vmware/perfect-homelab-vmware/ (I love the first approach in this guide). How is your lab set up? What would you recommend? The main virtualisation solution would be VMware (I would need multiple hosts here). I would also have Hyper-V and Proxmox VE within this environment. Yes, this is possible! I have got my ideas, but would like to learn from you. Keep in mind, I need a cost-effective solution, from hardware to power etc. This is a talking point about the design only; I do NOT need setup (deployment) tips, as that part is a walkthrough for me. Just ideas on how to set up a low-cost but effective lab, yet a lab to be reckoned with.
What is the best compression or dedupe ratio you have received with Veeam? Feel free to expand on the type of job, compression settings, and type of VM (SQL, app, web server, etc.). If it is a file server, what type of data does it host? For SQL I can get up to 7x dedupe and 4.5x compression. I have a few application servers around 20x deduplication in my environment as well. The highest application server I have, with an internal DB, hit 46.8x dedupe and 1.9x compression, which is pretty crazy; the full backup was 13GB on a pretty large VM, haha. I have a few big file servers (30+TB) that get 4.7x dedupe and 1.7x compression, for example. Pretty good to see 30TB shrunk down to 4TB or so. I also have some 30+TB VMs that get 1.1x dedupe and 1.0x compression. All videos, however.