News, guidelines and various community projects
Hi, we are moving a lot of environments to the Azure cloud, and we already have 150 Blob storage containers configured for backups using Azure Backup. The problem is that it offers only 300 days of retention, and I need more for LTR and granular backups (daily, weekly, monthly, yearly) for compliance and governance purposes. We are studying a move of the backup workloads to VAB and VBR in Azure, but we feel that this feature for Blob storage containers is missing in the Veeam tools for Azure. This number of Blob storage containers is just the beginning, because applications moved to Azure will use their own individual storage accounts instead of a centralized file server for persistent data. Thanks a lot, and we hope you can help us with this request.
I just want to say how grateful I am for the Veeam culture, the way they operate as a company, and their extensive involvement in the Veeam community. I have also used a competing product extensively, and I feel largely ignored by them; I have the deep-seated impression that that company's engineers couldn't care less about customers' ideas and concerns, believe they know better than the customer, and simply ignore 99% of what is suggested to them, all while seeming to have the development acumen and abilities of first-year college students. Meanwhile, a company like Veeam, a giant and a powerhouse, goes out of its way not only to make an amazing product, but is also incredibly involved with its client community, includes customers in the decision-making process through the Veeam Legends and Vanguard programs, and makes sure its customers know they are heard and cared about. I have so much respect for @Gostev, @Rick Vanover and the entire Veeam team.
Hello Veeam community! I have been trying to determine the best approach for migrating backups from one object storage bucket to another and need some help. My customer has a large amount of backup data, in the hundreds of TBs, and a bucket copy would take 30 days, which is too long. They also don't have the space to pull it all back down locally and copy it to the new bucket, so that option won't work. I am going to have them set up a new capacity tier in a SOBR and point new jobs at the new bucket. The issue is that the old backups in the old bucket are left orphaned. I need to move some, but not all, of the backups due to retention policy. I was trying to determine which backup objects are related to which backup jobs, but I can't determine the backup_id via PowerShell. My thinking is that this would allow me to "grab" the backups that need to be moved to the new bucket. Does anyone have a helpful tip on how to get this accomplished? Thank you!
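A minimal sketch of one way to start on this, assuming the Veeam PowerShell module is loaded on the VBR server; treat the property names as something to verify against your version rather than a confirmed procedure:

# Sketch: list every backup with the job it belongs to and its restore points,
# so the backups tied to specific jobs can be picked out for migration.
foreach ($backup in Get-VBRBackup) {
    $points = Get-VBRRestorePoint -Backup $backup
    [PSCustomObject]@{
        Backup        = $backup.Name
        JobId         = $backup.JobId       # ties the backup back to its job
        RestorePoints = $points.Count
        Newest        = ($points | Sort-Object CreationTime | Select-Object -Last 1).CreationTime
    }
}

From there, filtering on JobId (or Name) should give you the set of backups whose objects still need to move to the new bucket.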
Over the last several years, Microsoft has been moving its services into Microsoft 365. With this migration they have introduced many new features into the ecosystem to make day-to-day communication and processes smoother. In the same transition, Microsoft has also moved to deprecate legacy technologies in favor of newer features that are purpose-built for the ecosystem and modern data communication. We are now in the second deprecation cycle of legacy technologies in Microsoft 365: the first was when on-prem services did not make it to the cloud, and now legacy APIs are deprecated as of January 31, 2023. It's important to note that deprecated does not mean these services are necessarily going to be shut off, but further support by Microsoft will end, and third-party access to these services will be limited, if available at all. In this case, services in Microsoft 365 that did not make the cut, by Microsoft, to be coded into the Microsoft Graph API with modern APIs will no longer be supported.
Hey guys, I have two questions. First, for Veeam Backup for Nutanix: after I install the Nutanix plugin on VBR, will Veeam set up a Nutanix proxy VM, or should we set up the proxy VM in the Nutanix cluster ourselves? Second: if VBR backs up to a repository and later the VBR server crashes, and I deploy another VBR server and add the repository that holds the backups, will this new VBR be able to access or see the stored backups?
The sequel to part 2 😉 And it seems that a part 4 is also needed 🙄

Backup copy job:
- VBR of course creates a full backup file the first time, on the currently attached disk.
- When you swap the disk and it is empty, it will create a full backup on it.
- If there is already a backup chain on it for this job, it will create a new incremental in the current chain. The latest incremental is the starting point of the new incremental.
- The retention policy will be applied, so all backup files that are outdated will be removed from the backup chain.

Because I always use a backup copy job in combination with rotated drives as an offline/air-gapped solution, I always recommend the customer swap the disks on a weekly basis. Therefore I mostly configure the job with 7 to 10 restore points, meaning 1 full and 6 to 9 incrementals. I don't care about the total wanted restore points; I look at the number of restore points per disk. I'll tell you later why. Normally the disks chosen have a capacity of roughly 2 times a full backup.
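To make that 2x rule of thumb concrete, here is a quick back-of-the-envelope calculation in PowerShell; the 1 TB full and the ~10% daily change rate are assumed example numbers, not figures from the post:

# Rough sizing sketch with assumed numbers: one full plus up to 9 incrementals.
$fullTB     = 1.0    # assumed size of a full backup, in TB
$increments = 9      # worst case from "1 full and 6 to 9 incrementals"
$changeRate = 0.10   # assumed daily change rate
$neededTB   = $fullTB + ($increments * $fullTB * $changeRate)
"Each rotated disk needs about {0:N1} TB" -f $neededTB   # ~1.9 TB, i.e. roughly 2x the full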
Morning, just a quick one from me today that I wanted to share. I was playing with VDRO and scratching my head over restore plans, because I couldn't find a very crucial step: 'Prepare DC for DataLab'. It's documented here: Prepare DC for DataLab - Veeam Disaster Recovery Orchestrator User Guide. I'll quote the key text as to why we need this: "This step is required for a domain controller to be started in a test lab environment. This must always be the first step for the domain controller in a lab group." This step ensures the VM will reboot to exit DSRM (Directory Services Restore Mode) and will therefore function correctly as a domain controller in the lab. Seems pretty important, right? Well, when I went to create my restore plan, I couldn't find it; even the Veeam documentation doesn't show it within the list. After a lot of head scratching, it turns out I was missing something that is a bit light in the Help Center documentation: you need to define this at the DataLabs level. Once added, the step shows up when building the restore plan.
Hi, I have set up Kasten K10 on a MicroK8s single-node cluster on a local server to back up my Kubernetes cluster. Everything from a backup perspective is working fine so far, but I have a problem with the ingress and the ACME challenge with Let's Encrypt. The ACME challenge works for my other services, but I can't get it working with Kasten K10.

k10-ingress:

spec:
  ingressClassName: public
  rules:
    - host: kasten.dummy.com
      http:
        paths:
          - backend:
              service:
                name: gateway
                port:
                  number: 8000
            path: /k10
            pathType: Prefix
  tls:
    - hosts:
        - kasten.dummy.com
      secretName: secret-kasten.dummy.com

Error on my ingress pod:

[error] 2102#2102: *88192 upstream timed out (110: Operation timed out) while connecting to upstream, client: 192.168.1.1, server: kasten.dummy.com, request: "GET /.well-known/acme-challenge/<challenge-code> HTTP/1.1", upstream: "http://10.1.206.218:8089/.well-known/acme-challenge/<challenge-code>", host: "kasten.dummy.com"
For those of you who do O365 backups with Veeam: v6 was released today. You can find information and download links at the following KB article: KB4286: Release Information for Veeam Backup for Microsoft 365 6.0. Be sure to read the upgrade notes, as there are some gotchas there.
Hey folks, I just had a quick call with Kasten to help me find the issue with my snapshotting. The issue was not a Kasten problem. Kasten's great tool Kubestr, which I posted about here earlier, told me that I had a problem, which I immediately saw when trying to run a backup with Kasten. In a quick call, Kasten support very efficiently found the issue with my snapshot controller (or rather the lack of one, for some strange reason), and we downloaded the manifests again; problem solved. Just thought I had to mention this, since great support makes our lives so much easier!!
Running Veeam B&R (11a, I believe, and fairly current). I was advised by our Veeam rep to use a backup copy job to accomplish what I was trying to do. Basically: we have a daily backup job to disk with 60 days of restore points and weekly full backups on Saturdays. I want to move a copy of ONLY the weekly full from this job to a different datastore that is part of a scale-out repository, specifically to use the backup copy job's ability to apply GFS retention to what it copies. Unfortunately, the scheduling options for the periodic backup copy job leave a lot to be desired. The only option you have is "run this job every X days at Y time" - that's it. Considering that normal backup jobs get a wide variety of scheduling options (most importantly in this case "run on a specific day of the week" or "run after a specific backup job"), it seems odd that none of these extremely useful options are available for backup copy jobs. Instead, the only option is "run every 7 days", which means I can't tie the copy to the Saturday full.
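One workaround sometimes used for this (a sketch, not a built-in scheduling option): attach a post-job script to the daily backup job that starts the copy job only on Saturdays, which effectively gives you "run after a specific job". The copy job name below is hypothetical, and this assumes the Veeam PowerShell module (or the legacy VeeamPSSnapin on older versions) is available on the VBR server:

# Post-job script for the primary backup job; kicks off the copy job
# only on the day the weekly full runs.
if ((Get-Date).DayOfWeek -eq 'Saturday') {
    $copyJob = Get-VBRJob -Name "Weekly Full Copy to SOBR"   # hypothetical job name
    Start-VBRJob -Job $copyJob
}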
HPE and Veeam are connected by a long history. HPE VSA and 3PAR were the first storage platforms to support integrated hardware snapshots; Nimble and Primera are supported as well. See this Veeam Customer Reference Book to read about 7 customer stories. The industries involved are fintech, retail, non-profit, manufacturing, and healthcare. Quite impressive stories with real-world value!