Join discussions around Veeam community projects, Veeam events, industry and technology news
- 1,179 Topics
- 11,633 Comments
Hi, we are moving a lot of environments to the Azure cloud, and we already have 150 Blob storage containers configured for backups using Azure Backup. The problem is that it offers only 300 days of retention, and I need longer retention for LTR and granular backups (daily, weekly, monthly, yearly) for compliance and governance purposes. We are evaluating moving the backup workloads to VAB and VBR in Azure, but we feel this feature for Blob storage containers is missing in the Veeam tools for Azure. This number of Blob storage containers is just the beginning, because applications moved to Azure will each use their own storage account to persist data, instead of a centralized file server. Thanks a lot, and I hope you can help us with this request.
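As a possible stopgap while longer retention is handled on the storage side, Azure Blob lifecycle management can tier and expire backup blobs on a schedule you control. A minimal sketch of such a policy follows; the rule name, the `backups/` prefix, and the day thresholds are illustrative assumptions, not values from the post:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "ltr-keep-backups",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "backups/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          }
        }
      }
    }
  ]
}
```

A policy like this can be applied per storage account with `az storage account management-policy create --account-name <name> --resource-group <rg> --policy @policy.json`; it controls blob lifecycle only and does not replace application-consistent backup retention.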
Hey folks, I just had a quick call with Kasten to help me find the issue with my snapshotting. The issue was not a Kasten problem. Kasten’s great tool Kubestr, which I posted about here earlier, told me that I had a problem, which I immediately saw when trying to run a backup with Kasten. In a quick call, Kasten support very efficiently found the issue with my snapshot controller (or rather the lack of it, for some strange reason), we downloaded the manifests again, and the problem was solved. Just thought I had to mention this, since great support makes our lives so much easier!
Running Veeam B&R (11a? and fairly current). I was advised by our Veeam rep to use a backup copy job to accomplish what I was trying to do. Basically: we have a daily backup job to disk with 60 days of restore points, and weekly full backups on Saturdays. I want to move a copy of ONLY the weekly full from this job to a different datastore that is part of a scale-out repository, specifically to use the backup copy job’s ability to apply GFS retention to what it’s copying. Unfortunately, the scheduling options for the periodic backup copy job leave a lot to be desired. The only option you have is “run this job every X days at Y time” - that’s it. Considering that normal backup jobs get a wide variety of scheduling options (most importantly in this case “run on specific day of the week” or “run after specific backup job”), it seems odd that none of these extremely useful options are available within backup copy jobs. Instead, the only option is “run every 7 days”, which means I
Happy Friday everyone! For those of you with access to a lab/test environment, what do you use when you’re “on the move”? Maybe it’s public cloud, maybe a VPN back to a DC/home, maybe it’s a portable enough workload that you can run it from your laptop. Curious to hear how people tackle this when mobile 🙂
Trying to clean up data from an old backup copy job, after deleting the backup copy job. It’s now showing under Disks (Orphaned) and Object Storage (Orphaned). All needed restore points are available from the Capacity tier (Azure object storage). No more restore points are available to move to the Capacity tier (because all have been moved already). Still, multiple TBs are stored on the Performance tier, but if I select Delete from Disk, it tells me it will also delete the dependent restore points in the Capacity tier…? There are 3 restore points available on the Performance tier, and 60+ on the Capacity tier. I would like to keep all restore points in the Capacity tier until I manually delete them according to retention policy. Can I say Yes to this and still have restore points available from the Capacity tier?
Additionally, for users with tape installations (for file to tape jobs processing more than 1,000,000 files):
- 1.5 GB RAM for file to tape backup for each 1,000,000 files
- 2.6 GB RAM for file restore for each 1,000,000 files
- 1.3 GB RAM for catalog jobs for each 1,000,000 files
Can anyone explain what this means?
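The figures quoted above are additive per-million-file RAM requirements. A small sketch of the arithmetic, assuming the requirement applies per started million files (the rounding-up is my assumption; the GB figures are taken from the quoted text):

```python
import math

# Extra RAM (GB) required per 1,000,000 files, per operation type,
# as quoted from the system requirements above.
GB_PER_MILLION = {"backup": 1.5, "restore": 2.6, "catalog": 1.3}

def tape_ram_gb(files: int, operation: str) -> float:
    """Estimated additional RAM (GB) for a file-to-tape operation.

    Assumes the per-million figure applies to each started million files.
    """
    millions = math.ceil(files / 1_000_000)
    return round(millions * GB_PER_MILLION[operation], 1)

# Example: a job processing 3,000,000 files
print(tape_ram_gb(3_000_000, "backup"))   # 3 x 1.5 GB
print(tape_ram_gb(3_000_000, "restore"))  # 3 x 2.6 GB
```

So a file-to-tape backup of 3,000,000 files would need roughly 4.5 GB of additional RAM on top of the base requirements, and a restore of the same set about 7.8 GB.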
Hi, I need to find a way to run backup verification tests for agent backup systems. These are physical machines (SQL, Exchange, Oracle). VMs are working fine. I could do manual or script-based Instant Recovery to VMware, but those tests would run in production and not isolated in a virtual lab. Any ideas? Is this feature on a roadmap? Any news on supporting NSX-T NICs in a Virtual Lab? Thanks and greetings, Hartmut
Do you have Enterprise Manager in your infrastructure? What would you add to it or take out of it? How much use are you getting out of it?
In VBR v11 there are several improvements to the user interface - Console and Enterprise Manager. But are you using all the features that Enterprise Manager has? One of them: in v11 there is SAML for the vSphere Self-Service Portal:
- Added support for SAML accounts (e.g. at service providers)
- External users and groups
- Group quotas have the usual behavior
- Delegation mode: vSphere tags
Running the Veeam free version for Linux on a Dell Optiplex 5060, OS is Linux Mint Mate 20.3. Just setting up this machine and moving from an older Dell PC. On the old PC, I ran the Veeam free version for years of weekly backups with no issues. Veeam was downloaded and installed via Synaptic and set for a full system image; the target is an external HDD with a USB connection to the computer. On the new machine, 5 backups have failed (see screenshot). Targets were 2 different 1 TB external disks, each formatted to FAT32, exFAT, and then EXT4. One of the disks is the same one used for the backup chain on the old PC, no problems. All failed attempts show the same bottleneck, “source”, after running for 2 to 8 minutes. Not sure what to do here, since I’ve had no previous snags with Veeam. Any suggestions appreciated!
Hi, we have 4 different backup priorities and vCenter tags that our VMs are assigned to. Now I would like to implement 3 different vCenter tags for application-aware processing options. Unfortunately, I can’t remove the Prio4 tag from the AAP settings, because it’s used as the default. What happens if a VM has 2 tags assigned in the AAP options? Is there some kind of hierarchy for which option will be used? What would be best practice in this case?
Not sure if anyone is using CDP yet, but you can run into an issue with Veeam stating that the storage providers are offline. You can see how to obtain the fix from VMware in the following KB: KB4242: CDP filters installation fails with "Storage providers offline" (veeam.com). This all relates to the SPS certificate in VMware.
I signed up for a 30-day trial of the MS365 backup. I am backing up my Exchange server into a local repository. I was wondering: if MS365 fails, how do we recover from the files we have in the local repository? In other words, if MS365 one day decided not to function anymore, how would we recover our Exchange files from the repository that Veeam backed up to?
Hi, I have set up Kasten K10 on a MicroK8s single-node cluster on a local server to back up my Kubernetes cluster. Everything from a backup perspective is working fine so far, but I have a problem with the ingress and the ACME challenge with Let's Encrypt. The ACME challenge is working for my other services, but I can’t get it working with K10.

k10-ingress:

spec:
  ingressClassName: public
  rules:
  - host: kasten.dummy.com
    http:
      paths:
      - backend:
          service:
            name: gateway
            port:
              number: 8000
        path: /k10
        pathType: Prefix
  tls:
  - hosts:
    - kasten.dummy.com
    secretName: secret-kasten.dummy.com

Error on my ingress pod:

[error] 2102#2102: *88192 upstream timed out (110: Operation timed out) while connecting to upstream, client: 192.168.1.1, server: kasten.dummy.com, request: "GET /.well-known/acme-challenge/<challenge-code> HTTP/1.1", upstream: "http://10.1.206.218:8089/.well-known/acme-challenge/<challenge-code>", host: "k