Blogs and podcasts
Bring your knowledge and expertise while creating blogs and podcasts
- 603 Topics
- 4,561 Comments
As you have probably already noticed: VMware released vSphere 7.0 U2 recently. It is currently not supported by Veeam. I want to share some important news about this new version that is relevant to backup. See this link for the complete list of core storage improvements: https://blogs.vmware.com/virtualblocks/2021/03/09/vsphere-7-u2-core-storage VMFS SESparse Snapshot Improvements: Read performance will be improved by redirecting reads to where the data is located (snapshot chain or base disk). Up to now, reads went through the whole snapshot chain and the base disk. Now, when you read unchanged data, the system reads from the base disk directly instead of checking the snapshot chain as well. Performance Improvements on VMFS: Improved write performance on thin-provisioned disks. This is about first writes; it should reduce the potential effects of first writes when using thin-provisioned disks. NFS Improvements: I am not very experienced with NFS in vSphere, so this is the improvement: With the release of vSphere 7.0
In the last few days I had an issue with job notifications not being sent from the VBR console after modifying the users and roles within the console. One of my clients demanded that access to the VBR console be restricted to some explicitly defined backup admins and no one else. So, I removed the local Administrators group from the list in Users and Roles and added the personalized accounts of the backup admins. Fine, the backup admins can start the console and work with it, and all other accounts have no access… The next morning the admins told me that no job notifications had been sent during the night. My first thought was that the colleagues responsible for the mail server and/or the firewalls had made some changes and now the backup server could not reach the mail server. But after checking with them, no changes had been made and there were no dropped mails at the mail server. So, it seemed that the Veeam server did not send anything at all… I then saw that some mails from PowerShell scripts were
Borg and Kubernetes Since the CKA Kubernetes certification is a practical exam with no multiple-choice questions, you must wait a day or two to find out your result. I was sitting on my back porch when I checked my email and saw the "Congratulations" in the title, and I literally shouted, "I have Kubernetes!!" My neighbor, who is not IT savvy and witnessed my strange behavior, immediately thereafter began doubling his social distancing measures with me. This was the "Covid19 summer" of 2020, and I realized that many people had no idea what Kubernetes was, and to be fair, it does sound like something you can catch. So, what is Kubernetes and why is it being talked about so much? To try and explain why this has become such a hot topic, I like to think back to the virtualization revolution. It used to be that when a company wanted to add a new application server, the process was a very long and labor-intensive one. You had to order the physical server, then you had to rack it, ca
As we discussed here earlier, there are 3 transport modes to get data from vSphere for backup. These modes are also available for restore. By default, the first mode (order: SAN, Hot-Add, NBD) that meets the requirements is selected for restore. For SAN direct mode, a requirement is a thick-provisioned disk type. You can select each available type (thin, thick eager zeroed, and thick lazy zeroed) in the restore wizard. My recommendation: if you want to leverage SAN direct mode, choose thick eager zeroed! This option is much faster than lazy zeroed! I tested these settings in different environments. For example, eager zeroed reached 150 MB/s vs. 100 MB/s for lazy zeroed in one environment, and 226 MB/s vs. 16 MB/s in another. See the different wizards here: VM restore, VM Disk restore. Interesting links: In the documentation of version 9.0, you see a hint that for lazy zeroed, vCenter is needed for zeroing. https://helpcenter.veeam.com/backup/vsphere/direct_san_access_writing.html Since v9.5 no difference is made anymore bet
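A quick way to put those throughput figures in perspective is to compute how far lazy zeroed falls short of eager zeroed. A small sketch using the MB/s numbers from the tests above:

```python
# Compare eager- vs. lazy-zeroed restore throughput (MB/s figures from the
# tests described above) and express the gap as a percentage slowdown.

def slowdown(eager_mbps: float, lazy_mbps: float) -> float:
    """Percentage by which lazy-zeroed throughput falls short of eager-zeroed."""
    return (eager_mbps - lazy_mbps) / eager_mbps * 100

for eager, lazy in [(150, 100), (226, 16)]:
    print(f"eager {eager} MB/s vs lazy {lazy} MB/s -> "
          f"lazy is {slowdown(eager, lazy):.0f}% slower")
```

So lazy zeroed was roughly a third slower in the first environment and over 90% slower in the second.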
Understanding Kubernetes networking can be a challenge. A couple of years ago I was tasked to set up a distributed Minio instance running in containers for use with a Veeam SOBR S3-compatible capacity tier. At first, I thought about doing it on Kubernetes but very quickly realized that I was in over my head. I had no previous experience with Kubernetes and I could not just "wing it". Among other things, I found the networking piece especially hard to understand. In the end I created a Docker Swarm cluster, which had a much easier, almost "plug and play" overlay network, and while that did the trick, I understood that simplicity also meant rigidity. Kubernetes follows the age-old *nix (Unix, Linux, BSDs and so on) philosophy of creating small separate entities that, when brought together, can scale into something very complex. Networking is no exception. While a Kubernetes cluster does come with some default networking called kubenet, it is very limited and not meant for production environments fro
While updating our VCSP stack with V11 RTM, I stumbled again upon this small but fine new feature to prioritize your primary backup jobs: more than once I have had customers request that several jobs be finished with priority. You could of course concatenate jobs with the schedule option "After this job:". But this is not recommended, as it does not use the resources of your backup system accordingly. Valuable proxy/repo slots would linger out of work for quite a while. Overlapping jobs would lead to the second job jumping in while the first one is still running, thus saturating resources. But this might easily oversteer and delay the first job. With the new option this problem can be solved with just the click of a checkbox. Prioritized jobs get a nice little flag in the job list. So how does it work? The Veeam scheduler runs all its tasks according to priorities: 800: Restore jobs — restores are obviously the most important jobs. 700: Continuous data protection jobs — new CDP is also considered v
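The priority scheme above can be modeled as a simple max-priority queue. A minimal sketch — only the values 800 (restore) and 700 (CDP) come from the post; the value 100 for a regular backup job is a made-up placeholder, and this is an illustration, not Veeam's actual implementation:

```python
import heapq

# Illustrative model of a priority scheduler: the higher the number, the
# earlier the task runs. 800/700 mirror the post; 100 is an assumed value.
queue = []
for priority, job in [(100, "regular backup job"),
                      (800, "restore job"),
                      (700, "CDP policy")]:
    # heapq is a min-heap, so push negated priorities to pop the highest first.
    heapq.heappush(queue, (-priority, job))

order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)  # restore runs first, then CDP, then the regular backup job
```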
It is new in v11, but I do not have much detailed information about it. When you create or edit a backup job, you can enable High priority. The idea is to use this option to make clear that this is an important job, so it can be started before less important jobs. It is not about job performance; it is about start time. Makes perfect sense to me! Under certain circumstances this feature will make job scheduling easier. At first glance, this seems to be available for Backup and Replication jobs.
Hi folks, I have written up some quick instructions on setting up Minikube, a single-node Kubernetes cluster, on your laptop. If I have missed something or if anything is unclear, please reach out to me. The great thing about Kubernetes is that you can take it anywhere. If you just want to familiarize yourself with Kubernetes and do some testing, then Minikube is an easy-to-install, nonproduction, single-node Kubernetes cluster that you can install on your laptop. I am using my Windows 10 Lenovo ThinkPad. First, we need to enable the Hyper-V role (or install Oracle VirtualBox if your laptop OS will not run Hyper-V) on your laptop. Mac users can use brew: https://gist.github.com/kevin-smets/b91a34cea662d0c523968472a81788f7 To enable the Hyper-V role, follow these instructions: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v INSTALL MINIKUBE Minikube is a single-node Kubernetes cluster which allows you to test Kubernetes. There are two ways to install Minik
I was thinking about building a lab at home to play, test and learn certain topics. But as my free time is very limited and I didn't want to drive up our power costs, I hit on the idea of running a lab in the cloud. Advantages: low entry costs: you pay for what you use/need; flexible/scalable: resources are only a few clicks away; ability to create different scenarios: offsite, DR, ...; new technologies = more knowledge. Disadvantages: long-term costs: as long as you're using/reserving resources you need to pay for them. Being a VMware guy, I looked for ways to deploy a vSphere environment or at least an ESXi host, but unfortunately the costs were much too high. Either I would have to go with VMware Cloud or rent a bare-metal machine to install ESXi myself; both are very costly. Hyper-V, on the other hand, is easier to deploy in the cloud without spending too much, as long as the base system supports Nested Virtualization. So I decided to give it a try and went with Microsoft Azure. Microsoft has published
CDP - Retention Policies A retention policy defines how long Veeam Backup & Replication must store restore points for VM replicas. Veeam Backup & Replication offers two retention policy schemes: long-term retention and short-term retention. Long-term Retention: Veeam Backup & Replication retains long-term restore points for the number of days specified in the CDP policy settings. When the retention period is exceeded, Veeam Backup & Replication transforms the replication chain in the following way (the example shows how long-term retention works for a VM replica with one virtual disk): Veeam Backup & Replication checks whether the replication chain contains outdated long-term restore points. If an outdated restore point exists, Veeam Backup & Replication rebuilds the file that contains data for the base disk (<disk_name>-flat.vmdk) to include the data of the file that contains data for the delta disk (<disk_name>-<index>.vmdk). To do that, Veeam Backup &
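The chain transformation can be illustrated with a toy model: treat each restore point as a set of changed blocks and fold the oldest delta into the base disk once it falls out of retention. This is a simplified sketch with invented data, not how Veeam handles the actual VMDK files:

```python
# Toy model of long-term retention: restore points are dicts of block -> data.
# File names in the comments mirror the post; the block data is invented.
base = {0: "A", 1: "B"}          # <disk_name>-flat.vmdk (base disk)
deltas = [{1: "B2"}, {0: "A2"}]  # oldest first; the delta .vmdk files
RETAIN = 1                       # keep only the newest delta restore point

while len(deltas) > RETAIN:
    oldest = deltas.pop(0)       # outdated long-term restore point
    base.update(oldest)          # rebuild the base disk to include its data

print(base, deltas)
```

After the merge, the base disk contains the outdated delta's changes, and only the newest restore point remains in the chain.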
Today I am here with a simple post about compression in Veeam B&R. So, I took a small Linux VM running Active Directory for this. Below you can see a full backup with compression set to Optimal. And now you can see the same full backup with the compression option set to Extreme. As you can see, the duration decreased by almost 20 minutes when I compressed the backup with the Extreme option. The target of the backup in this example is an offsite repository. It has low bandwidth compared with a local one. So, when we are planning our offsite backup policy, Extreme compression can help us get better times within our backup execution window.
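To see why higher compression shortens the backup window over a slow offsite link, here is a back-of-the-envelope calculation. All numbers (dataset size, link speed, compression ratios) are hypothetical, not taken from the post:

```python
# Rough estimate: time to push a compressed backup over a slow offsite link.
# Dataset size, link speed, and compression ratios are invented placeholders.
def transfer_minutes(data_gb: float, ratio: float, link_mbps: float) -> float:
    """Minutes to transfer data_gb compressed at `ratio` over link_mbps."""
    compressed_gb = data_gb / ratio
    seconds = compressed_gb * 8 * 1000 / link_mbps  # GB -> megabits -> seconds
    return seconds / 60

optimal = transfer_minutes(100, 2.0, 100)  # assume ~2:1 at a lower level
extreme = transfer_minutes(100, 3.0, 100)  # assume ~3:1 at Extreme
print(f"{optimal:.0f} min vs {extreme:.0f} min "
      f"-> {optimal - extreme:.0f} min saved")
```

With these assumed ratios, the higher compression level saves on the order of 20 minutes on a 100 Mbit/s link, which matches the kind of improvement the post describes.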
Another great little thing is the possibility to display timestamps in the job log in the VBR console. As you know, logs in the console look like this: you see the start and end time. But when did the steps in between happen? With v11 you can display timestamps too! Just right-click the header line and click Timestamp. In my opinion this is very helpful!
Finally v11 is launched, but I will not stop talking about small features and improvements. For today, I found a modern (app) feature. It is about instant virtual disk recovery, which is not new in v11; what is new is restoring a disk as a First Class Disk (FCD) in vSphere. An FCD - also called an Improved Virtual Disk (IVD) - is a new disk type for modern, containerized applications. It can be used in Kubernetes environments, like VMware Tanzu, to provide persistent storage. You can find more details about FCDs here: https://cormachogan.com/2018/11/21/a-primer-on-first-class-disks-improved-virtual-disks/ https://cormachogan.com/2020/01/14/first-class-disks-enhanced-virtual-disks-revisited/ The idea in short: manage a virtual disk easily even if it is not attached to a VM. With v11, a virtual disk can be instantly recovered as an FCD. This also means you do not have to attach the disk to a VM at restore time - this can be done afterwards! This is not possible with instant recovery as a "normal" VMDK!
Continuous Data Protection (CDP) is a new feature provided by the upcoming Veeam VBR v11, mainly aimed at Tier-1 applications that cannot deal with data loss. Announced a long time ago, the CDP technology allows you to configure the RPO in seconds, and this can be done by leveraging the vSphere API for IO Filtering (VAIO). Install the Continuous Data Protection filter on hosts: The first step is the installation of the CDP filter on the hosts that are members of the cluster. The installation takes place in the Managed Servers section under the Backup Infrastructure area. Before proceeding, make sure your Veeam server has been registered in your DNS infrastructure. Once the vCenter Servers have been added to the Veeam infrastructure, right-click the source-side vCenter and select Manage I/O filters. Configure the CDP Proxy: To leverage CDP, you need to deploy the new CDP proxy to handle the data. Although you can install different roles on the chosen Windows Server, make sure a CDP proxy is not alread
[Veeam 1-minute] How-to create a Data Protection Policy for Amazon RDS using Veeam Backup for AWS v3
Hi guys, another quick one-minute video; today's is about how to create a Data Protection Policy for your Amazon Relational Database Service (Amazon RDS) using the new Veeam Backup for AWS v3. Remember that you can get a fully functional version (free forever for 10 AWS instances) from here - https://www.veeam.com/aws-backup.html Let me know what you think of the format, and whether you are using RDS protection :)
Morning all! With under 8 hours to the launch event, make sure to sign up! Join the Live Event for V11 Launch (veeam.com) Here's the time the launch event kicks off in some time zones: AWST (Australian Western Standard Time): Midnight (25th Feb); EET (Eastern European Time): 6PM; EST (Eastern Standard Time): 11AM; GMT (Greenwich Mean Time): 4PM; PST (Pacific Standard Time): 8AM. Time to level up to 11!
Hello, in this article we will take a look at the scenario of restoring via S3 Compatible Object Storage. Part 1 - Controlling Our Cloud Backups: We have successfully backed up our local environment. Thanks to the Scale-out Backup Repository capability, we have successfully sent the backup of our local environment to the S3 Compatible Object Storage we added in the first document. We check the status of the backup taken for the test environment from the Backup > Disk section on the left; the point we have to pay attention to is that the Repository column shows the 'Scale-out Backup Repository'. In the Home menu, we enter the Backup > Infrastructure menu at the bottom left. We can view the size of the backup file that we sent to the S3 Compatible Object Storage pool we added in the Backup Repositories area. Part 2 - Restoring from S3 Compatible Object Storage: We will restore the backup that we sent to S3 as read-only using the Immutable feature to our environment. First we ent
Because this article caught my eye and the topic is still important: https://blocksandfiles.com/2020/12/04/the-terrible-tib-gib-and-pib-game/ You probably know there is a difference in the way hardware and software vendors quote their capacities. Normally, hardware vendors use KB, MB, GB, TB, … Here 1 KB means 10^3 bytes (base 10) = 1000 bytes (k stands for kilo, which means 1000). Software vendors instead use KiB, MiB, GiB, TiB, … Here 1 KiB means 2^10 bytes (base 2) = 1024 bytes. As you can see, the size difference between KB and KiB is quite small: 24 bytes (2.3% less capacity with KB). When it comes to larger units, the difference gets larger: 1 GB = 10^9 vs. 1 GiB = 2^30 = 1,073,741,824 --> ~7% less capacity with GB; 1 TB = 10^12 vs. 1 TiB = 2^40 = 1,099,511,627,776 --> ~9% less capacity with TB (~90 GiB). The larger the unit, the larger the difference between base 10 and base 2. I (quick and dirty) created a small graph to show the percentage increase over units: 1=KB, 2=GB, 3=TB, ... Another out
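The quoted percentages are easy to verify. A small sketch computing the gap between the decimal and binary units:

```python
# Verify the gaps between decimal (KB, GB, TB) and binary (KiB, GiB, TiB)
# units quoted above.
def gap_percent(power10: int, power2: int) -> float:
    """How much smaller the decimal unit is than the binary one, in percent."""
    return (1 - 10**power10 / 2**power2) * 100

kb = gap_percent(3, 10)    # KB  (10^3)  vs KiB (2^10)
gb = gap_percent(9, 30)    # GB  (10^9)  vs GiB (2^30)
tb = gap_percent(12, 40)   # TB  (10^12) vs TiB (2^40)
tb_gap_gib = (2**40 - 10**12) / 2**30   # absolute TB-vs-TiB gap, in GiB

print(f"KB: {kb:.1f}%  GB: {gb:.1f}%  TB: {tb:.1f}%  "
      f"(TB gap = {tb_gap_gib:.0f} GiB)")
```

This confirms ~2.3% at the kilo level, ~7% at the giga level, and ~9% (around 90 GiB) at the tera level.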
Greetings friends, today I bring you a new entry about Grafana and Veeam, which I'm sure you'll like and I hope you'll put in your collection. Veeam has recently announced Veeam Backup for AWS v3. Among the many features included in the product: in v3 it is possible to protect RDS and VPC, as well as EC2 instances of course, and there is a public RESTful API that has been updated to v1.1, so I thought it might be a good idea to create a dashboard for this solution. Today, I am pleased to bring you a complete and finished dashboard for monitoring Veeam Backup for AWS, with no limit on VMs, jobs or repositories. You will be able to see that there is a map inside the dashboard; that is because I consider it very important to be able to look globally and see which regions have unprotected VMs. Dashboard for Veeam Backup for AWS: When we finish the entry we will have something similar to that dashboard, which will allow you to visualize: I leave you a video summary if you want to see a litt
Veeam Backup for Microsoft Office 365 Calculator (Download: VBO-Calculator-Community-1Download) Welcome back after a longer break due to holidays, personal changes and some challenges over the last weeks. As one of the German Systems Engineers mainly responsible for Veeam Backup for Microsoft Office 365 (VBO), I received a lot of questions and requests regarding the right sizing for a VBO environment as well as for the "new" Object Storage Repository feature, which has been available since version 4. Wouldn't it be great if we had a tool for this? For sure! But let us first build up some tension. Before we concentrate on object storage, I will give you some important information for your general VBO sizing. The Sizing: What do you need for a proper sizing? To get some realistic numbers, we collect the information per service, because in the background each service has its own compression capabilities, and that makes some difference later in the cost calculation. So we need the amount of users per
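The per-service approach described above can be sketched as a small calculation. All service names, data volumes and compression ratios below are invented placeholders, not real VBO sizing figures:

```python
# Hypothetical sizing sketch: collect data volume per service (as the post
# suggests) and apply a per-service compression factor. Numbers are made up.
services = {
    # service: (total_gb, assumed_compression_ratio)
    "Exchange":   (500.0, 1.6),
    "OneDrive":   (800.0, 1.1),
    "SharePoint": (300.0, 1.3),
}

repo_gb = sum(gb / ratio for gb, ratio in services.values())
print(f"Estimated repository footprint: {repo_gb:.0f} GB")
```

The point of splitting by service is exactly what the post notes: each service compresses differently, so one blended ratio would skew the cost estimate.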
Hello everyone! So, last week we had our first VUG of the year. It was such a good webinar; the main subject was the best features for data protection. As if that weren't good enough, we also had the presence of none other than @Rick Vanover and @Kseniya. To start the event, Rick and Kseniya talked about our Veeam Community Resource Hub, and after this they introduced how Veeam Legends works and what the benefits of becoming a Legend are. It was so nice to listen to them and I felt so comfortable with their words. Rick and Kseniya are like the Batman and Robin of Veeam because they are both deeply involved in the Veeam Community. Seriously, both of you are awesome!! After this amazing introduction we started our event completely in Portuguese, and @CarlosXGomes talked about backup of Office 365. Continuing with VUG Brazil, Marcio Freitas gave a perfect presentation about NAS Backup and all possible subjects around this theme. And to finish, Leonardo Ferreira talked
Hello Community! For years I have been looking for the perfect dashboard, using different technologies and approaches, including Veeam ONE, Elastic, Splunk, and Grafana. All the mentioned technologies are great for deep visibility, log inspection, capacity planning, etc. But as Veeam has been moving in the right direction of including a RESTful API in all products, I finally decided to use a combination of Bash shell scripts, InfluxDB, and Grafana. Here I am with a Grafana t-shirt I made, in this example monitoring Veeam :) What does all of this look like? If you are wondering how all of this might look, it is indeed quite easy: on the left we have all sorts of Veeam products, with their RESTful APIs, in this case: Veeam Enterprise Manager - https://helpcenter.veeam.com/docs/backup/rest/overview.html?ver=100 Veeam Backup for Microsoft Office 365 - https://helpcenter.veeam.com/docs/vbo365/rest/overview.html?ver=40 Veeam Backup for Azure - https://helpcenter.veeam.com/
Today’s #VMCE2020 #DailyQuiz walkthrough is ready. Are you using the Secure Restore feature? Which AV vendor are you using? Did you realize you can use other vendors too?Try this question and more at https://rasmushaslund.com/vmce-practice-exam/
The Tape Storage Council has released a summary of 2020's tape usage and a future outlook: https://blocksandfiles.com/2020/12/07/tape-storage-council-2020-outlook/ Summary: capacity shipments rose to record amounts in 2019: >225 million LTO cartridges and >4.4 million drives [IBM and Oracle enterprise tape shipments are not included]; tapes are still cheaper than disk; several public cloud deep archives use tape, such as AWS's Glacier; ESG found a majority (61%) increasing their commitment to tape. Tape's two big advantages: low cost and longevity. Big drawback: lengthy file access time.
If you know that your primary VMs are about to go offline, you can proactively switch the workload to their replicas. A planned failover is a smooth, manual switch from a primary VM to its replica with minimal interruption in operations. You can use planned failover, for example, if you plan to perform datacenter migration, maintenance or a software upgrade of the primary VMs. You can also perform a planned failover if you have advance notice of an approaching disaster that will require taking the primary servers offline. When you start the planned failover, Veeam Backup & Replication performs the following steps: The failover process triggers the replication job to perform an incremental replication run and copy the un-replicated changes to the replica. The guest OS of the VM is shut down or the VM is powered off. If VMware Tools are installed on the VM, Veeam Backup & Replication tries to shut down the VM guest OS. If nothing happens after 15 minutes, Veeam Backup & Repli