Hi,
Wanted to try and bring some of the magic of the Veeam 100 Summit to the Community Hub, as so much of Veeam’s success comes from the community!
I’ll be blogging here each day bringing some highlights from the summit.
Hope you enjoy it!
And GREAT to see you here this year
Day 0 kicked off with an amazing Kasten K10 training session by none other than Michael Cade!
I was fortunate enough to have already worked through the labs in a prior Kasten training session, so I took the time instead to improve my own notes on kubectl & Kasten’s capabilities.
Michael Cade made some great points, such as how we never got rid of ‘physical’ servers when we mass-adopted virtualisation, so why do we keep framing Kubernetes as Containers vs VMs instead of Containers & VMs? I truly agree this is the future we’ll see. We still use physical firewalls for performance purposes even though virtual ones have been available for many years, so why wouldn’t the same hold here: we’ll architect solutions based on the best technology available at the time, whether that’s a container, physical server, or VM.
If you’re interested in learning more about Kasten, speak to your Veeam account manager or check out Kasten’s free training @ Free Kubernetes Training (kubecampus.io).
Then Day One kicked off this morning with the introduction to the V100 Summit.
Hearing Rick, Safiya, Madi, Nikola, Hannes, and Fabian spend so much time talking about community and their community-focused efforts is fantastic. I’ve yet to meet another vendor that is so ‘hyper focused’ on the community and the feedback it provides!
When Fabian took to the stage we saw the announcement of the ‘Veeam Early Access Program’.
If you’re wondering what this is, in summary:
Great to finally meet you this year, Michael. Going to do a similar thing myself each day.
Next we jumped over to a word from some amazing sponsors; first up was Object First!
One of the key takeaways for me was understanding the benefits of the communication between the Object First appliances and Veeam. By leveraging the SOS API and utilising Storage Access Controls, the appliance enables Veeam to be fast and smart with data placement. For example:
Also got some interesting metrics on their performance:
The Object First appliance is ‘out of the box’ hardened. What does this mean? Simply, it’s a hardened Linux OS with no root access, supporting S3 object lock running in compliance mode, plus MFA! Combined, this makes data destruction a major headache for malicious attackers; it would probably be easier to break into the datacentre and physically destroy the device at this point!
Following Object First is Wasabi!
None other than Wasabi’s amazing Drew!
Drew highlighted the ‘Wasabi’ advantages (yes, plural!). I think by now most people are aware that Wasabi doesn’t charge egress fees or for API calls, unlike ‘the big three’ hyperscalers everyone tends to think of, but above and beyond this, Drew discussed what’s happened over at Wasabi in the past year, which was a welcome recap to have!
In the case of features such as customised endpoint URLs, it was great to see Wasabi focused on not bleeding partners, making this a one-off fee to enable, and with their Wasabi Account Control Manager (WACM) product being free!
These are really cool for each session’s summaries. Good job
Hijacking this thread a little bit, Sir
There is a strong focus on Security at this year’s Summit.
Additionally, the chance to speak to the actual developers working on this functionality is awesome.
Fresh from a break we join Dima P & Egor Y on the topic of Malware Detection!
This was a brilliant section discussing the many ways that Veeam can help in the detection of malicious attacks.
Firstly, there’s an online/inline scan that takes place while a backup is running. It utilises a tool called MAGIC to index blocks across the system, discover what has changed, and create a ‘likeliness’ risk score that is used to report suspicious restore points.
In addition to this, MAGIC can report on high-risk files such as .onion links and ransom notes as an immediate high-risk trigger. A great reminder was shared that .onion doesn’t just mean ransomware; it could also mean data exfiltration.
This feature comes at a CPU cost, with an expected 25-30% uplift in CPU utilisation. The index used for restore point comparison is stored within the configuration database, rather than as a file on the VBR server that could be directly tampered with.
It was also highlighted that the first backup after enabling this feature will require a full read of the production system (even though only an incremental backup is created), as the index needs to be built.
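Veeam haven’t published how MAGIC’s scoring works internally, so purely as an illustration of the concept, here’s a toy Python sketch that flags changed blocks by Shannon entropy (encrypted data looks random, so a burst of high-entropy changed blocks is suspicious). The threshold and scoring here are my own inventions, not Veeam’s algorithm.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Bits per byte: ~8.0 for encrypted/random data, lower for typical files."""
    if not block:
        return 0.0
    total = len(block)
    counts = Counter(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def risk_score(changed_blocks: list[bytes], threshold: float = 7.5) -> float:
    """Fraction of changed blocks that look encrypted -- a crude 'likeliness' score."""
    if not changed_blocks:
        return 0.0
    suspicious = sum(1 for b in changed_blocks if shannon_entropy(b) > threshold)
    return suspicious / len(changed_blocks)

# Toy usage: a text-like block vs. a random-looking (encrypted-looking) one
blocks = [b"hello world " * 512, os.urandom(4096)]
print(f"risk score: {risk_score(blocks):.2f}")  # 0.50 -- half the blocks look encrypted
```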
Next they discussed Guest Index Scanning, which looks for suspicious files, excessive changes, or the loss of trusted file types as examples of post-backup scanning to determine an attack.
Guest Index scanning requires additional CPU & RAM on your backup infrastructure.
Guest Index scanning is supported for VMware & Hyper-V based Windows & Linux VMs, and Veeam Agent for Windows.
A shout out to
This topic was followed up with the Incident API. This API allows third-party solutions to task Veeam with actions in certain scenarios, a key one being triggering a backup if ransomware is detected/suspected. Anton Gostev discussed this himself at the VeeamON Resiliency Summit, reminding people that although a backup might take a while after this instruction, a snapshot is created ‘instantly’, minimising data loss.
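To make the idea concrete, here’s a hypothetical sketch of what a third-party tool (an EDR, say) might send to VBR’s REST API to report an infection. The endpoint path, payload fields, and hostnames are my assumptions; check the v12.1 REST API reference for the real contract.

```python
import requests

VBR = "https://vbr.example.internal:9419"  # hypothetical VBR server

# 1) Authenticate (VBR's REST API uses an OAuth2 password grant)
token = requests.post(
    f"{VBR}/api/oauth2/token",
    headers={"x-api-version": "1.1-rev1"},  # version header value may differ per release
    data={"grant_type": "password", "username": "svc_edr", "password": "..."},
    verify="/etc/ssl/vbr-ca.pem",
).json()["access_token"]

# 2) Report a suspected infection so VBR can react, e.g. trigger a backup --
#    a snapshot is taken near-instantly even if the backup itself takes a while
event = {
    "detectionTimeUtc": "2023-10-03T14:21:00Z",
    "machine": {"fqdn": "fileserver01.example.internal"},
    "details": "EDR: suspected ransomware encryption in progress",
    "engine": "ExampleEDR",
}
resp = requests.post(
    f"{VBR}/api/v1/malwareDetection/events",  # endpoint name is my assumption
    json=event,
    headers={"Authorization": f"Bearer {token}", "x-api-version": "1.1-rev1"},
    verify="/etc/ssl/vbr-ca.pem",
)
resp.raise_for_status()
```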
Then we moved onto the Secure Restore & SureBackup relationship. YARA rules are going to be supported in v12.1, and these will run on the Mount Server. Veeam recommended that you utilise the ‘scan backup content with an antivirus software’ option until you know what you need a YARA rule to detect; then you can leverage the YARA scan for incident analysis/investigation.
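For a taste of what a YARA rule looks like, here’s a minimal example using the open-source yara-python library. The rule itself is deliberately trivial and purely illustrative; real incident-analysis rules would match the specific IOCs from your investigation.

```python
import yara  # pip install yara-python

# A deliberately simple illustrative rule -- flags a ransom-note phrase
# or a .onion reference, either of which warrants a closer look.
RULE = r"""
rule suspected_ransom_note
{
    strings:
        $a = "your files have been encrypted" nocase
        $b = ".onion" nocase
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE)
matches = rules.match(data=b"ALL YOUR FILES HAVE BEEN ENCRYPTED. Visit xyz.onion")
print([m.rule for m in matches])  # ['suspected_ransom_note']
```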
Also in v12.1, Veeam will break the requirement to have a virtual lab to perform backup scanning for viruses, health checks, etc. This is most impressive when we see how Veeam handles ‘find my last clean backup’. Consider a scenario with 7 months of backups to check: Veeam checks the middle backup first, which they called a ‘binary split’. That backup is either clean or not, which immediately removes 50% of the backups to check; if it’s clean, all the points older than it are clean, and if it’s not, all the points newer than it aren’t. Repeating this process efficiently cuts a large volume of backups down into a rapid verification effort.
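The ‘binary split’ they described is a classic binary search. Here’s a minimal sketch of the idea, where is_clean stands in for the actual antivirus/YARA scan of a restore point:

```python
def find_last_clean(points: list, is_clean) -> int | None:
    """Return the index of the newest clean restore point, or None if all are infected.

    Assumes points are ordered oldest -> newest, and that once infection
    appears it persists in every newer point (the property the search relies on).
    """
    lo, hi = 0, len(points) - 1
    last_clean = None
    while lo <= hi:
        mid = (lo + hi) // 2          # the 'binary split'
        if is_clean(points[mid]):
            last_clean = mid          # everything older is clean too
            lo = mid + 1              # search the newer half for a later clean point
        else:
            hi = mid - 1              # everything newer is infected; search older half
    return last_clean

# ~7 months of daily backups = 210 points, checked in ~8 scans instead of 210
points = list(range(210))
infected_from = 150                   # hypothetical first infected point
print(find_last_clean(points, lambda p: p < infected_from))  # 149
```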
Finally we get to malware events. Within the UI’s inventory view, when a malware detection event occurs, you can mark a suspicious/malicious backup as ‘clean’. I know from the Resiliency Summit that Veeam have a ‘Four Eyes Authorization’ mechanism coming in v12.1, so I asked: currently, marking as clean does not require this authorization, though I hope this changes in the future.
Then we stick with Egor & Dima for a bit longer, as we look into v12.1 security enhancements as a whole!
Veeam have a few key goals here: they want to tackle the bad practice of low-entropy passwords, whether through lack of complexity or length, and the fact that most people don’t rotate backup encryption passwords frequently.
Veeam will integrate with KMS systems to resolve both of these issues, with strong keys that are automatically rotated. This integration uses the Key Management Interoperability Protocol, aka KMIP.
Veeam have always been focused on the recovery aspect, so it comes as no surprise that Veeam don’t require the same KMS server to present the keys for recovery; in the event of a complete disaster, any KMS server that has the private key can be used to recover the backup keys.
A key point to raise here is that Veeam still utilise their own backup encryption keys exactly as before, storing these in their configuration database, but should this database be lost, you can then recover the keys from a KMS server. This also plays nicely with VBEM: if you utilise VBEM for key recovery, that mechanism can be leveraged before a KMS key recovery request.
Veeam are working with their alliance partners to validate readiness and supportability of features. So far, Thales CipherTrust Manager, Fortanix Data Security Manager KMS, and IBM Security Guardium Key Lifecycle Manager (GKLM) are supported, but as a minimum you will require a KMS server with KMIP version 1.2+. You can also leverage FQDN cluster names for resiliency across multiple nodes.
And I can’t overstate this: loss of the backup key, VBEM, and KMS means you cannot recover your backups!
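Veeam’s side of this is built in, but if you want a feel for what a KMIP exchange looks like, here’s a sketch using the open-source PyKMIP client. The hostname and certificate paths are placeholders, and this illustrates the protocol in general, not Veeam’s implementation.

```python
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Connection details are placeholders -- point these at your KMS cluster
# (an FQDN cluster name works for resiliency across nodes, per the session).
client = ProxyKmipClient(
    hostname="kms.example.internal",
    port=5696,                        # standard KMIP port
    cert="/etc/pki/kmip/client.pem",
    key="/etc/pki/kmip/client.key",
    ca="/etc/pki/kmip/ca.pem",
)

with client:
    # Create a strong symmetric key on the KMS
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
    # Later -- e.g. during recovery -- any KMS node holding the key can serve it
    key = client.get(key_id)
    print(key_id, key.cryptographic_algorithm)
```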
Next up: the Four Eyes Authorization feature. This has a license requirement: socket-based Enterprise Plus, or the Veeam Universal License, is required. You also require either a minimum of two admin accounts or an admin group to be defined; personally, I would always rather define explicit accounts for this.
Once this is enabled, any backup deletions or changes to MFA (as examples) will require a second administrator to approve prior to being actioned, and if there isn’t a second approval within a specific time window, the pending request is automatically rejected and cleaned up. This feature will also send an email report of changes, as well as writing to Windows events. The emails are delayed slightly, by 5-10 seconds, so that if multiple sequential changes are actioned, a single cumulative email is sent instead of spamming one email per action; a nice touch! I hope to see this feature extended in the future to include things such as deleting encryption keys, lowering retention, and changing encryption key passwords.
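To make the approval flow concrete, here’s a toy model of the behaviour described above: a second, different administrator must approve within a window, or the request is auto-rejected. Everything here (including the window length) is my own illustration, not Veeam’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

APPROVAL_WINDOW = timedelta(days=7)  # illustrative; the real window is Veeam-defined

@dataclass
class PendingChange:
    action: str                      # e.g. "delete backup", "disable MFA"
    requested_by: str
    requested_at: datetime = field(default_factory=datetime.utcnow)

def try_approve(change: PendingChange, approver: str, now: datetime) -> str:
    if approver == change.requested_by:
        return "rejected: approver must be a second administrator"
    if now - change.requested_at > APPROVAL_WINDOW:
        return "rejected: approval window expired, request cleaned up"
    return f"approved: '{change.action}' actioned by {approver}"

change = PendingChange("delete backup", requested_by="admin1")
print(try_approve(change, "admin1", datetime.utcnow()))  # same admin -> rejected
print(try_approve(change, "admin2", datetime.utcnow()))  # second admin -> approved
```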
The Security & Compliance Analyzer sees a dramatic uplift in the number of checks it performs, from 9 to 30* (*as multiple independent checks that are related to each other are rolled into overall compliance checks), and it can now be scheduled to run daily with an email report to help capture deviations from best practices.
We also get syslog support! Out of the box, Veeam have already tested with Splunk, SolarWinds, syslog-ng, PRTG, rsyslog, and Nagios, and it’s RFC 5424 compliant. A lot of work has been done behind the scenes to improve the Windows events that mirror this and to ensure every event code is unique for better troubleshooting.
As you’d expect, syslog supports UDP, TCP, and TLS. However, it currently does not cache any events, even when using a protocol such as TCP or TLS that would know the syslog server was unreachable; I hope we’ll see this fixed in the future!
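If you’re curious what an RFC 5424 message actually looks like on the wire, here’s a minimal Python sketch that hand-builds one and ships it over TCP. The hostname, MSGID, and message content are placeholders of mine, not Veeam’s actual event format.

```python
import socket
from datetime import datetime, timezone

def rfc5424_message(app: str, msgid: str, msg: str,
                    facility: int = 16, severity: int = 4) -> bytes:
    """Build a minimal RFC 5424 syslog line (local0.warning by default)."""
    pri = facility * 8 + severity
    ts = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    host = socket.gethostname()
    # Layout: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
    return f"<{pri}>1 {ts} {host} {app} - {msgid} - {msg}\n".encode()

# Ship it over TCP (UDP and TLS are also supported; note there's no client-side
# caching yet if the collector is down, as mentioned above).
with socket.create_connection(("syslog.example.internal", 514)) as sock:
    sock.sendall(rfc5424_message("VeeamVBR", "Event-1234",
                                 "Malware detection event: suspicious restore point"))
```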
Next up, a fast-paced session from
This was a rapid fire session, covering some key points.
Firstly, something I’d never considered was defining different immutability policies on different extents within a SOBR, but it turns out you can. Not for much longer though! As of v12.1, the upgrade will block if there’s a difference in the immutability periods across the repositories in your SOBRs. I’m glad this is an upgrade block instead of a post-installation issue!
We’ll see Dell Data Domain Retention Lock support via DDBoost; this will require compliance mode and automatic retention lock to be switched off.
HPE StoreOnce Catalyst Copy Immutability is now supported (Catalyst had immutability support in v12 but was missing the Catalyst Copy feature!)
Service Providers are going to become more flexible within the Object Lock functionality space, with support for Object Lock Governance Mode. As you can’t change this setting on your repositories once provisioned within VBR, this will require transitioning to new repositories.
We’re also going to see immutability support for configuration backups IF YOU USE OBJECT STORAGE! We’ll see Hardened Repo support for this ‘in the future’.
And to round up the session, we’ll see time step detection reporting with the Hardened repository in v12.1
Changing gears we now have Petr Makarov discussing all the Enterprise Application Improvements (THERE ARE A LOT OF THESE!!!)
Here’s what we can expect from V12.1!
Phew, that was a lot to take in, and it’s still going!
Great posting today
Hannes is back on stage now to talk about all the amazing improvements to CDP!
First up, a topic very dear to my heart: SUREREPLICA SUPPORT FOR CDP! This will also support YARA, antivirus, CRC checks, etc. It should be noted that there are still some limitations, such as no long-term restore points being created during a SureReplica job run, and no retention processing being actioned during this time. This leads us onto file-level recoveries: the same constraints apply, but with FLR you’ll be able to step through every restore point and perform an FLR against any of those low-RPO CDP restore points!
We’ll be able to protect more VMs than ever, with support for up to 7000 disks per VBR instance, up from 2000!
You can also utilise planned failover with CDP, ensuring zero data loss: the VM gets powered off before the failover, handy for planned power outages etc.
Another awesome feature is the ability to change the disk type used at the CDP target. If you’re using thick disks in production (maybe your storage supports thin datastores with thick disks) but your CDP target doesn’t, you can set these to thin at the target and enjoy the space savings! That’s just one example; it’s flexible either way!
Tim Smith also gave a good shout-out around CDP in general: avoid RPOs higher than 15 minutes, and definitely anything more than 30 minutes. It costs more RAM, is more likely to result in caching to disk, and is heavier on the network & proxies; utilise replication if you want those RPOs.
Nice write-up of all Day 1 features
Apologies for the pause in coverage of day one; I was writing these up in breaks between sessions, and then I didn’t get any further break time! Now I’m going to backfill this information.
After Hannes’ session on CDP we dived into AIX, Linux, macOS, and Solaris agent news, with Hannes being joined by
We’re going to see a very welcome improvement in AIX & Solaris agent management, with the ability to deploy, update, and remove the agent. I don’t yet know whether we’ll be able to get VBR to ‘adopt’ any existing agents to take over server management, or whether we’d need to perform a reinstall with a management package, and what that would do to any existing backup chains.
Additionally, AIX & Solaris jobs can be started & stopped via VBR. There’s no need for cron to be leveraged for scheduling either, as they now have their own built-in schedulers, and bare metal recovery gains recovery token support.
AIX, macOS, and Solaris will all get support for GFS on standalone centrally managed agents, though this will require active full backups, with one important caveat: this is true in all scenarios except macOS to object storage, where synthetic fulls are to be supported, if I understood the comment right.
Additionally, the following new features have been announced for AIX:
Solaris gains support for reconnection after network outages, and ZFS compression support.
AIX & Solaris both gain the ability to exclude directories during a bare metal restore, and offer a simplified restore process via the local recovery console in these next releases.
Yep, it’s happening!
It’s going to get a great number of features out of the box, including:
Protection groups are currently only supported via pre-installed agents and will be limited to job status visibility. Indexing & search will be possible via Enterprise Manager.
Next we moved to a rapid fire VBR features session with Egor!
Enterprise Manager is going to see the following updates:
Following Egor was Dima, talking about the new Object Storage backup functionality. This doesn’t mean using Object Storage as a target, but as a source!
We’re going to see the file share section within inventory change to ‘Unstructured Data’, with two sub-sections for “File Shares” and “Object Storage” respectively. You simply select Unstructured Data > Add > Object Storage, specify whether it’s AWS S3, S3 Compatible, or Azure Blob, and then populate the wizard as normal.
Dima highlighted that this is a dedicated job type; it’s not just a ‘file-share’ job with NAS & object storage in it, and you can’t mix these workloads within a single job. Object storage backup to tape is also supported.
I still have a lot of questions about this, with regards to whether the proxies will be the normal ‘Agent’ proxies, and especially around any design considerations this will cause us to think about, but I think it’s a promising feature.
The conversation then transitioned to unstructured data more generally, and we heard about the upcoming NAS integration & file share improvements:
These all require utilising the storage integration to work
Finishing this section, we also saw some tape announcements, namely DFS backup in file-to-tape, and support for IBM 3592, also known as Jaguar tape. A key point to note is that you can’t mix Jaguar/IBM 3592 tapes with LTO tapes within the same media pool.
Whew, onto the last session of the day. VEEAM RECOVERY ORCHESTRATOR 7!!! 🥳
Emilee Tellez & Alec (the) King are on stage to take us through the new edition of VRO and what this means for us.
The question was asked, ‘What about AWS?’, and nope, AWS restoration still isn’t supported yet. I personally hope that we’ll start to see VRO orchestrate recoveries of the Veeam Backup for Public Cloud products. Whilst ‘cross-cloud’ could be a pain, the ability to orchestrate and automate even a ‘within hyperscaler’ recovery would be a great starting point. We advocate having a tenant silo between production and backup data, but this means there’s a huge amount of resources to build during a DR from those backups.
Really great recaps.
Enjoyed these Michael. Fills in the gaps of what I missed!
No mention of the highlight of the summit …… the toast???!?!? pfft
I was too busy sampling the buffet to get a picture! Someone please share it!