
Hi,

 

I wanted to try to bring some of the magic of the Veeam 100 Summit to the Community Hub, as so much of Veeam’s success comes from the community!

 

I’ll be blogging here each day bringing some highlights from the summit.

 

Hope you enjoy it!

 

 

And GREAT to see you here this year @MicoolPaul !


Day 0 kicked off with an amazing Kasten K10 training session by none other than @michaelcade.

 

I was fortunate enough to have worked through the labs provided in a prior training session with Kasten, so I instead took the time to improve my own notes on kubectl & Kasten’s capabilities.

Michael Cade made some great points, such as how we never got rid of ‘physical’ servers when we mass-adopted virtualisation, so why do we keep thinking of Kubernetes as Containers vs VMs, instead of Containers & VMs? I truly agree this is the future we’ll see. We still use physical firewalls for performance purposes even though they’ve been available virtually for many years, so why wouldn’t the same be true here: we’ll architect solutions based on the best technology available at the time, whether container, physical, or VM.

Ignore the title header, it’s comparing the structures of a VM and a Container!

If you’re interested in learning more about Kasten, speak to your Veeam account manager or check out Kasten’s free training @ Free Kubernetes Training (kubecampus.io)


Then Day One kicked off this morning with the introduction to the V100 Summit.

 

Hearing Rick, Safiya, Madi, Nikola, Hannes and Fabian spend so much time talking about the community and their community-focused efforts was great. I’ve yet to meet another vendor that is so ‘hyper focused’ on the community and the feedback it provides!


 

When Fabian took to the stage we saw the announcement of the ‘Veeam Early Access Program’.

If you’re wondering what this is, in summary:

  • It’s access to the Veeam product management team, meeting with them to discuss KEY new features within future releases
  • This is a commitment for the duration of beta through to release
  • Will be limited to 2-3 customers per feature
  • Veeam have already piloted this!

Great to finally meet you this year, Michael. Going to do a similar thing myself each day.


Next we jump over to a word from some amazing sponsors, first up we had Object First!

 

 

One of the key takeaways for me was understanding the benefits of the communication between the Object First appliances and Veeam. By leveraging the SOS API and utilising Storage Access Controls, Veeam can be fast and smart with data placement. For example:

  • Caching Controls get exposed to Veeam
  • Large jobs get broken into smarter entities for smart placement
  • This also means that you don’t have a central/master node or load balancer to manage all backup traffic, instead the available capacity of each node is presented to Veeam, enabling the backup agents/proxies to write directly to each node

Also got some interesting metrics on their performance:

  • Up to 1GB/s backup performance per node
  • Restore speeds of up to 500-600MB/s per node
  • Up to 20 Instant Recovery sessions per node

The Object First appliance is hardened ‘out of the box’. What does this mean? Simply, it’s a hardened Linux OS with no root access, supporting S3 Object Lock running in compliance mode, plus MFA! Combined, these make data destruction a major headache for malicious attackers; it would probably be easier to break into the datacentre and physically destroy the device at this point!


Following Object First is Wasabi!

 

None other than Wasabi’s amazing @drews taking to the stage, which was exciting to see in itself!

 

Drew highlighted the ‘Wasabi’ advantages (yes, plural!). I think by now most people are aware that Wasabi doesn’t charge egress fees or for API calls, versus ‘the big three’ hyperscalers everyone tends to think of, but above and beyond this, Drew discussed what’s happened over at Wasabi in the past year, which was a welcome recap to have!

 

 

On features such as customised endpoint URLs, it was great to see Wasabi focused on not bleeding partners dry, making this a one-off fee to enable, with their Wasabi Account Control Manager (WACM) product being free!


These are really cool summaries for each session. Good job @MicoolPaul 


Hijacking this thread a little bit Sir @MicoolPaul 

 

There is a strong focus on security at this year’s Summit.

  • Inline Malware Detection - with version 12.1, Veeam can now scan backups for potential and real ransomware infections
  • YARA integration with SureBackup to ‘hunt’ for specific information - this could be malware infections, or PII
  • A new SureBackup operation without the need to set up an isolated SureBackup environment
  • On-demand backup scanning to find clean/infected VM backups
  • An Incident API to trigger an instant backup if a malware alert is raised

Additionally, the chance to speak to the actual developers working on this functionality is awesome.

 


Fresh from a break we join Dima P & Egor Y on the topic of Malware Detection!

 

This was a brilliant session discussing the many ways that Veeam can help in the detection of malicious attacks.

 

Firstly, there’s an online/inline scan that takes place while a backup is running. It utilises a tool called MAGIC to index blocks across the system, discover what has changed, and create a likelihood score of risk that is used to report suspicious restore points.

In addition to this, MAGIC can report on high-risk files, such as .onion links and ransom notes, as an immediate high-risk trigger. A great reminder was shared that .onion doesn’t just mean ransomware; it could also mean data exfiltration.

This feature comes at a CPU cost, with an expected 25-30% CPU utilisation uplift. The index used for restore point comparison is stored within the configuration database, instead of as a file on the VBR server that could be directly tampered with.

It was also highlighted that the first backup after enabling this feature will require a full read of the production system (even though only an incremental backup is created) as the index needs to be built.
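As a rough intuition for how scanning changed blocks can spot encryption in progress (this is purely an illustrative sketch, not Veeam’s actual MAGIC implementation): encrypted data is statistically close to random, so if a large share of changed blocks suddenly have very high Shannon entropy, that is suspicious.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Entropy in bits per byte: 0.0 for uniform data, approaching 8.0 for random data."""
    if not block:
        return 0.0
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicion_score(changed_blocks: list, threshold: float = 7.0) -> float:
    """Fraction of changed blocks that look encrypted (entropy above threshold)."""
    if not changed_blocks:
        return 0.0
    suspicious = sum(1 for b in changed_blocks if shannon_entropy(b) > threshold)
    return suspicious / len(changed_blocks)

# Plain text has low entropy; random bytes look like ciphertext.
text_block = b"quarterly report draft " * 200
random_block = os.urandom(4096)
print(suspicion_score([text_block, random_block]))
```

A real scanner layers many more signals on top (known ransom note patterns, file extension churn, etc.), but entropy of changed data is one of the classic building blocks.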

 

Next they discussed Guest Index Scanning, which looks for suspicious files, excess changes, or the loss of trusted file types as examples of post-backup scanning that can help identify an attack.

Guest Index scanning requires additional CPU & RAM on your backup infrastructure.

Guest Index scanning is supported for VMware & Hyper-V based Windows & Linux VMs, and Veeam Agent for Windows.

 

A shout out to @Viperian (Edwin) for some useful background on the question of ‘why’ you should do this vs an EDR/XDR tool. The key point he made was that those tools have milliseconds to make decisions, whereas Veeam can take its time performing reviews, allowing it to see a bigger picture. Both of these features are useful to help backup admins identify which backups are good before restoration.

 

This topic was followed up with the Incident API. This API allows third-party solutions to task Veeam with actions in certain scenarios, a key one being triggering a backup if ransomware is detected/suspected. Anton Gostev discussed this himself at the VeeamON Resiliency Summit, reminding people that although a backup might take a while after this instruction, a snapshot is created ‘instantly’, minimizing data loss.
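To make the idea concrete, here’s a sketch of the kind of event payload a third-party detection tool might hand to such an API. The field names and shape below are entirely hypothetical, invented for illustration; the real Incident API schema is defined in Veeam’s REST API documentation, not here.

```python
import json
from datetime import datetime, timezone

def build_incident_event(machine_fqdn: str, details: str) -> str:
    """Serialize a hypothetical malware-detection event for a backup server's incident endpoint."""
    event = {
        "detectionTimeUtc": datetime.now(timezone.utc).isoformat(),
        "machine": {"fqdn": machine_fqdn},      # which workload to snapshot/back up
        "details": details,                      # free-text description from the EDR tool
        "engine": "Example EDR",                 # hypothetical detection source name
    }
    return json.dumps(event)

body = build_incident_event("app01.corp.local", "Suspected ransomware encryption burst")
print(body)
```

The point is the workflow, not the schema: the detection tool notifies the backup server the moment it sees something, and the backup server reacts by snapshotting immediately.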

 

Then we moved onto the Secure Restore & SureBackup relationship. YARA rules are going to be supported in v12.1, and these will be run on the Mount Server. Veeam recommended that you utilise the ‘scan backup content with an antivirus software’ option until you know what you need a YARA rule to detect; then you can leverage the YARA scan for incident analysis/investigation.

 

Also in v12.1 we’ll see Veeam break the requirement to have a virtual lab to perform backup scanning for viruses, health checks, etc. This is most impressive when we see how Veeam handles ‘find my last clean backup’. Consider a scenario with 7 months of backups to check: Veeam checks the middle backup first, in what they called a ‘binary split’. That backup is either clean or not, and that immediately removes 50% of the backups to check: if it’s clean, all the points older than it are clean; if it’s not, all the points newer than it won’t be. The process repeats, efficiently cutting a large volume of backups down to a rapid verification effort.
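The ‘binary split’ they described is a classic binary search. A minimal sketch of the idea, assuming (as they do) a single infection point where everything older is clean and everything at or after it is infected:

```python
def last_clean_backup(restore_points, is_clean):
    """
    restore_points: list ordered oldest -> newest.
    is_clean(point) -> bool: the expensive scan (e.g. an antivirus/YARA mount scan).
    Assumes one infection point: clean before it, infected at/after it.
    Returns the index of the newest clean point, or -1 if none are clean.
    """
    lo, hi = 0, len(restore_points) - 1
    newest_clean = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_clean(restore_points[mid]):
            newest_clean = mid   # this point is clean, so look at newer points
            lo = mid + 1
        else:
            hi = mid - 1         # infected, so look at older points
    return newest_clean

points = list(range(210))        # roughly 7 months of daily backups
scans = []
def scan(p):
    scans.append(p)
    return p < 150               # pretend everything from point 150 onwards is infected

assert last_clean_backup(points, scan) == 149
assert len(scans) <= 8           # log2(210) ≈ 7.7 scans instead of 210
```

That last assertion is the whole payoff: each scan halves the search space, so 7 months of backups needs only around 8 scans rather than hundreds.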

 

Finally we get to malware events. Within the UI’s inventory view, when a malware detection event occurs, you can mark a suspicious/malicious backup as ‘clean’. I know from the Resiliency Summit that Veeam have a ‘Four Eyes Authorization’ mechanism coming in v12.1, so I asked: currently, marking as clean does not require this authorization, though I hope this changes in the future.


Then we stick with Egor & Dima for a bit longer, as we look into v12.1 security enhancements as a whole!

 

First up is key management server support!

Veeam have a few key goals here: they want to tackle the bad practice of low-entropy passwords, whether through lack of complexity or length, and the fact that most people don’t rotate backup encryption passwords frequently.

Veeam will integrate with KMS systems to resolve both of these issues, with strong keys that are automatically rotated. This integration uses the Key Management Interoperability Protocol, aka KMIP.

 

Veeam have always been focused on the recovery aspect, so it comes as no surprise that Veeam don’t require the same KMS server to present the keys for recovery, in the event of a complete disaster, any KMS server that has the private key can be used to recover the backup keys.

 

A key point to raise here is that Veeam still utilise their own backup encryption keys exactly as before, storing these in their configuration database; should this database be lost, you can then recover the keys from a KMS server. This also plays nicely with VBEM: if you utilise VBEM for key recovery, this mechanism can be leveraged before a KMS key recovery request.

 

Veeam are working with their alliance partners to validate readiness and supportability of features, so far Thales CipherTrust Manager, Fortanix Data Security Manager KMS and IBM Security Guardium Key Lifecycle Manager (GKLM) are supported, but as a minimum you will require a KMS server with KMIP version 1.2+. You can also leverage FQDN cluster names for resiliency across multiple nodes.

And I can’t overstate this: loss of the backup key and VBEM and KMS means you cannot recover your backups!

 

We then see a focus change to Four Eyes Authorization.

This feature requires a Socket-based Enterprise Plus license or a Veeam Universal License. You also require either a minimum of 2x admin accounts or an admin group to be defined; personally I would always rather define explicit accounts for this.

 

Once this is enabled, any backup deletions or changes to MFA (as examples) will require a second administrator to approve them prior to being actioned; if there isn’t a second approval within a specific time window, the pending request is automatically rejected and cleaned up. This feature will also send an email report of changes, as well as writing to Windows events. The emails are slightly delayed by 5-10 seconds so that if multiple sequential changes are actioned, a single cumulative change email is sent instead of spamming an email per action, a nice touch! I hope to see this feature extended in the future to include things such as deleting encryption keys, lowering retention, and changing encryption key passwords.
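A minimal sketch of the approval flow as described: a pending request that a second, different admin must approve within a window, otherwise it auto-rejects. The window length and state names here are illustrative, not Veeam’s actual values.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

APPROVAL_WINDOW = timedelta(hours=24)  # illustrative window, not Veeam's actual setting

@dataclass
class PendingRequest:
    action: str
    requested_by: str
    requested_at: datetime
    approved_by: Optional[str] = None

    def approve(self, approver: str, now: datetime) -> str:
        if now - self.requested_at > APPROVAL_WINDOW:
            return "rejected"   # window expired: auto-reject and clean up
        if approver == self.requested_by:
            return "denied"     # the requester cannot approve their own change
        self.approved_by = approver
        return "approved"

req = PendingRequest("delete backup", "alice", datetime(2023, 10, 1, 9, 0))
print(req.approve("alice", datetime(2023, 10, 1, 9, 5)))   # requester self-approval
print(req.approve("bob", datetime(2023, 10, 3, 9, 0)))     # too late, past the window
print(req.approve("bob", datetime(2023, 10, 1, 10, 0)))    # valid second approver in time
```

The self-approval check is the whole point of ‘four eyes’: a single compromised admin account can request destructive changes but can never approve them alone.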

 

Afterwards we see the Security & Compliance Analyser, formerly known by the short-lived name of ‘best practices analyser’.

There’s a dramatic uplift in the number of checks it performs, from 9 to 30* (as multiple independent checks are rolled into overall compliance checks that are related to each other), and it can now be scheduled to run daily with an email report to help capture deviations from best practices.

 

To close off this session, we now shift to syslog integration!

Out of the box, Veeam have already tested with Splunk, SolarWinds, Syslog-ng, PRTG, Rsyslog, and Nagios, and the implementation is RFC 5424 compliant. A lot of work has been done behind the scenes to improve the Windows events that mirror this and to ensure every event code is unique for better troubleshooting.

As you’d expect, syslog supports UDP, TCP, and TLS. Currently, though, it does not cache any events, even when using a protocol such as TCP or TLS that could detect the syslog server being unreachable, which I hope we’ll see fixed in the future!
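For reference, an RFC 5424 message has the shape `<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG`, where PRI is facility × 8 + severity. A minimal formatter just to show that shape; the hostname, app name, and message below are made up, and real emitters also populate structured data rather than using the nil value everywhere.

```python
from datetime import datetime, timezone
from typing import Optional

NIL = "-"  # RFC 5424 nil value for PROCID, MSGID and structured data

def rfc5424(facility: int, severity: int, app: str, msg: str,
            hostname: str = "vbr01", timestamp: Optional[datetime] = None) -> str:
    """Build a minimal RFC 5424 syslog line."""
    pri = facility * 8 + severity  # e.g. facility 1 (user), severity 4 (warning) -> PRI 12
    ts = (timestamp or datetime.now(timezone.utc)).isoformat()
    return f"<{pri}>1 {ts} {hostname} {app} {NIL} {NIL} {NIL} {msg}"

print(rfc5424(1, 4, "VeeamBackup", "Job finished with warning"))
```

Knowing this layout makes it much easier to write parsing rules on the Splunk/Rsyslog side, since the severity and facility are both packed into that leading PRI number.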


Next up, a fast paced session from @HannesK discussing immutability improvements!

This was a rapid fire session, covering some key points.

 

Firstly, something I’d never considered was defining different immutability policies for different extents within a SOBR, but it turns out you can. Not for much longer though! As of v12.1, the upgrade will block if the repositories in your SOBRs have differing immutability periods. I’m glad this is an upgrade block instead of a post-installation issue!

We’ll see Dell DataDomain Retention Lock Support via DDBoost, this will require compliance mode and automatic retention lock to be switched off.

HPE StoreOnce Catalyst Copy Immutability is now supported (Catalyst had immutability support in v12 but was missing the Catalyst Copy feature!)

Service Providers are going to become more flexible within the Object Lock functionality space, with support for Object Lock Governance Mode. As you can’t change this setting on your repositories once provisioned within VBR this will require transitioning to new repositories.

We’re also going to see immutability support for configuration backups IF YOU USE OBJECT STORAGE! We’ll see Hardened Repo support for this ‘in the future’.

And to round up the session, we’ll see time step detection reporting with the Hardened repository in v12.1

 


Changing gears we now have Petr Makarov discussing all the Enterprise Application Improvements (THERE ARE A LOT OF THESE!!!)

 

Here’s what we can expect from V12.1!

  • Db2 database support
    • Db2 Versions supported: 10.5, 11.1, 11.5
    • Standard & Advanced editions are required; community edition ‘may’ work but is untested/unsupported
    • Supports x86_64 & AIX processors
    • Circular logging can’t be used for Db2 log recovery, Archive logging must be configured on your Db2 instance.
    • Like the other Enterprise Plugins, you can’t leverage direct to object storage!
    • Immutability is supported, however it’s delayed by up to 24 hours to ensure that any archive logs that get appended can be done successfully before sealing the file.
    • Like other enterprise plugins such as MSSQL, no central management/rollout exists currently
  • SAP Hana on IBM Power will be supported!
    • This is a standalone plug-in, SAP Certified to run on IBM Power.
    • SLES is the only supported OS for this within VBR v12.1, but RHEL is on the roadmap
  • On a related note we’ll see a Veeam Explorer for SAP HANA in v12.1
    • This will support latest state, point-in-time restores, and even restoring to a different server
    • Requires a minimum of SAP HANA 2.0 SPS 02, this is a SAP limitation, not Veeam!
    • The communication between server and backup is going to be HTTP/HTTPS, but to leverage HTTPS you’ll require the installation of the SAP Common Crypto Library, available via SAP.
  • PostgreSQL will see Instant Recovery support
    • Latest State, Point-in-time, and Different server are all supported
    • Smart Switchover is another feature, this can be set to ‘auto’, ‘manual’, or ‘scheduled at:’
    • Instant recovery requires the same source & destination PostgreSQL version, and it is currently limited to the entire instance, instead of individual databases.
    • PostgreSQL is still not supported on Windows
  • PostgreSQL will also see Export functionality via the Veeam Explorer: you can export to a local server or to a Linux host, it supports native PostgreSQL compression, and it works at a database level. These are pg_dump files and can be restored with pg_restore.
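Since the exports are pg_dump archives, restoring one outside Veeam is standard PostgreSQL tooling. A sketch of the equivalent native commands (database names and paths are illustrative, and the Explorer drives all of this for you; this just shows what the underlying format implies):

```python
def dump_command(db: str, outfile: str, host: str = "localhost",
                 compress_level: int = 6) -> list:
    """Build a pg_dump invocation producing a compressed custom-format archive."""
    return [
        "pg_dump",
        "--host", host,
        "--format", "custom",               # custom format supports compression
        "--compress", str(compress_level),  # native PostgreSQL compression
        "--file", outfile,
        db,
    ]

def restore_command(db: str, dumpfile: str, host: str = "localhost") -> list:
    """Build the matching pg_restore invocation for a custom-format archive."""
    return ["pg_restore", "--host", host, "--dbname", db, dumpfile]

print(" ".join(dump_command("sales", "/backups/sales.dump")))
print(" ".join(restore_command("sales", "/backups/sales.dump")))
```

The custom format is what makes database-level, compressed exports possible; a plain SQL dump would lose both properties.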

Phew, that was a lot to take in, and it’s still going!


Great posting today @MicoolPaul 


Hannes is back on stage now to talk about all the amazing improvements to CDP!

First up, a topic very dear to my heart: SUREREPLICA SUPPORT FOR CDP! This will also support YARA, antivirus, CRC checks, etc. It should be noted that some limitations remain: no long-term restore points will be created during a SureReplica job run, and no retention processing will be actioned during this time. This leads us onto file-level recoveries: the same constraints apply, but with FLR you’ll be able to step through every restore point to perform an FLR against any of those low-RPO CDP restore points!

 

We’ll be able to protect more VMs than ever, with support for up to 7000 disks per VBR instance, up from 2000!

 

You can also utilise planned failover with CDP, ensuring zero data loss, the VM gets powered off before the failover, handy for power outages etc.

 

Another awesome feature is the ability to change the disk type used by the CDP target, so if you’re using a thick disk on your production (maybe your storage supports thin datastores with thick disks) but your CDP target doesn’t, you can set these to thin at the target and enjoy space savings! Just one example, it’s flexible either way!

Tim Smith also gave a good shout-out around CDP in general: avoid RPOs higher than 15 minutes, and definitely avoid anything over 30 minutes. Such RPOs cost more RAM, are more likely to result in caching to disk, and are heavier on the network & proxies; utilise replication instead if you want those RPOs.


Nice write-up of all Day 1 features @MicoolPaul .


Apologies for the pause in coverage for day one; I was writing these up in breaks between sessions and then didn’t get any further break time! Now I’m going to backfill this information.

 

After Hannes’ session on CDP we dived into AIX, Linux, MacOS, and Solaris agent news, with Hannes being joined by @rovshanpashayev.

AIX/MacOS/Solaris

We’re going to see a very welcome improvement on AIX & Solaris Agent management with the ability to deploy, update, and remove the agent. I don’t yet know if we’ll be able to get VBR to ‘adopt’ any existing Agents to take over server management, or whether we’d need to perform a reinstall with a management package, and what that would do to any existing backup chains.

Additionally AIX & Solaris jobs can be started & stopped via VBR, no need for cron to be leveraged for scheduling either as they now have their own built-in schedulers, and bare metal recovery has recovery token support.

AIX, MacOS, and Solaris will all get support for GFS on standalone centrally managed agents, though this will require active full backups. One important caveat: this is true in all scenarios except MacOS to object storage, where synthetic fulls are to be supported, if I understood the comment right.

 

Additionally, the following new features have been announced for AIX:

  • Faster backups through support for hardware accelerated CRC
  • Recovery media in OVA format, which is a requirement for IBM Cloud

Solaris gains support for reconnection after network outages, and ZFS compression support.

AIX & Solaris both gain the ability to exclude directories during a bare metal restore, and offer a simplified restore process via the local recovery console in these next releases.

 

Veeam Agents for Linux on IBM Power

Yep, it’s happening!

It’s going to get a great number of features out of the box, including:

  • Recovery
    • File-level Recovery; to original or other hosts (including cross-platform!!!)
    • Disk Publish
    • Bare Metal Restore
    • Volume level restore
  • Block-Level Backup:
    • Volume-based or entire machine
    • Only snapshot-based (LVM & BTRFS file systems)
  • File-Level Backup:
    • Snapshot-based or snapshot-less
  • Application Aware Processing:
    • Only via pre/post-scripts currently
  • Backup Targets:
    • VBR Repos, including dedupe appliances & object storage
    • Shared Folders
    • Local Storage
    • No Cloud Connect support, at least in this release

Protection groups are currently only supported via pre-installed agent, and will be limited to job status visibility. Indexing & search will be possible via Enterprise Manager.


Next we moved to a rapid fire VBR features session with Egor!

 

  • SureBackup:
    • Support for NSX-T
    • Support for VM exclusions in jobs
    • Support for Random machine testing, this is a value you set for X number of VMs to be tested per job run, this is completely random so it doesn’t factor in how recently a VM has been tested
  • VBR installation:
    • Can now sign into your Veeam account during the installation and fetch your license file during installation
    • VBR installation now supports an unattended installation with an XML-based answer file, Veeam also provide examples out of the box
    • Likewise you can perform unattended configuration restores with an XML-based answer file, and again Veeam provide examples out of the box
  • Veeam AI Assistant will exist within the VBR console, requires internet access, connects to a private AI instance hosted by Veeam, exclusively trained on Veeam documentation, has multiple language support and the entire conversation is context-aware
  • VBR supports ‘smart’ copying of data when the source & destination are on the same physical host & storage, it can leverage file system-based moves instead of copying all the data.
  • A new option exists when adding a VBR server into VONE to provide access to embedded dashboards, providing analytic data dashboards to VBR such as the new threat centre dashboard.
  • The auto delete retention setting is now a JSON, so you can create your own advanced retention settings, stored at %INSTALLPATH%\Veeam\Backup and Replication\Backup\Config\Retention\AutodeleteRetention.json
  • The files node is now hidden for non-admins in VBR
  • The backup properties window now has search functionality
  • The password length helper will inform when the password is too short

Enterprise Manager is going to see the following updates:

  • Managed by Agent policies will be visible
  • Improved jobs filter list
  • Support for unstructured data jobs, sessions, and restores
  • Clone job works with any edition

Following Egor was Dima, talking about the new Object Storage backup functionality. This doesn’t mean using Object Storage as a target, but as a source!

We’re going to see the file share section within inventory change to ‘Unstructured Data’, with two sub-sections for “File Shares” and “Object Storage” respectively. You simply select Unstructured Data > Add > Object Storage, specify whether it’s AWS S3, S3 Compatible, or Azure Blob, and then populate the wizard as normal.

Dima highlighted that this is a dedicated job type, it’s not just a ‘file-share’ job with NAS & Object Storage in, you can’t mix these workloads within a single job. It is also supported to perform Object Storage Backup to Tape.

 

I still have a lot of questions about this, with regards to whether the proxies will be the normal ‘Agent’ proxies, and especially around any design considerations this will cause us to think about, but I think it’s a promising feature.


The conversation then transitioned to a general unstructured data conversation, and we heard about the upcoming NAS integration & File Share improvements:

  • FlexGroup integration support
  • Isilon Clusters with SmartConnect
  • NetApp Clusters with Load Balancing

These all require utilising the storage integration to work

  • “Unstructured Data” File Level Recovery is now going to get a “Compare with production” feature, for both File-Share/NAS Backup & Object Storage
  • Include & Exclude Masks have been totally reworked, importing & exporting of masks is supported now and each mask can now have a context type, with examples of usage.

Finishing this section we also saw some tape announcements, namely DFS Backup in File to Tape, and support for IBM 3592 also known as Jaguar Tape. A key point to note is that you can’t mix Jaguar/IBM 3592 tapes with LTO tapes within the same media pool. 


Whew, onto the last session of the day. VEEAM RECOVERY ORCHESTRATOR 7!!! 🥳

 

Emilee Tellez & Alec (the) King are on stage to take us through the new edition of VRO and what this means for us.

  • Simplified deployment - This is a nice, clean, single installation media now. The Veeam Data Platform ISO is a single image containing VRO, VONE, and VBR & VBEM, with a single license file available to license all of these products.
  • On the topic of licensing, VONE is now a requirement for licensing as VONE Embedded doesn’t exist anymore! VRO connects to your ‘normal’ VONE. This does mean if you were licensed on VDP Foundation edition (no Veeam ONE), and were using VRO instance packs, this won’t work moving forwards, so speak to your Veeam partner/contact to discuss the upgrade path with Veeam.
  • All of the malware detection features we’ve seen within VBR are available within VRO, and these two products communicate with each other, if a restore point is suspicious or infected, whichever application discovers this, shares that information with the other. Important for ensuring that the newest ‘clean’ backup can be restored fast!
  • The Veeam Threats Dashboard is integrated into VRO.
  • Just like VBR, we’ve now got the ability to perform CDP plan testing, same caveats apply.
  • Support for custom script execution within Azure VM restores, leveraging the latest Azure APIs to initiate PowerShell within the restored VM.
  • Multiple UI Updates:
    • All references to Veeam Availability Orchestrator & Veeam Disaster Recovery Orchestrator are gone!!!
    • Improved scopes for RBAC and Inventory
    • Reporting has retention settings now and a single page to find reports/templates/subscriptions

 

The question was asked: ‘What about AWS?’ And nope, AWS restoration still isn’t supported. I personally hope that we’ll start to see VRO orchestrate recoveries of the Veeam Backup for Public Cloud products; whilst ‘cross-cloud’ could be a pain, the ability to orchestrate and automate even a ‘within hyperscaler’ recovery would be a great starting point. We advocate having a tenant silo between production and backup data, but this means there’s a huge amount of resources to build during a DR from those backups.


Really great recaps.


Enjoyed these Michael. Fills in the gaps of what I missed!


No mention of the highlight of the summit …… the toast???!?!? pfft


No mention of the highlight of the summit …… the toast???!?!? pfft

🤣 I was too busy sampling the buffet to get a picture! Someone please share it!

