Veeam 100 Summit 2023 - Day 2



We start the day with @kirststoner12, Roman Kuksov, and @jorge.delacruz 

 

 

First up is Jorge talking us through the Veeam ONE Threat centre.

Jorge talks us through the data platform scorecard, made up of the following four contributing components:

  • Platform Security Compliance - This is driven by the VBR Security & Compliance Analyser (formerly known as the Best Practice Analyser)
  • Data Recovery Health - This score looks at backups marked as suspicious/infected; you should hopefully see 100% health when you first deploy this
  • Data Protection Status - This score is based on the percentage of protected workloads vs unprotected
  • Backup Immutability Status - This score is based on the number of workloads that are not compliant with the immutability target.

Within each of these widget sections are calls to action linking to a related report that provides drill-down information on the appropriate topic.

Each of these widgets has include/exclude options, for example:

  • Data Recovery Health can be set to include or exclude specific backup repositories
  • Data Protection Status can filter to specific workload types such as VMs, Computers, Unstructured Data, Cloud Instances, and Enterprise Applications, and additionally measures workloads against a globally defined RPO before they count as ‘protected’, ensuring that a six-month-old backup doesn’t qualify as ‘protected’ (see the sketch after this list)
  • Backup Immutability Status - This is very flexible; it includes not only the ability to filter resources, but also to define a minimum immutability retention policy to align the infrastructure against.
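
To make the RPO-driven scoring concrete, here’s a minimal sketch of how a ‘protected within RPO’ percentage could be derived from per-workload restore point timestamps. This is an illustration in Python, not Veeam’s implementation; the workload names and the 24-hour RPO are placeholders.

```python
from datetime import datetime, timedelta

# Hypothetical inventory: workload -> timestamp of its newest restore point (None = never backed up).
last_restore_point = {
    "vm-web-01": datetime(2023, 10, 3, 22, 15),
    "vm-sql-01": datetime(2023, 10, 3, 23, 5),
    "nas-share-01": datetime(2023, 4, 1, 2, 0),   # ~6 months stale, should not count as protected
    "vm-dev-07": None,                            # never protected
}

def data_protection_status(inventory, rpo, now):
    """Percentage of workloads whose newest restore point falls within the global RPO."""
    protected = sum(1 for ts in inventory.values() if ts is not None and now - ts <= rpo)
    return 100.0 * protected / len(inventory)

score = data_protection_status(last_restore_point, rpo=timedelta(hours=24), now=datetime(2023, 10, 4, 9, 0))
print(f"Data Protection Status: {score:.0f}%")  # 50% - only two of four workloads are inside the 24h RPO
```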

There is also a world map: you can assign locations to backup repositories via a geo-search for cities across the globe!

 

The Threat Centre also contains an RPO Anomalies widget showing up to the top 30 RPO anomalies within your organisation, and an SLA Compliance heatmap tracking per-workload backup success vs failure against a definable SLA target. The heatmap can be set to reflect up to the past 180 days of SLA compliance.
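
As a rough illustration of the maths behind such a heatmap (again a Python sketch, not Veeam’s implementation; the session data and 95% target are made up), each workload/day cell counts as compliant if at least one backup of that workload succeeded on that day, and the overall percentage is compared against the SLA target.

```python
from collections import defaultdict

# Hypothetical per-workload backup session results keyed by calendar day.
sessions = [
    ("vm-web-01", "2023-10-01", "success"),
    ("vm-web-01", "2023-10-02", "failure"),
    ("vm-sql-01", "2023-10-01", "success"),
    ("vm-sql-01", "2023-10-02", "success"),
]

def sla_compliance(sessions, sla_target=95.0):
    """Return (achieved %, target met?) where a workload/day cell is compliant
    if at least one backup of that workload succeeded on that day."""
    cells = defaultdict(bool)
    for workload, day, result in sessions:
        cells[(workload, day)] |= (result == "success")
    achieved = 100.0 * sum(cells.values()) / len(cells)
    return achieved, achieved >= sla_target

achieved, met = sla_compliance(sessions)
print(f"SLA achieved: {achieved:.1f}% (95% target met: {met})")  # 75.0%, target not met
```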

 

Veeam ONE Client

There are new reports for unstructured data compliance (NAS + Object Storage); you can look at compliance against RPO, but also compliance against protected data percentages. For example, a backup job that succeeds but only protects 3% of the files isn’t really a successful backup job.

 

Veeam Malware Detection

Over to Roman: we’ve got multiple malware alarms appearing within Veeam ONE, such as:

  • Veeam Malware Detection Activity State
    • This rule raises an alarm if malware detection is disabled within VBR
  • Potential malware in backups
    • Tracks objects marked as infected, suspicious, and even those marked as clean within the alarm. So even if something is marked as clean, Veeam ONE will still highlight it.
  • Potential infrastructure malware activity
    • Monitors infrastructure activity of infected workloads and provides the ability, either automatically or by approval, to disable the VM network, run scripts, or switch the VM network, for example to an isolated network
  • Veeam malware detection exclusions change tracking
    • Tracks if any exclusions are defined for malware detection
  • Malware detection change tracking
    • Tracks if any changes have been made to Veeam settings for malware detection such as decreasing sensitivity to malware detection entropy

Another really cool feature is the ability to perform actions against your production environment based on alerts; examples provided were disconnecting the VM from the network, migrating the VM’s network to an isolated network, or running a script.
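
As a rough sketch of that alarm-to-action pattern (illustrative Python only; the alarm fields, action names, and approval flag are my assumptions, not Veeam ONE’s actual API):

```python
# Map an alarm type to a response action; names and fields are illustrative assumptions.
RESPONSE_ACTIONS = {
    "potential_malware_in_backup": "disconnect_vm_network",
    "potential_infrastructure_malware_activity": "switch_vm_to_isolated_network",
}

def handle_alarm(alarm: dict, require_approval: bool = True) -> str:
    """Decide what to do with an alarm: nothing, queue it for approval, or execute the action."""
    action = RESPONSE_ACTIONS.get(alarm["type"])
    if action is None:
        return "no automated response configured"
    if require_approval and not alarm.get("approved", False):
        return f"pending approval: {action} on {alarm['vm']}"
    # In a real environment this is where the remediation (or a custom script) would run.
    return f"executing {action} on {alarm['vm']}"

print(handle_alarm({"type": "potential_malware_in_backup", "vm": "vm-web-01"}))
print(handle_alarm({"type": "potential_malware_in_backup", "vm": "vm-web-01", "approved": True}))
```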

Security & Compliance

Roman is up next, talking about each security & compliance alarm and the flexibility of control over the rules if you need to exclude specific best practice rules (please only do this if you have a good, valid reason!).

The backup security & compliance reports are pretty cool: if you’ve got multiple VBR instances you can choose to group your reports by rule or per VBR server!

Enhanced Alarm Lifecycle

Over to Kirsten now, and we’re hearing about the new ability to output alarms to ServiceNow and/or syslog. Setup is extremely easy: we can add this via the server settings menu, with two new sections added to the hierarchy, one for each. Alarms won’t be pushed out to syslog/ServiceNow by default, ensuring that only the alerts you desire are forwarded. Communication with ServiceNow is two-way, meaning that resolving cases within ServiceNow can resolve the Veeam ONE alarms too.
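
Syslog forwarding itself is standards-based, so as a quick taste of what a forwarded alarm could look like, here’s a minimal Python sketch using the standard library’s SysLogHandler; the collector address and message text are placeholders, and the real message format is defined by Veeam ONE, not by this sketch.

```python
import logging
from logging.handlers import SysLogHandler

# Send an alarm-style message to a syslog collector over UDP; the collector address is a placeholder.
handler = SysLogHandler(address=("syslog.example.local", 514))
logger = logging.getLogger("veeam-one-alarms")
logger.setLevel(logging.WARNING)
logger.addHandler(handler)

# Only alarms you explicitly enable are forwarded; this mimics one such notification.
logger.warning("Alarm 'Potential malware in backups' fired on vm-web-01 (state: suspicious)")
```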

 




Next up we have a presentation by Veeam focused on Cirrus, with @Rick Vanover, @martynh and Tim Hudson talking about the Cirrus by Veeam product offering.

 

An interesting but welcome discovery is that Cirrus by Veeam utilises the same binaries and features available in VB365, whether you roll your own deployment or leverage a service provider. They’ve just leveraged a lot of the API functionality to achieve brilliant results, such as truly granular RBAC.

 

This was a demo heavy session, but seeing that you could create a new tenant and be backing up in under 10 minutes was a fantastic experience to see.

 

During the Q&A I asked what size scales we should expect for customers that wish to adopt Cirrus; as it’s multi-tenant, would there be such a thing as ‘too big’? I was assured that if VB365 can handle it, Cirrus can handle it. A bold statement that I look forward to seeing proven.


Next up we jump over to an all-star cast of presenters to discuss what’s new in VB365 v8!

Polina Vasileva, Benedikt Däumling, @Rin, and @MikeResseler had a lot of content to go through, much of it not repeatable, so this will be a quick summary post. What I can share is that we’re going to see a massive architectural simplification in v8, and the scalability is going to dramatically increase.

We’re going to see a shift from SQLite towards PostgreSQL, and this will replace the concept of a local cache for object storage repositories, a welcome addition. I personally see this being a good time to also look into PostgreSQL high availability options, ensuring that your highly available object storage can be met with a highly available database.
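
Not a Veeam-specific configuration, but as a sketch of one piece of the HA puzzle: libpq-style multi-host connection strings give a client automatic failover to whichever node is currently the primary. The hostnames, database, and credentials below are placeholders, and the server-side replication design (streaming replication, Patroni, etc.) is still up to you.

```python
import psycopg2

# libpq multi-host connection string: the client tries each listed host in order and,
# with target_session_attrs=read-write, only accepts the node that is currently the primary.
# Hostnames, database name, and credentials are placeholders.
conn = psycopg2.connect(
    "host=pg-node1,pg-node2 port=5432 dbname=vb365 user=vb365svc password=changeme "
    "target_session_attrs=read-write"
)
print(conn.info.host)  # the node this session actually connected to
```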

For those that love Linux, we’re going to see the addition of Linux proxies (only for object storage!) with support for RHEL & Ubuntu in v8. Again, if you’re using object storage you can also shift to leveraging a proxy pool; this is an exciting improvement as jobs can persist across backup proxies going into maintenance mode, and VB365 will automagically load balance the workloads between the proxies. This will certainly allow VB365 to scale to higher user & object counts. It doesn’t tackle the underlying issue of M365 throttling by Microsoft, but it’s great to see VB365 scaling well!


Another excellent summary Michael. 👍


Great job with these recaps, @MicoolPaul !


After a lunch break we’re staying in the SaaS world with a discussion on Veeam Backup for Salesforce, with Andrey Zhelezko & Maxim Ivanov taking to the stage.

 

 

There isn’t much I can share here, as a lot of the content about vNext is restricted, and a good chunk of the session focused on how the product works, the architecture, and what features have been released to date. All of that is captured in the Veeam User Guides and Help Center, so there’s not much point reinventing the wheel. But I did discover a few interesting things:

  • Veeam Backup for Salesforce supports Salesforce OpenID + MFA now in v2, meaning the number of authentication options dramatically expands
  • Veeam Backup for Salesforce can be installed on Ubuntu as of v2
  • Veeam Backup for Salesforce can be deployed on-prem or in AWS/Azure/GCP, vs Salesforce’s native backup tool which can only be deployed on AWS. It’s also a dramatic difference from a lot of SaaS backup offerings that have no ‘roll your own’ option.

Just like Day One, I didn’t get a chance to finish writing up some of my Day Two content, so here we go!

After Veeam Backup for Salesforce, we saw Hannes take to the stage to show some extra VBR v12.1 content that had to be cut due to time constraints on day one. There were a few key topics here:

Object Storage (as a Backup Target)

Veeam intend to improve the health checks of object storage on the capacity tier by checking for the existence of blocks rather than their content; given the levels of data durability offered by object storage, this is cheaper and faster.
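
To picture the difference between the two health-check styles in plain S3 API terms (an illustrative Python sketch; the bucket and key names are placeholders, and this isn’t how VBR issues the calls internally):

```python
import hashlib
import boto3

s3 = boto3.client("s3")
bucket, key = "backup-bucket", "blocks/abc123"  # placeholder names

# Cheap check: does the block still exist? A metadata-only HEAD request, no data transferred.
s3.head_object(Bucket=bucket, Key=key)

# Expensive check: download the block and verify its content, paying for the full GET.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
print(hashlib.sha256(body).hexdigest())
```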

We’ll also see improvements to object storage deletions and to rescan/import scenarios. This is critically useful when you’re building a new VBR installation, for example during a DR, and you need to import the object storage account.

We’ll see the following new Object Storage types supported:

  • Google Coldline Storage: 90-day minimum retention, adds retrieval fees, same concept as AWS S3 “Infrequent Access”
  • Azure Cold Tier: An ‘online’ (not tape) archive tier with a minimum 90-day retention.
  • On-prem archive tier support (via SOSAPI-compatible systems): This provides cost savings with tape; PoINT is confirmed for launch, Quantum & SpectraLogic TBC.

Azure will also see object storage integration improvements by supporting Azure AD / Entra ID authentication, meaning an Entra ID account / application registration can be utilised as the authentication method for object storage. Account + Shared Key support will remain.
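
For a feel of the difference from the client side (an illustrative Python sketch using the azure-identity and azure-storage-blob SDKs, with a placeholder storage account; this isn’t how VBR wires it up internally):

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account_url = "https://mystorageaccount.blob.core.windows.net"  # placeholder account

# New option: Entra ID authentication (app registration, managed identity, etc.)
entra_client = BlobServiceClient(account_url, credential=DefaultAzureCredential())

# Existing option that remains supported: the storage account's shared key
shared_key_client = BlobServiceClient(account_url, credential="<storage-account-key>")

for container in entra_client.list_containers():
    print(container.name)
```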

 

PowerShell/REST API

There’s a lot of added support within PowerShell to interact with the new features. It’s also been highlighted that some cmdlets are deprecated, as they referred to things such as ‘NAS Server’, which is now ‘Unstructured Server’.

We also saw Veeam recommit to aiming for REST API feature parity with the UI & PowerShell.
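
As a hedged sketch of what driving VBR over REST looks like (Python with requests; the port, API version header, and endpoint paths are my assumptions for a v12-era build and should be checked against the REST API reference):

```python
import requests

base = "https://vbr.example.local:9419"   # assumed default REST API port - verify for your build
headers = {"x-api-version": "1.1-rev0"}   # assumed API version header - check the REST API reference

# Obtain a bearer token with the password grant, then query an inventory endpoint.
token = requests.post(
    f"{base}/api/oauth2/token",
    headers=headers,
    data={"grant_type": "password", "username": "svc-rest", "password": "..."},
    verify=False,  # lab only - use a trusted certificate in production
).json()["access_token"]

jobs = requests.get(
    f"{base}/api/v1/jobs",
    headers={**headers, "Authorization": f"Bearer {token}"},
    verify=False,
)
print(jobs.json())
```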

 

Veeam Agent for Linux (x86)

Finally, Hannes talked about what’s new in the Veeam Agent for Linux (x86).

A key addition is the ability to leverage certificate-based authentication, enabling sign-in without credentials. Veeam have also progressed their ‘experimental support’ for the Veeam Agent for Linux without the kernel module; this is now fully supported.

Without the kernel module, changed block tracking isn’t supported, and this mode must currently be deployed by choosing the “no-snap” packages and manually deploying them to the required servers. This mandates that the Veeam Agents being installed run either in standalone mode or within the pre-installed agents protection group.


Great recaps Michael.


These are really nice summaries. Love them. @MicoolPaul you are the best always.


 

Following on from Hannes and a quick break, we saw @Rick Vanover, @Viperian and @ddomask all sharing the stage to give an incredibly useful session on ransomware, from multiple angles. I’ll start off by saying that @ddomask’s content was event exclusive so I won’t comment further on that, but personally a massive thank you to him for delivering that session as it was fantastic 👏

Rick set the scene by talking about his top 10 worst practices being seen in the field, and how to flip these into a top 10 best practices.

  1. Immutability - No surprises here, immutable backups are one of the best defences available against the loss of your backups in a cyber attack
  2. Encryption Password Not Forgotten - Ensure there isn’t a reliance on the survivability of VBR or VBEM during a cyber incident; these servers could also be attacked, and in that scenario it is essential to have the backup encryption password
  3. Restore testing - Being aware of the different ways to restore data in different scenarios is one of the best ways to achieve the lowest recovery times, with the minimum of data loss.
  4. Performant Backup Storage - Rick focused on recovery performance, asking an important question ‘what happens if you need to restore EVERYTHING?’ How would your backup infrastructure cope?
  5. Explicit Credentials - At a minimum, backup repositories should have explicit credentials to minimise the possibility of access; the more restricted the permission scope per access account the better, instead of having a single ‘God mode’ service account
  6. Readiness for clouds - Being sure of how to recover to the cloud ‘properly’ was the justification for this point. Whilst it’s trivial to perform a restore to Azure/AWS/GCP, what do security and networking look like in this scenario, for example? The goal is to avoid your DR response creating additional security headaches.
  7. Veeam ONE - An interesting recommendation that I haven’t heard before: deploy Veeam ONE as a standalone machine, think of it as ‘on an island’, and let it be left alone handling events and actions.
  8. MFA - Rick goes one step further here than just MFA on the VBR console: you should also look at MFA to gain access to the server(s) themselves. Once you’re on a VBR server, it’s still possible to do a lot of damage without gaining VBR console access.
  9. Have a plan - Exactly what it says on the tin: failing to plan is planning to fail. Even if your plan doesn’t cover the exact scenario you find yourself in, the ability to leverage decisions already made provides a great template to align yourself against.
  10. Everyone gets training & threat intelligence - Educating and empowering users is important, and ensuring that your IT team gets threat intelligence is also incredibly useful.

With the list out of the way, we then pivoted to the extremely experienced ‘explorer’, Mr @Viperian!

Edwin hit us with some hard truths straight away: “Hackers don’t hack, they log in”. Edwin also put some useful spins on common phrases, such as how IT Security needs to be right every time while hackers only need to win once, and highlighting how, once a hacker is within your network, the scenario flips: the hacker needs to be stealthy constantly, otherwise IT Security will notice and kick them out.

Edwin then laid out what an attack scenario can look like, echoing a point I’ve long made, ‘ransomware doesn’t just appear, someone has to put it there’.

Edwin provided a high-level view of the sheer volume of hacking tools readily available to anyone with a slight interest in the topic; it was overwhelming!

Edwin wrapped up his segment discussing the importance of some of the new features in v12.1 such as YARA rules and automated malware & content scans.
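
To give a flavour of what a YARA rule looks like (a deliberately generic, illustrative rule compiled with the yara-python module, not one shipped with or endorsed by Veeam):

```python
import yara

# A deliberately simple rule - real ransomware rules match far richer indicators.
RULE = r"""
rule Demo_RansomNote
{
    strings:
        $note = "your files have been encrypted" nocase
        $ext  = ".locked"
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE)
matches = rules.match(data=b"ALL YOUR FILES HAVE BEEN ENCRYPTED ... pay in BTC to recover")
print(matches)  # [Demo_RansomNote] when the sample text matches
```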

 

After this we moved onto some event exclusive content for the rest of the day, and this brings my coverage of Day two to a close!


I’ll start off by saying that @ddomask’s content was event exclusive so I won’t comment further on that, but personally a massive thank you to him for delivering that session as it was fantastic 👏

Glad you enjoyed it @MicoolPaul! I appreciate the kind words, but I was just the messenger; big credit to our Critical Incidents Team (SWAT) for helping to prepare it, and special thanks to Sergey Denichenko (Critical Incidents Team Manager) for their review and assistance!

Hopefully if everyone follows Rick’s advice and uses all the new features in v12 and v12.1 for ransomware protection/detection, nothing I talked about will be needed ;)
