Policies and Profiles are critical tools for data management within Veeam Kasten. Policies define the rules and behaviors for backup, recovery, and other data protection actions by specifying what actions to take, when, and under what conditions. Profiles define access to external resources, either by enabling direct integration with primary storage for orchestrating storage snapshots (Infrastructure Profiles) or by specifying secondary storage locations and credentials for exporting backup data (Location Profiles).
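To make this concrete, a Kasten Policy is defined as a Kubernetes custom resource. The sketch below shows the general shape of a policy that snapshots an application daily and exports the data to a Location Profile; the names (`daily-backup`, `my-app`, `s3-profile`) are placeholders, and exact field names may vary between Kasten versions, so verify against the current documentation:

```yaml
# Sketch of a Kasten Policy custom resource (field names may vary by version).
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: daily-backup          # placeholder name
  namespace: kasten-io
spec:
  frequency: '@daily'         # when the policy runs
  retention:
    daily: 7                  # keep 7 daily restore points
    weekly: 4                 # and 4 weekly ones
  actions:
    - action: backup          # local snapshot (RestorePoint)
    - action: export          # copy snapshot data to external storage
      exportParameters:
        frequency: '@daily'
        profile:
          name: s3-profile    # placeholder Location Profile name
          namespace: kasten-io
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: my-app   # protect this application namespace
```

The `actions` list is what separates fast local snapshots (`backup`) from true off-cluster backups (`export`).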
Picture This!
Your Kubernetes environment runs effortlessly, with a seamless data management system that works quietly in the background, ensuring every piece of critical information is protected and recoverable. Backups occur automatically without disrupting your operations, and data is always available precisely when and where you need it. Your team no longer has to worry about compliance gaps or recovery failures. Everything is set up to scale with your business needs, adapt to any changes, and provide peace of mind that your data is secure, resilient, and accessible at a moment's notice.
This is what an ideal Kasten deployment looks like: a system that works so well that you hardly notice it’s there.

The 3-2-1-1-0 rule is a best-practice framework promoted by Veeam for data protection that ensures reliability and resilience: keep at least three copies of your data, on two different types of media, with one copy offsite, one copy offline, air-gapped, or immutable, and zero errors confirmed by backup verification. It emphasizes maintaining multiple copies of data across different storage types and locations, incorporating immutability to safeguard against tampering, and verifying backups to eliminate errors. This creates a strong foundation for data security and recovery in any environment.
Understanding Storage in Kasten
To ensure a reliable and efficient data management system, it's essential to understand how storage is configured and utilized within Kasten deployments. Two key storage management components in Kasten are primary storage and location profile storage, each serving distinct but complementary purposes. Primary storage focuses on immediate, high-performance access to data, while location profile storage enables flexibility and scalability by defining where and how backups are stored.
Primary Storage
Primary storage in Kasten refers to the storage consumed by Kubernetes workloads, where active, real-time application data resides. Kasten integrates with this storage to perform snapshot operations, creating local RestorePoints that capture the application state at a specific moment. These snapshots allow for fast, near-instant restores directly within the cluster.
It’s important to note that snapshots are not backups. Snapshots provide a quick recovery option, while backups are created when snapshot data is exported to an external location defined by a Location Profile. This dual approach enables Kasten to support both fast local restores and more robust, off-cluster disaster recovery, ensuring operational continuity and data security.
Use Local Snapshots When:
- You need fast, near-instant recovery within the cluster, such as rolling back an application after a failed upgrade or an accidental deletion.
Use Exported Backups When:
- You need recovery that survives cluster or storage failures, offsite disaster recovery, or long-term retention.
Location Profile Storage
Location profile storage allows you to define where backups are stored, offering flexibility to use various storage targets, including cloud, on-premises, or hybrid solutions. This storage type is crucial for disaster recovery, long-term retention, and ensuring data is accessible even during infrastructure failures.
Key Features:
Immutability ensures that backup data cannot be changed or deleted once saved in location profile storage. This is done using features like write-once, read-many (WORM) policies provided by the storage platform. By enabling immutability, Kasten protects backups from threats like ransomware, accidental deletions, or unauthorized changes, ensuring the data stays secure and reliable for recovery when needed.
Primary and location profile storage complement each other in a Kasten deployment. While primary storage ensures speed and accessibility for immediate needs, location profile storage enhances resilience by providing redundancy and enabling offsite storage. Together, they support a comprehensive data protection strategy that balances performance, flexibility, and compliance.
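To illustrate, a Location Profile is also declared as a custom resource. The following is a sketch of an S3-backed profile, assuming a pre-created credential Secret named `k10-s3-secret`; the bucket name is a placeholder, and exact field names may differ between Kasten versions:

```yaml
# Sketch of a Kasten Location Profile pointing at an S3 bucket
# (field names may vary by Kasten version).
apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: s3-profile
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    objectStore:
      objectStoreType: S3
      name: my-backup-bucket     # placeholder bucket name
      region: us-east-1
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-s3-secret      # assumed pre-created credential Secret
        namespace: kasten-io
```

If the target bucket has object-lock (WORM) enabled, exported backups stored through such a profile gain the immutability protection described above.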
Securing Access and Managing Permissions
When deploying Veeam Kasten, ensuring secure and reliable access to the Kasten Dashboard is crucial. Kasten offers multiple Kubernetes-native methods to expose the dashboard, including kubectl port forwarding, LoadBalancer services, Ingress, and Routes (specific to OpenShift). Each provides a different balance of accessibility and security. Proper configuration of these access methods facilitates seamless management of your data protection workflows.
Kasten supports various authentication mechanisms, such as Basic Authentication, Token Authentication, Active Directory, OpenShift Authentication, and OpenID Connect (OIDC). These options enable integration with identity providers like Google or Microsoft Entra ID for secure and centralized user authentication. Authorization is managed through Role-Based Access Control (RBAC), which defines the actions users are permitted to perform. By assigning appropriate roles and permissions, RBAC ensures that only authorized users can interact with critical resources, adding a vital layer of security to your deployment.
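As one example, token authentication is typically enabled through the Kasten helm chart. The fragment below is a sketch of a values file; the value names reflect the chart at the time of writing and should be verified against the current Kasten documentation before use:

```yaml
# Sketch of Kasten helm values enabling token authentication
# (verify value names against the current chart before applying).
auth:
  tokenAuth:
    enabled: true   # dashboard logins use Kubernetes bearer tokens
```

With this enabled, a user signs in to the dashboard with a Kubernetes service account token, and RBAC then determines what that identity may do.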
- k10-admins: Provides full access to all Veeam Kasten features for administrators.
- k10-basic: Grants users permission to back up, restore, and view applications in namespaces they have access to.
- k10-config-view: Gives users a read-only view of Veeam Kasten configuration details (profiles, policies, etc.) on the dashboard.
In Kasten, roles are assigned through Kubernetes' RBAC to manage permissions and access effectively. ClusterRoles define permissions that apply across the entire cluster, granting access to cluster-wide resources. In situations that require more granular control, roles are limited to specific namespaces, allowing permissions to be applied only within a defined scope.
To associate these roles with users or groups, ClusterRoleBindings link ClusterRoles to users or groups for cluster-wide access, while RoleBindings connect roles to users or groups within a specific namespace. This structure provides flexibility and precision, ensuring the right level of access is granted to the appropriate users or groups based on their responsibilities.
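For example, granting the k10:admins group cluster-wide administrative access uses a standard Kubernetes ClusterRoleBinding; the binding name below is a placeholder:

```yaml
# Bind the k10-admin ClusterRole to the k10:admins group cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k10-admins-binding     # placeholder name
subjects:
  - kind: Group
    name: k10:admins           # Kasten's default admin group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: k10-admin
  apiGroup: rbac.authorization.k8s.io
```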
For more information on defined Veeam Kasten roles…
k10-admin:
- This role grants full access to everything Veeam Kasten can do.
- It's assigned to the k10:admins group by default.
- You can add users to this group for admin access.
k10-ns-admin:
- Provides access to secrets and config maps within the Veeam Kasten namespace.
- Assigned to the k10:admins group within the Veeam Kasten namespace by default.
k10-basic:
- Allows users to back up, restore, and view applications in assigned namespaces.
- Assigned to a RoleBinding within the relevant namespace(s).
k10-basic-config:
- Grants access to specific profiles or blueprints in the Veeam Kasten namespace.
- Assigned to a RoleBinding within the Veeam Kasten namespace.
k10-config-view:
- Allows users to view Veeam Kasten configuration details on the dashboard.
- Assigned to a ClusterRoleBinding by default (cluster-wide access).
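To illustrate the namespaced case, granting a single user k10-basic access to one application namespace can be sketched with a standard RoleBinding; the user and namespace names here are placeholders:

```yaml
# Grant user "jane" k10-basic permissions only in the "my-app" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k10-basic-jane       # placeholder binding name
  namespace: my-app          # the application namespace being scoped to
subjects:
  - kind: User
    name: jane               # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: k10-basic            # a ClusterRole referenced by a namespaced binding
  apiGroup: rbac.authorization.k8s.io
```

Referencing a ClusterRole from a RoleBinding is the standard Kubernetes pattern for reusing one role definition across many namespaces while keeping each grant namespace-scoped.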
Managing Resource Consumption and Concurrency
As an organization grows, Kasten’s operations, like backups, restores, and exports or imports, begin to compete for cluster resources. Without proper management, backup operations inevitably slow down. Sometimes, this causes backups to fail, leaving your systems unprotected. To avoid these risks, it is crucial to understand how Kasten uses CPU, memory, and storage so you can configure its resource consumption and concurrency settings appropriately.
Though the recommended resource requests and limits provided typically suit most clusters, actual requirements may vary based on factors like the scale of your cluster and applications, total data volume, file size distribution, and the rate of data changes. Tools like Prometheus or Kubernetes Vertical Pod Autoscaling (VPA) (with updates disabled) can help monitor and fine-tune specific resource needs for your organization.
Let’s review the requirement types below…
- Base Requirements: Core resources needed for Kasten’s internal scheduling and cleanup services. These requirements are mostly static and are influenced by monitoring and catalog scale, but they do not significantly grow with the number of applications or Kubernetes resources being protected.
- Disaster Recovery (DR): Resources used for compressing, deduplicating, encrypting, and transferring the Kasten catalog to object storage during DR operations. These requirements are dynamic, scaling down to zero when no DR tasks are running, and additional resources can speed up these processes.
- Backup Requirements: Resources required to transfer data from volume snapshots to object or NFS storage. These needs depend on data churn, file system layout, and total data volume, but they remain within a manageable range.
Managing resource consumption in Kasten is crucial for maintaining a stable and efficient Kubernetes environment. By following a structured approach, you can ensure that Kasten’s operations run smoothly without overloading your cluster or disrupting critical workloads. The following process will guide you through the key steps to optimize resource usage, from understanding requirements to scaling and monitoring effectively.
Step 1: Configure Resource Requests and Limits
Use Kubernetes to set resource requests (minimum resources guaranteed for a pod) and limits (maximum resources a pod can consume) for Kasten services and jobs. Proper configuration ensures that Kasten runs efficiently without monopolizing cluster resources, maintaining stability across all workloads.
How to Do It:
- Start with recommendations from the Kasten documentation for CPU and memory allocations.
- Adjust based on your cluster's available resources and operational needs.
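The mechanics are standard Kubernetes. As a generic illustration (the pod and container here are hypothetical; the actual per-service values for Kasten are set through its helm chart, as described in the Kasten documentation), requests and limits look like this:

```yaml
# Generic Kubernetes requests/limits; pod, container, and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: worker
      image: example/worker:latest   # placeholder image
      resources:
        requests:
          cpu: 100m        # guaranteed minimum: 0.1 CPU core
          memory: 800Mi    # guaranteed minimum memory
        limits:
          cpu: "1"         # hard ceiling: 1 CPU core
          memory: 1Gi      # container is OOM-killed above this
```

Requests drive scheduling (the pod only lands on a node with that much spare capacity), while limits cap runtime consumption, which is why setting both keeps Kasten from either starving or monopolizing the cluster.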
Step 2: Monitor Resource Usage
Continuously track Kasten’s resource consumption using monitoring tools like Prometheus, Grafana, or Kubernetes dashboards. Monitoring helps identify bottlenecks, avoid overuse, and optimize configurations over time.
What to Watch For:
- CPU and memory spikes during intensive operations like backups or restores.
- Persistent high usage that may indicate the need for adjustments.
Step 3: Scale Resources as Needed
Periodically reassess and adjust resource allocations to match the growth of your cluster, the number of applications, and the volume of data being protected. Scaling resources ensures Kasten keeps up with increasing demands without compromising efficiency or reliability.
Best Practice:
- Increase resource requests and limits gradually, testing performance after each adjustment.
Step 4: Optimize Concurrency
Concurrency refers to how many tasks Kasten can execute simultaneously. Higher concurrency can speed up operations but requires more cluster resources, while lower concurrency conserves resources at the cost of longer operation times. Configure the number of concurrent operations (e.g., backups, restores) Kasten can perform to balance speed and resource usage. Optimizing concurrency prevents excessive resource use and ensures stable operations.
How to Do It:
- Start with conservative settings and increase concurrency gradually based on cluster performance and available resources.
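In Kasten, concurrency is typically tuned through `limiter` settings in the helm chart. The fragment below is a sketch only; the exact value names and defaults change between versions, so verify them against the current chart before applying:

```yaml
# Sketch of Kasten helm values capping concurrent operations
# (value names and defaults may differ by version -- check the current chart).
limiter:
  csiSnapshots: 10           # max concurrent CSI snapshot operations
  genericVolumeCopies: 10    # max concurrent generic volume copy (export) operations
```

Raising these values shortens backup windows at the cost of more CPU, memory, and storage I/O consumed at once, which is exactly the speed-versus-stability trade-off described above.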
Modern Kubernetes environments require managing configurations declaratively to avoid inconsistencies or inefficiencies that can disrupt operations at scale. GitOps brings this to life by using Git as the single source of truth for configurations. In Kasten, GitOps allows users to automate and simplify the management of backup policies and recovery plans to ensure these settings are applied consistently across clusters. Integrating GitOps can help organizations streamline their operations and improve collaboration, all while maintaining complete control over their data protection workflows.
GitOps in Kubernetes and Kasten
GitOps integrates with Kasten to manage data protection workflows declaratively. In a GitOps-driven environment, all configurations, like backup policies, location profiles, and disaster recovery plans, are stored and versioned in a Git repository. This allows Kasten users to:
- Track Changes: Every change to a configuration is logged, making it easy to audit or roll back to previous versions if needed.
- Automate Deployment: Tools like ArgoCD or Flux synchronize the configurations from the Git repository to the Kubernetes cluster, automatically applying updates without manual intervention.
- Ensure Consistency: By maintaining a single source of truth in Git, GitOps ensures that the same policies and settings are consistently applied across all clusters.
By using Git as the single source of truth, GitOps eliminates the need for manual configuration, reduces errors, and ensures consistency across clusters, making it a game-changer for managing Kasten in Kubernetes. Organizations can streamline their workflows and improve scalability. Here’s why it matters…
Simplifies Configuration Management
Instead of manually applying backup policies, location profiles, and recovery plans across multiple clusters, users define these configurations once in Git. GitOps automates the application of these settings, ensuring they are implemented consistently and accurately.
Version Control
With Git as the single source of truth, all changes to configurations are versioned. This allows you to track updates, audit changes, and roll back to previous versions if a new configuration causes issues, ensuring operational reliability.
Collaboration and Transparency
Git provides a collaborative platform where team members can propose, review, and update configurations. This enhances teamwork and ensures everyone has visibility into the changes being made, reducing the chances of misconfigurations.
Scaling Across Clusters
In multi-cluster environments, manually maintaining consistency is time-consuming and error-prone. GitOps automates this process, applying the same configurations across all clusters, saving time and ensuring uniformity at scale.