Solved

Free Starter License maximum nodes decreased


Hey all, 

A small question: I have seen in the release notes of v5.0 that the free Starter license we are using will be decreased from 10 to 5 nodes. Will this also affect the Starter license enabled in my K10 deployment? I am afraid that I won't be able to back up my 6-node cluster anymore after that upgrade.

Thanks a lot. 

 

EDIT: It seems like Veeam K10 doesn't count master nodes that can't schedule pods. Is this right?

Best answer by dk-do

That's right. In v5, the maximum number of nodes for the free license is 5 instead of 10.

Master nodes are not counted.


11 comments

JMeixner
  • Veeam Vanguard
  • 2650 comments
  • June 7, 2022

I think that as soon as you upgrade to V5, you are allowed to back up 5 nodes only…

With the upgrade to VBR 11, the amount of data per instance for NAS backups was automatically adjusted from 250 to 500 GB. So I am afraid this will be the case with this change, too...


dk-do
  • Comes here often
  • 14 comments
  • Answer
  • June 7, 2022

That's right. In v5, the maximum number of nodes for the free license is 5 instead of 10.

Master nodes are not counted.
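
If you just want a quick look at how many nodes that leaves, you can list everything that does not carry the usual control-plane label (assuming your distribution sets node-role.kubernetes.io/control-plane on the masters; this is only a convenient way to count schedulable workers, not necessarily the exact check K10 performs):

# lists nodes that do NOT have the control-plane label (label name assumed)
kubectl get nodes -l '!node-role.kubernetes.io/control-plane'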


  • Author
  • Comes here often
  • 19 comments
  • June 7, 2022

Thanks for your responses, @JMeixner and @dk-do!

Is there any reference in the docs I may have missed that gives more detail about the licensing and the counting of nodes?

Thank you!


dk-do
  • Comes here often
  • 14 comments
  • June 7, 2022

Here you can find the info that the “forever-free” version includes 5 nodes.

For licensing details you can use the form for the enterprise trial:

 

https://www.kasten.io/free-kubernetes


JMeixner
  • Veeam Vanguard
  • 2650 comments
  • June 7, 2022

The 5-node limit of the Starter version is mentioned on the site @dk-do linked.
But I cannot find a reference for not counting the master nodes…


dk-do
  • Comes here often
  • 14 comments
  • June 7, 2022

The master or control-plane nodes do not hold any data to back up, and they cannot be scheduled to run pods/containers. So those are not counted.
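
You can see the NoSchedule taint that enforces this on a given control-plane node with a plain kubectl call (read-only, nothing K10-specific; <master-node-name> is a placeholder):

# shows the taints set on one node, e.g. node-role.kubernetes.io/master:NoSchedule
kubectl describe node <master-node-name> | grep Taints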


JMeixner
  • Veeam Vanguard
  • 2650 comments
  • June 7, 2022

Yes, the logic is clear, and I was fairly sure it is handled this way.
But a reference in the documentation about this would be nice. Perhaps it can be added in the future 😎


  • Author
  • Comes here often
  • 19 comments
  • June 7, 2022

Thanks a lot for your help; I'll mark this as solved. I would also appreciate an entry in the documentation.


str-group
  • Not a newbie anymore
  • 3 comments
  • January 8, 2023

Does anyone know what the license count specifically looks for when identifying masters vs. workers? We have an issue where our masters are being identified as workers and are thus consuming our license count.

We (well... rke2) tag the following on our masters:

node-role.kubernetes.io/etcd=true
node-role.kubernetes.io/master=true
node-role.kubernetes.io/control-plane=true
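
To show what the nodes actually report, these read-only kubectl commands list every node's labels and taints (standard kubectl only; this doesn't tell you which field the license counter reads):

# all nodes with their full label sets
kubectl get nodes --show-labels

# all nodes with the keys of any taints they carry
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'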

 


  • Not a newbie anymore
  • 1 comment
  • February 21, 2023
str-group wrote:

Does anyone know what the license count specifically looks for when identifying masters vs. workers? We have an issue where our masters are being identified as workers and are thus consuming our license count.

We (well... rke2) tag the following on our masters:

node-role.kubernetes.io/etcd=true
node-role.kubernetes.io/master=true
node-role.kubernetes.io/control-plane=true

 

I want to know this too. Kasten is consuming our license for infra nodes, and infra nodes aren't worker nodes.


str-group wrote:

Does anyone know what the license count specifically looks for when identifying masters vs. workers? We have an issue where our masters are being identified as workers and are thus consuming our license count.

We (well... rke2) tag the following on our masters:

node-role.kubernetes.io/etcd=true
node-role.kubernetes.io/master=true
node-role.kubernetes.io/control-plane=true

 

I have the same issue. With RKE2, my 3 masters are counted toward the license.

 

edit → Solution:

# temporarily mark the node unschedulable and evict its workloads
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets

# taint the node so untolerated pods can no longer be scheduled on it
kubectl taint nodes <node-name> node-role.kubernetes.io/master:NoSchedule

# make the node schedulable again; the taint keeps regular workloads off it
kubectl uncordon <node-name>
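
To double-check afterwards that the taint is in place (read-only, <node-name> as above):

# prints the node's taint list as JSON
kubectl get node <node-name> -o jsonpath='{.spec.taints}'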

