I need to export my Kasten data... but where? Enter the Minio Operator!



Many of you will know by now that saving your backups locally is a bad idea. In fact, you might even say a backup is only half-baked if it exists only at the source. For full protection you need another copy, an export. In Kasten, exports are essential as well.

But what if you are just starting out and don’t want to pay $$$ for AWS S3 or Azure? Creating a small-scale S3 setup is not hard, but what if you need multiple instances and multi-tenancy?

In this case the Minio operator is your friend. 

To install the Minio Operator and its CRDs you can leverage the Krew plugin manager: https://krew.sigs.k8s.io/

The install command can be performed in one big gulp:

(
set -x; cd "$(mktemp -d)" &&
OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
KREW="krew-${OS}_${ARCH}" &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
tar zxvf "${KREW}.tar.gz" &&
./"${KREW}" install krew
)
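If the detection logic in that one-liner looks opaque, the OS/ARCH lines can be run on their own to see which Krew release archive it will fetch. A minimal sketch, using the same commands as the installer above:

```shell
# Same detection the Krew installer uses: lowercase the OS name
# and normalize the machine architecture to Krew's naming scheme.
OS="$(uname | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"

# e.g. krew-linux_amd64 on an x86_64 Linux host,
# krew-darwin_arm64 on an Apple Silicon Mac.
echo "krew-${OS}_${ARCH}"
```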

Next update your path:

 

export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

Now we can update Krew:

 

kubectl krew update

 

For storage I have attached four raw VMware hard disks to my K3S nodes.

 

Minio has its own CSI driver, DirectPV, that can be easily leveraged and is also installed with Krew:

 

kubectl krew install directpv

kubectl directpv install

kubectl directpv info

After the last command, DirectPV will display my drives.

 

 

We can now look and see what drives are in the system:

kubectl directpv drives ls

Time to format the drives and hook them into directpv:

kubectl directpv drives format --drives /dev/sdb --nodes=k3s-worker1 --nodes=k3s-worker2 --nodes=k3s-worker3

 

To expose the service later on I will also deploy the MetalLB load balancer. In my K3S setup I disabled Klipper and Traefik, since I found MetalLB simpler to deploy:

 

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.10/config/manifests/metallb-native.yaml

Create a ConfigMap configmap.yaml with some external IPs that MetalLB will issue to your LoadBalancer-type services. Here is an example, but you will need to use external IPs that apply to your network:

 

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
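The example pool spans .240 through .250, i.e. 11 assignable addresses. If you ever want to sanity-check how many services a range like this can serve, a quick shell sketch (assuming the range stays within one /24, as here):

```shell
# Count the addresses in a MetalLB-style range (single /24 assumed).
RANGE="192.168.1.240-192.168.1.250"
START="${RANGE%-*}"   # 192.168.1.240
END="${RANGE#*-}"     # 192.168.1.250

# Difference of the last octets, inclusive of both endpoints.
COUNT=$(( ${END##*.} - ${START##*.} + 1 ))
echo "$COUNT"   # 11
```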

 

kubectl apply -f configmap.yaml
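One caveat: the second manifest above deploys MetalLB v0.13, and starting with v0.13 MetalLB dropped ConfigMap-based configuration in favor of CRDs. If the ConfigMap approach does not take effect on your install, the equivalent custom resources look roughly like this (the resource names are placeholders, the address range matches the example above):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```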

Now we are ready to deploy the Minio Operator:

 

kubectl krew install minio


Next we will initialize Minio:

kubectl minio init



 

Now, to log in to our instance, all we need is this command:

kubectl minio proxy

It will not only return the URL that we need but also the token, which we can simply copy and paste into the web login:

 

Head over to a browser and open it up on port 9090:
 

After pasting the token in, you get a very intuitive and straightforward interface.

 

 

Creating a tenant is very easy, and after choosing what you want you should be presented with this screen, which has the address for the API endpoint and the tenant console:

 

 

When finalizing creation, Minio will present you with the credentials, which you should save right away:

 

 

The tenant can use these to log in to the console that was visible in the screenshot above:
 

 

The tenant will then be let into their segregated area, where they can administer their S3 storage:

 

 

 

WAIT A MINUTE!! That bucket is named K10?!

 

You guessed it: this is the destination for my K10 exports!

 

Let's go take a look. I will use kubectx to quickly change contexts on my Mac and wander into the source K8S cluster, where Kasten is busy doing its protection thing!

There is Kasten:

 

Let's check out the dashboard:

 

There it is: minio, one of my Location profiles for export.
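For reference, a K10 location profile like this can also be created declaratively instead of through the dashboard. This is only a sketch based on K10's Profile CRD; the endpoint IP, bucket name, and secret name below are placeholders you would replace with your tenant's values:

```yaml
apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: minio
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    objectStore:
      objectStoreType: S3
      name: k10                       # bucket inside the Minio tenant
      endpoint: https://192.168.1.242 # tenant API endpoint (placeholder)
      skipSSLVerify: true             # needed for a self-signed tenant cert
      region: us-east-1
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-minio-secret        # holds the tenant access/secret keys
        namespace: kasten-io
```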

 

If we go back to the Minio tenant console, we can even drill down and look at the data:

 

 

Interesting. I wonder what that Kopia is?

 

That will be a topic for another blog coming up in the future.

 

That is all for now!

I think I will go back to my Minio admin console and create a separate tenant for the capacity tier of my scale-out Veeam repository.

Hey, while I am at it, I can turn on Object Lock.

Now that I think of it, I will also create a tenant for my Veeam O365 backup repository 🙂.

As you can see, once you have the Minio Operator up and running, you get addicted to it very quickly!


3 comments


Thanks for sharing. Need to get up to speed on Kasten now so will spend lots of time in this part of the community.


Thank you for your post.

Need to catchup with Kasten, too 😎👍🏼


This is amazing content Geoff! Thanks for posting here.
