Object Storage: Secure your buckets

  • 29 November 2022
  • 5 comments
  • 796 views


Introduction

With backup direct to object storage being one of the most anticipated features in v12, and with the ever-increasing number of cyber threats, it is paramount to secure access to our buckets.

In this series of posts, I will explore v12's direct to object capabilities and offer some suggestions to batten down the hatches.

Note: For this post, I will use Wasabi and MinIO as my object stores of choice. While some slight variations are expected, keep in mind that the same concepts should apply to any S3-compatible target.
 

Restrict bucket access with a simple User IAM policy

You can find the Amazon S3 Object Storage Permissions in the User Guide for VMware vSphere, and kb3151 highlights the required steps.

Let's review what that looks like in Wasabi's console.

 

For this example, I will configure a simple S3 bucket without immutability.

 

Step 1: Create the policy as described in kb3151

This policy grants access to the <<orossisecurebucket01>> bucket to the identity it is associated with.
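As a rough sketch of what such a user policy looks like (refer to kb3151 for the authoritative list of actions; the actions below are a plausible subset, not the exact KB contents):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetBucketVersioning"
      ],
      "Resource": "arn:aws:s3:::orossisecurebucket01"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::orossisecurebucket01/*"
    }
  ]
}
```

Note the two resource forms: the bare bucket ARN for bucket-level actions, and the /* suffix for object-level actions.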

Step 2: Create a user and attach the policy

Step 3: Use that user's access and secret keys when declaring the repository.

Note that while this is a good first step to secure access to the repository, it does not prevent another identity from accessing that bucket.

 

Tighten bucket access with a bucket policy

In addition to the User policy, we can add a bucket policy restricting access to just that specified user.

 

Note that you must specify an aws:userid. Wasabi calls it an account ID.

 

Let's break down that policy a bit.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::orossisecurebucket04",
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "10********25",
            "YB*****************6Y",
            "YB*****************6Y/*"
          ]
        }
      }
    }
  ]
}

 

In plain English, the above means:

Anyone ("AWS": "*") is denied access ("Effect": "Deny") except ("StringNotLike") the specified users and whatever sessions they may spawn (/*).

 

The only user allowed in that bucket is <<veeamsvcsec04>> in this example. To add other users, you would need to add their user IDs to the "aws:userid" array.
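For example, allowing a second, hypothetical user (the ZC... entries below are made-up placeholders, masked like the originals) would mean extending the condition like this, remembering the /* entry for the sessions that user may spawn:

```json
"StringNotLike": {
  "aws:userid": [
    "10********25",
    "YB*****************6Y",
    "YB*****************6Y/*",
    "ZC*****************7Z",
    "ZC*****************7Z/*"
  ]
}
```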

 

Restricting access by source IP

Another way to further restrict access to a given bucket is to specify source IP(s).

Note that you must be extremely cautious when doing so, especially if your ISP does not provide you with a static IP.

You also need to enumerate all systems requiring access.

Note that conditions within a single statement are ANDed together, so the IP restriction and the user restriction belong in separate Deny statements; otherwise the policy would only deny unlisted users coming from the listed IP, while leaving every other IP open.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::orossisecurebucket05",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "aa.b.ccc.dd"
        }
      }
    },
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::orossisecurebucket05",
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "10********25",
            "YB*****************6Y",
            "YB*****************6Y/*"
          ]
        }
      }
    }
  ]
}
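When several systems (for example a gateway server and a mount server) need access, aws:SourceIp also accepts an array of addresses or CIDR ranges. A hypothetical condition fragment, using RFC 5737 documentation addresses as placeholders:

```json
"aws:SourceIp": [
  "203.0.113.10",
  "198.51.100.0/24"
]
```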

 

When do bucket policies become overkill or overly complex to manage? That is really for you to decide!

Here are a few things to consider when adding SourceIP(s) for bucket access restrictions:

  • Remember that you can specify Direct Access or Gateway Servers
  • The mount server's access is really important, and it is also possible to set up a helper appliance that offloads transform and health check operations.

Note: For public cloud providers supporting compute, typically the helper appliance would be a cloud instance … we will cover that topic in another post!
 

 

What about securing “on-premises” Object Stores?

In the example below, I am using MinIO.

The concept is exactly the same:

  • Create a user
  • Attach a policy to the user that only grants access to the desired bucket.
  • Declare the MinIO object storage repository using that user's access and secret keys

 

Note that for MinIO, the Resource wildcards need to be fully qualified with the service name, in this case s3.
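As an illustrative sketch (the bucket name mybucket and the action list are placeholder assumptions, not the exact policy from my lab), a MinIO user policy with fully qualified resources could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
```

The key detail is that each Resource is spelled out as a full arn:aws:s3::: ARN rather than a bare * or mybucket/*.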

 

Summary

In this post, we demonstrated how to apply the kb3151 recommendations to secure bucket access.

We also explored how to further tighten security by restricting bucket access with a bucket policy.

We showed that similar concepts apply to most S3-compatible targets.

In future blog posts, we will discuss Gateway Server, Mount Server, and helper appliance bucket access.

 

I hope you enjoyed this post and found it useful. Thank you for reading.


5 comments

Userlevel 7
Badge +20

Really like this article as it also covers Wasabi which I use in my homelab. Great post. 👍

Userlevel 7
Badge +6

Thank you for this… I posted a while back (linked below) about restricting access to buckets when hosting in Wasabi as a multi-tenant environment (such as for service providers). I hadn't had much luck restricting access while still allowing the ListAllMyBuckets command, but I was only using user policies and hadn't dipped into bucket policies. It looks like this might work great, or some combination thereof, and I'm happy to see the syntax for restricting by IP address. This is immensely helpful since I don't speak S3 very well.

 

 

Userlevel 7
Badge +14

Limiting access to certain IPs is a must in my opinion; I also suggested that as a feedback for the existing KB article. Thanks for posting this @olivier.rossi 

Userlevel 7
Badge +7

Nice post. I already use the user policy; I'll take a closer look at the bucket policy. The limitation by IP could be a good choice for environments with a high security level!

Userlevel 7
Badge +8

Security by design FTW :)
