Most S3 offerings allow you to encrypt uploaded data with customer-provided keys (SSE-C).
By sending the x-amz-server-side-encryption-customer-algorithm header, a client can have objects encrypted inside your buckets. Although AES-256 is used as the encryption standard here as well, mind that this is not the same encryption we should always have in place when targeting an S3 bucket with our backups: with VBR, encryption is already done at the job level.
An attacker could thus encrypt your backup data a second time, with a key they define.
The problem: this customer-provided key is only processed during encryption and is never stored inside the S3 stack. Only a salted HMAC of the key is kept, which is not sufficient for decryption.
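To illustrate the mechanism, here is a minimal sketch of the headers an SSE-C upload carries: the raw 256-bit key itself travels with the request, accompanied by an MD5 digest for transit integrity, and S3 retains only a salted HMAC of the key afterwards. The request itself is omitted; with boto3 the equivalent parameters would be SSECustomerAlgorithm and SSECustomerKey on put_object.

```python
import base64
import hashlib
import os

# Attacker-chosen AES-256 key; S3 uses it for encryption but never stores it.
key = os.urandom(32)

# Headers sent alongside a PutObject request for SSE-C:
headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5":
        base64.b64encode(hashlib.md5(key).digest()).decode(),
}

# e.g. with boto3: s3.put_object(Bucket=..., Key=..., Body=...,
#                                SSECustomerAlgorithm="AES256",
#                                SSECustomerKey=key)
print(headers["x-amz-server-side-encryption-customer-algorithm"])  # → AES256
```

Without the exact same key on a later GetObject call, the data stays unreadable; the stored HMAC only lets S3 verify a presented key, not recover it.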
There is already ransomware by the name of Codefinger that leverages exactly this mechanism. An attacker with enough time can even circumvent immutability via Object Lock, simply by waiting until the retention on locked objects expires and encrypting those as well.
So, what can we take away from this to mitigate the risk to our repositories?
- Don’t expose access keys and secrets for your buckets. Trivial, but very important.
- Limit access to the buckets used for your backups to the public IPs of your S3 gateway servers only (bucket policy/ACL). See example for Wasabi.
- Restrict the usage of SSE-C. See example for AWS:
{
    "Version": "2012-10-17",
    "Id": "PutObjectPolicy",
    "Statement": [
        {
            "Sid": "RestrictSSECObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-s3-demo-bucket/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption-customer-algorithm": "false"
                }
            }
        }
    ]
}
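The IP restriction mentioned above can be sketched as a bucket policy as well. Bucket name and gateway IP below are placeholders, and this assumes Wasabi's policy grammar matches AWS S3 (which it largely does; verify in the Wasabi console):

```json
{
    "Version": "2012-10-17",
    "Id": "GatewayIpOnlyPolicy",
    "Statement": [
        {
            "Sid": "DenyAllExceptGatewayIPs",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-s3-demo-bucket",
                "arn:aws:s3:::my-s3-demo-bucket/*"
            ],
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "198.51.100.10/32"
                }
            }
        }
    ]
}
```

All requests not originating from the listed gateway IP are denied, so even leaked access keys cannot be used from an attacker's machine.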
A feature request to Veeam would be to validate these settings, or even to detect malicious activity inside your buckets.
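Until such a feature exists, one way to spot SSE-C activity yourself is to scan S3 data events in CloudTrail, which report the applied encryption in additionalEventData (the field name SSEApplied is an assumption worth verifying against your log format). A minimal sketch with synthetic records:

```python
def find_ssec_uploads(records):
    """Return (bucket, key) pairs for PutObject calls that used SSE-C."""
    hits = []
    for r in records:
        if r.get("eventName") != "PutObject":
            continue
        # CloudTrail reports the encryption used per request; "SSE_C"
        # means a customer-provided key was involved (field name assumed).
        if r.get("additionalEventData", {}).get("SSEApplied") == "SSE_C":
            params = r.get("requestParameters", {})
            hits.append((params.get("bucketName"), params.get("key")))
    return hits

# Synthetic CloudTrail-style records for demonstration:
sample = [
    {"eventName": "PutObject",
     "additionalEventData": {"SSEApplied": "SSE_C"},
     "requestParameters": {"bucketName": "my-s3-demo-bucket",
                           "key": "backup.vbk"}},
    {"eventName": "PutObject",
     "additionalEventData": {"SSEApplied": "Default_SSE_S3"},
     "requestParameters": {"bucketName": "my-s3-demo-bucket",
                           "key": "ok.vbk"}},
]

print(find_ssec_uploads(sample))  # → [('my-s3-demo-bucket', 'backup.vbk')]
```

Any hit on backup objects that your own jobs did not trigger is a strong indicator of compromise.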
For the time being: heads up!