Cloud Native Show: Aftermath!
This week
Here are just a few of the things we talked about.
We also mentioned the hardcore way to learn Kubernetes: Kelsey Hightower's Kubernetes the Hard Way.
What about AWS?
I also mentioned a surprise. I had recently stumbled upon LocalStack, an AWS-like environment for getting up to speed on your AWS skills. There is a paid edition and a free one as well. We will leverage the free edition to set up a little lab on Kubernetes (K3s).
Install K3s
curl -sfL https://get.k3s.io | sh -
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
# Test to make sure it is all working
kubectl get nodes
DNS Local setup
To use the Traefik ingress to access LocalStack, add a DNS record. In my case:
localstack.k3s01.lab1.local A 192.168.0.205
Or, if you already have k3s01.lab1.local configured, use a CNAME for testing: localstack.k3s01.lab1.local CNAME k3s01.lab1.local
You can also add an entry to /etc/hosts:
echo "192.168.0.205 localstack.k3s01.lab1.local" | sudo tee -a /etc/hosts
---
Next, we will create the LocalStack deployment in localstack.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: localstack-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: localstack
  labels:
    app: localstack
spec:
  ports:
    - port: 4566
      targetPort: 4566
      protocol: TCP
  selector:
    app: localstack
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: localstack
  labels:
    app: localstack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: localstack
  template:
    metadata:
      labels:
        app: localstack
    spec:
      volumes:
        - name: localstack-data
          persistentVolumeClaim:
            claimName: localstack-data
      containers:
        - name: localstack
          image: localstack/localstack:latest
          ports:
            - containerPort: 4566
          volumeMounts:
            - name: localstack-data
              mountPath: /var/lib/localstack
          env:
            - name: SERVICES
              value: "dynamodb,s3,sns,sqs,cloudformation,iam,logs"
            - name: DATA_DIR
              value: "/var/lib/localstack/data"
            - name: PERSISTENCE
              value: "1"
            - name: LOCALSTACK_HOSTNAME
              value: "localstack.k3s01.lab1.local"
            - name: EDGE_PORT
              value: "80"
            - name: DEBUG
              value: "1"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: localstack-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: localstack.k3s01.lab1.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: localstack
                port:
                  number: 4566

kubectl apply -f localstack.yaml
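Before testing, it's worth confirming the pod actually came up; a couple of standard kubectl checks:
# Wait for the deployment to finish rolling out
kubectl rollout status deployment/localstack
kubectl get pods -l app=localstack
# Tail the LocalStack logs if something looks off
kubectl logs -l app=localstack --tail=20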
Finally, we need to set a few environment variables:
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1
export AWS_ENDPOINT_URL=http://localstack.k3s01.lab1.local
If you don't want to do this manually every time, add them to your ~/.bashrc file:
cat >> ~/.bashrc << 'EOF'
# LocalStack AWS CLI configuration
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1
export AWS_ENDPOINT_URL=http://localstack.k3s01.lab1.local
EOF
source ~/.bashrc
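A quick way to confirm the CLI is picking these up (aws configure list reports where each value comes from):
# The access key and region should show "env" as their source
aws configure list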
Let the Fun begin!
Test that you can connect to LocalStack and that the services are running:
curl -s http://localstack.k3s01.lab1.local/_localstack/init | head -10
curl http://localstack.k3s01.lab1.local/_localstack/health
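If you script against this endpoint, it can help to wait until a service actually reports as up. A minimal sketch using the same health endpoint (LocalStack reports each service as available, running, or disabled):
# Poll until SQS reports "available" or "running"
until curl -s http://localstack.k3s01.lab1.local/_localstack/health \
  | jq -e '.services.sqs == "available" or .services.sqs == "running"' > /dev/null; do
  echo "Waiting for LocalStack..."
  sleep 2
done
echo "LocalStack is up"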
# check services:
curl http://localstack.k3s01.lab1.local/_localstack/health | jq '.services'
# For individual services:
curl http://localstack.k3s01.lab1.local/_localstack/health | jq '.services.dynamodb'
curl http://localstack.k3s01.lab1.local/_localstack/health | jq '.services.s3'
curl http://localstack.k3s01.lab1.local/_localstack/health | jq '.services.sns'
First stop: S3
What is S3?
Amazon S3 (Simple Storage Service) is object storage built to store and retrieve any amount of data from anywhere. S3 is commonly used for backup, archiving, data lakes, static websites, and content distribution.
# Create bucket
aws s3 mb s3://demo-bucket
# Create test files
echo "Hello LocalStack S3!" > test.txt
echo "File 1 content" > file1.txt
echo "File 2 content" > file2.txt
# Upload files
aws s3 cp test.txt s3://demo-bucket/
aws s3 cp file1.txt s3://demo-bucket/files/
aws s3 cp file2.txt s3://demo-bucket/files/
# List bucket contents
aws s3 ls s3://demo-bucket
aws s3 ls s3://demo-bucket/files/
# Download file
aws s3 cp s3://demo-bucket/test.txt downloaded.txt
cat downloaded.txt
# Create a folder structure
mkdir -p uploads/documents
echo "Document content" > uploads/documents/doc1.txt
echo "Document 2 content" > uploads/documents/doc2.txt
# Upload folder
aws s3 cp uploads/ s3://demo-bucket/uploads/ --recursive
# List with prefix
aws s3 ls s3://demo-bucket/uploads/documents/
# Set bucket versioning
aws s3api put-bucket-versioning \
--bucket demo-bucket \
--versioning-configuration Status=Enabled
# Upload new version of file
echo "Updated content" > test.txt
aws s3 cp test.txt s3://demo-bucket/
# List object versions
aws s3api list-object-versions --bucket demo-bucket --prefix test.txt
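As a small bonus, presigned URLs also work against LocalStack, since the signing happens client-side. A quick sketch using the bucket created above:
# Generate a presigned URL for the object, valid for 5 minutes
aws s3 presign s3://demo-bucket/test.txt --expires-in 300
# The returned URL can be fetched directly; the credentials are embedded in the query string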
DynamoDB
What is DynamoDB?
DynamoDB is AWS's fully managed NoSQL database service. It provides fast, predictable performance with great scalability.
# Create a table and add data:
# Create users table
aws dynamodb create-table \
--table-name users \
--attribute-definitions AttributeName=userId,AttributeType=S \
--key-schema AttributeName=userId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
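Optionally, you can confirm the table is ready before inserting data:
# Table status should be ACTIVE
aws dynamodb describe-table --table-name users --query 'Table.TableStatus'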
# Add a user
aws dynamodb put-item \
--table-name users \
--item '{"userId":{"S":"user1"},"name":{"S":"John Doe"},"email":{"S":"john@example.com"}}'
# Add another user
aws dynamodb put-item \
--table-name users \
--item '{"userId":{"S":"user2"},"name":{"S":"Jane Smith"},"email":{"S":"jane@example.com"}}'
# Add another user
aws dynamodb put-item \
--table-name users \
--item '{"userId":{"S":"user3"},"name":{"S":"Geoff Burke"},"email":{"S":"geoff.burke@supersite.com"}}'
# View all users
aws dynamodb scan --table-name users
# Get specific user
aws dynamodb get-item \
--table-name users \
--key '{"userId":{"S":"user3"}}'
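A scan reads the whole table, so for anything beyond toy data you would normally query against the key instead. A sketch using the same table:
# Query by partition key; unlike get-item, query also supports
# sort keys and key condition expressions
aws dynamodb query \
  --table-name users \
  --key-condition-expression "userId = :u" \
  --expression-attribute-values '{":u":{"S":"user3"}}'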
SNS
What is SNS?
Amazon SNS (Simple Notification Service) is a pub/sub messaging service that broadcasts messages to multiple subscribers simultaneously. Think of it like a radio station - when you publish a message, it goes to ALL subscribers at once. This is perfect for notifications, alerts, and event-driven architectures where multiple systems need to react to the same event.
# Create SNS topics for different types of notifications
aws sns create-topic --name user-events
aws sns create-topic --name order-events
aws sns create-topic --name system-alerts
# Create multiple subscribers (different types)
aws sqs create-queue --queue-name email-service
aws sqs create-queue --queue-name sms-service
aws sqs create-queue --queue-name analytics-service
aws sqs create-queue --queue-name audit-service
# Set up fan-out pattern - one topic, multiple subscribers
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:000000000000:user-events \
--protocol sqs \
--notification-endpoint arn:aws:sqs:us-east-1:000000000000:email-service
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:000000000000:user-events \
--protocol sqs \
--notification-endpoint arn:aws:sqs:us-east-1:000000000000:sms-service
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:000000000000:user-events \
--protocol sqs \
--notification-endpoint arn:aws:sqs:us-east-1:000000000000:analytics-service
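To confirm the fan-out wiring took effect, list the subscriptions on the topic:
# All three queues should show up as subscribers
aws sns list-subscriptions-by-topic \
  --topic-arn arn:aws:sns:us-east-1:000000000000:user-events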
# Publish one message - watch it go to ALL subscribers
aws sns publish \
--topic-arn arn:aws:sns:us-east-1:000000000000:user-events \
--message "User john@example.com signed up" \
--subject "New User Registration"
# Check that ALL services received the same message
echo "=== Email Service Received ==="
aws sqs receive-message --queue-url http://localstack.k3s01.lab1.local:4566/000000000000/email-service
echo "=== SMS Service Received ==="
aws sqs receive-message --queue-url http://localstack.k3s01.lab1.local:4566/000000000000/sms-service
echo "=== Analytics Service Received ==="
aws sqs receive-message --queue-url http://localstack.k3s01.lab1.local:4566/000000000000/analytics-service
# Publish structured message with attributes for filtering
aws sns publish \
--topic-arn arn:aws:sns:us-east-1:000000000000:user-events \
--message '{"event":"user_login","userId":"123","timestamp":"2025-08-03T10:00:00Z","location":"US"}' \
--message-attributes '{
"event_type":{"DataType":"String","StringValue":"login"},
"priority":{"DataType":"String","StringValue":"normal"},
"region":{"DataType":"String","StringValue":"US"}
}'
# Set up topic with subscription filters (only certain messages go to certain queues)
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:000000000000:order-events \
--protocol sqs \
--notification-endpoint arn:aws:sqs:us-east-1:000000000000:email-service \
--attributes '{"FilterPolicy":"{\"priority\":[\"high\"]}"}'
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:000000000000:order-events \
--protocol sqs \
--notification-endpoint arn:aws:sqs:us-east-1:000000000000:audit-service
# Publish high priority message (goes to email + audit)
aws sns publish \
--topic-arn arn:aws:sns:us-east-1:000000000000:order-events \
--message "Large order placed: $5000" \
--message-attributes '{"priority":{"DataType":"String","StringValue":"high"}}'
# Publish normal priority message (goes only to audit)
aws sns publish \
--topic-arn arn:aws:sns:us-east-1:000000000000:order-events \
--message "Regular order placed: $50" \
--message-attributes '{"priority":{"DataType":"String","StringValue":"normal"}}'
# Check filtered results
echo "=== Email Service (high priority only) ==="
aws sqs receive-message --queue-url http://localstack.k3s01.lab1.local:4566/000000000000/email-service
echo "=== Audit Service (all messages) ==="
aws sqs receive-message --queue-url http://localstack.k3s01.lab1.local:4566/000000000000/audit-service
SQS
What is Advanced SQS Queue Management?
While SNS broadcasts messages instantly to everyone, SQS is about reliable, persistent message processing. SQS queues store messages until they're successfully processed, handle failures gracefully with dead letter queues, support message delays, and ensure messages aren't lost. This demo focuses on production-ready queue patterns for building resilient systems.
# Create a FIFO queue for ordered processing
aws sqs create-queue \
--queue-name order-processing.fifo \
--attributes '{
"FifoQueue":"true",
"ContentBasedDeduplication":"true"
}'
# Create dead letter queue for failed messages
aws sqs create-queue --queue-name failed-orders-dlq
# Create main processing queue with DLQ configuration
aws sqs create-queue \
--queue-name order-processing \
--attributes '{
"VisibilityTimeout":"60",
"MessageRetentionPeriod":"1209600",
"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:failed-orders-dlq\",\"maxReceiveCount\":3}"
}'
# Create batch processing queue
aws sqs create-queue \
--queue-name batch-reports \
--attributes '{"ReceiveMessageWaitTimeSeconds":"20"}'
# Send messages with different processing patterns
# 1. FIFO Queue - messages processed in exact order
aws sqs send-message \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/order-processing.fifo \
--message-body '{"step":1,"action":"validate_payment","orderId":"ORD-001"}' \
--message-group-id "order-ORD-001"
aws sqs send-message \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/order-processing.fifo \
--message-body '{"step":2,"action":"reserve_inventory","orderId":"ORD-001"}' \
--message-group-id "order-ORD-001"
aws sqs send-message \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/order-processing.fifo \
--message-body '{"step":3,"action":"ship_order","orderId":"ORD-001"}' \
--message-group-id "order-ORD-001"
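You can verify the ordering guarantee by receiving from the FIFO queue; within a message group, messages come back in the order they were sent:
# Steps 1, 2 and 3 should come back in order
aws sqs receive-message \
  --queue-url http://localstack.k3s01.lab1.local:4566/000000000000/order-processing.fifo \
  --max-number-of-messages 10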
# 2. Delayed message processing
aws sqs send-message \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/order-processing \
--message-body '{"task":"send_followup_email","orderId":"ORD-002","delay":"24_hours"}' \
--delay-seconds 30
# 3. Batch send multiple messages
aws sqs send-message-batch \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/batch-reports \
--entries '[
{
"Id":"1",
"MessageBody":"{\"report\":\"daily_sales\",\"date\":\"2025-08-03\"}"
},
{
"Id":"2",
"MessageBody":"{\"report\":\"inventory_count\",\"date\":\"2025-08-03\"}"
},
{
"Id":"3",
"MessageBody":"{\"report\":\"customer_analytics\",\"date\":\"2025-08-03\"}"
}
]'
# 4. Process messages with proper error handling pattern
QUEUE_URL="http://localstack.k3s01.lab1.local:4566/000000000000/order-processing"
# Receive message
aws sqs receive-message \
--queue-url $QUEUE_URL \
--max-number-of-messages 1 \
--wait-time-seconds 5
# Simulate processing failure - message goes back to queue after visibility timeout
# (In real processing, you'd delete the message only after successful processing)
# Send a message that will eventually go to DLQ
aws sqs send-message \
--queue-url $QUEUE_URL \
--message-body '{"task":"problematic_task","orderId":"ORD-BAD","will":"fail"}'
# Simulate receiving and failing to process 3 times
for i in {1..4}; do
echo "=== Attempt $i ==="
RECEIPT=$(aws sqs receive-message --queue-url $QUEUE_URL --query 'Messages[0].ReceiptHandle' --output text)
if [ "$RECEIPT" != "None" ] && [ "$RECEIPT" != "" ]; then
echo "Received message, simulating processing failure..."
# Don't delete the message - simulates processing failure
sleep 2
else
echo "No message received (may have moved to DLQ)"
fi
done
# Check if message moved to dead letter queue
echo "=== Checking Dead Letter Queue ==="
aws sqs receive-message \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/failed-orders-dlq
# 5. Long polling for efficient message retrieval
echo "=== Long Polling Example ==="
aws sqs receive-message \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/batch-reports \
--max-number-of-messages 10 \
--wait-time-seconds 20
# 6. Check queue attributes and metrics
echo "=== Queue Metrics ==="
aws sqs get-queue-attributes \
--queue-url $QUEUE_URL \
--attribute-names All
echo "=== FIFO Queue Metrics ==="
aws sqs get-queue-attributes \
--queue-url http://localstack.k3s01.lab1.local:4566/000000000000/order-processing.fifo \
--attribute-names ApproximateNumberOfMessages ApproximateNumberOfMessagesNotVisible
# 7. Purge queue (careful in production!)
aws sqs purge-queue --queue-url $QUEUE_URL
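When you're done experimenting, you can delete the demo queues to keep the lab clean; a sketch for the queues created above:
# delete-queue is irreversible, so double-check the names
for q in order-processing order-processing.fifo failed-orders-dlq batch-reports; do
  aws sqs delete-queue \
    --queue-url "http://localstack.k3s01.lab1.local:4566/000000000000/$q"
done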
I think that was enough for today. I must admit this post took me a long time to write, as I am quite new to the subject. In the next post I will try to tackle CloudFormation, and perhaps combine several LocalStack services to work together.
While this local demo environment is in no way a replacement for the real deal, it makes a great learning tool.
