Many thanks to Márk Sági-Kazár for the instructions below.
## Prerequisites
- Kubernetes cluster (I have one running on AWS)

Note: the Kubernetes cluster should be large enough to run all dependencies (including Elasticsearch).
## Prepare a bucket
On AWS, you can create a bucket using the AWS CLI:

```shell
# Use your bucket name
aws s3 mb s3://my-curiefense-test
```
Create a new user for Curiefense:

```shell
aws iam create-user --user-name my-curiefense-test
```
Create new credentials for the user:

```shell
aws iam create-access-key --user-name my-curiefense-test
```

Take note of the `AccessKeyId` and `SecretAccessKey` fields.
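The `create-access-key` call prints the credentials as JSON. A minimal sketch of pulling the two fields out of that output; the JSON literal below is a fake sample with the same shape as the real response, not actual credentials:

```shell
# Fake sample of the JSON shape returned by `aws iam create-access-key`:
CREDS_JSON='{"AccessKey":{"UserName":"my-curiefense-test","AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"wJalrExampleSecret","Status":"Active"}}'

# Extract the two fields with python3 (avoids a jq dependency):
ACCESS_KEY_ID=$(printf '%s' "$CREDS_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["AccessKey"]["AccessKeyId"])')
SECRET_ACCESS_KEY=$(printf '%s' "$CREDS_JSON" | python3 -c 'import json,sys; print(json.load(sys.stdin)["AccessKey"]["SecretAccessKey"])')

echo "$ACCESS_KEY_ID"
echo "$SECRET_ACCESS_KEY"
```

Alternatively, the AWS CLI can return just these fields directly with `--query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text`.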
Create a `policy.json` file with the following content:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Sid0",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-curiefense-test/*"
        },
        {
            "Sid": "Sid1",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::my-curiefense-test"
        },
        {
            "Sid": "Sid2",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "*"
        }
    ]
}
```
Attach the policy to the user:

```shell
aws iam put-user-policy --user-name my-curiefense-test --policy-name CuriefenseS3Bucket --policy-document file://policy.json
```

Note: Do NOT use the above in production. Use IAM roles for service accounts instead.
## Create a Curiefense namespace
Due to some limitations (syslog config in the nginx image, etc.), every component has to be installed in the same namespace.

Create a `curiefense` namespace:

```shell
kubectl create namespace curiefense
```
## Install the Ingress Controller
Create a `curiesync-secret.yaml` file with the following content:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: curiesync
stringData:
  curiesync.env: |
    export CURIE_BUCKET_LINK=s3://my-curiefense-test/prod/manifest.json
    export CURIE_S3_ACCESS_KEY=YOUR_ACCESS_KEY_ID
    export CURIE_S3_SECRET_KEY=YOUR_SECRET_ACCESS_KEY
```
Create the secret:

```shell
kubectl -n curiefense apply -f curiesync-secret.yaml
```
Create a `values.ingress.yaml` file with the following content:

```yaml
controller:
  image:
    repository: curiefense/curiefense-nginx-ingress
    tag: e2bd0d43d9ecd7c6544a8457cf74ef1df85547c2
  volumes:
    - name: curiesync
      secret:
        secretName: curiesync
  volumeMounts:
    - name: curiesync
      mountPath: /etc/curiefense
```
If you don't already have the `nginx-stable` repo added to Helm, run the following commands:

```shell
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
```
Install the ingress controller:

```shell
# This particular chart version installs the latest supported curiefense nginx ingress image
helm -n curiefense install --version 0.9.3 -f values.ingress.yaml ingress nginx-stable/nginx-ingress
```
## Install Curiefense
Create an `s3cfg-secret.yaml` file with the following content:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3cfg
type: Opaque
stringData:
  s3cfg: |
    [default]
    access_key = YOUR_ACCESS_KEY_ID
    secret_key = YOUR_SECRET_ACCESS_KEY
```
Create the secret:

```shell
kubectl -n curiefense apply -f s3cfg-secret.yaml
```
Create a `values.curiefense.yaml` file with the following content:

```yaml
global:
  proxy:
    frontend: "nginx"
  settings:
    curieconf_manifest_url: "s3://my-curiefense-test/prod/manifest.json"
```
Clone the Curiefense Helm repository:

```shell
git clone git@github.com:curiefense/curiefense-helm.git
```
Install Curiefense:

```shell
helm install -n curiefense -f values.curiefense.yaml curiefense ./curiefense-helm/curiefense-helm/curiefense
```
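Before moving on, it can help to wait until the UI server is ready. A minimal sketch, assuming the chart creates a deployment named `uiserver` (the same name the port-forward below targets; adjust if your release names differ):

```shell
# Wait (up to 5 minutes) for the Curiefense UI server deployment to become available
kubectl -n curiefense wait --for=condition=available deployment/uiserver --timeout=300s

# Take a quick look at everything the chart created
kubectl -n curiefense get pods
```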
Open a port forward to the UI server and start hacking:

```shell
kubectl -n curiefense port-forward deploy/uiserver 8080:80
open http://localhost:8080
```
Make some changes, then head to the "Publish Changes" section and click "Publish configuration".
## Install echoserver (optional)
It's time to put Curiefense to the test.
Create an `echoserver.yaml` file with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  labels:
    app.kubernetes.io/part-of: "curiefense"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
        app.kubernetes.io/part-of: "curiefense"
    spec:
      containers:
        - image: gcr.io/google_containers/echoserver:1.10
          imagePullPolicy: IfNotPresent
          name: echoserver
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  labels:
    app: echoserver
    service: echoserver
    app.kubernetes.io/part-of: "curiefense"
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: echoserver
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  labels:
    app.kubernetes.io/part-of: "curiefense"
  annotations:
    nginx.org/location-snippets: |
      access_by_lua_block {
        local session = require "lua.session_nginx"
        session.inspect(ngx)
      }
      log_by_lua_block {
        local session = require "lua.session_nginx"
        session.log(ngx)
      }
spec:
  ingressClassName: nginx
  rules:
    - host: YOUR_HOST
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echoserver
                port:
                  number: 8080
```

Deploy the echoserver:

```shell
kubectl -n curiefense apply -f echoserver.yaml
```
Based on how you configured the ingress controller and DNS, you should be able to access the echoserver at the host of your choosing.
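One way to smoke-test this from the command line, assuming an AWS load balancer fronting the ingress. The service name `ingress-nginx-ingress` is an assumption derived from the Helm release name `ingress` used above and may differ in your cluster:

```shell
# Look up the ingress controller's load balancer hostname
# (service name is an assumption based on the "ingress" Helm release)
LB_HOST=$(kubectl -n curiefense get svc ingress-nginx-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Send a request with the Host header matching your Ingress rule
curl -s -H "Host: YOUR_HOST" "http://$LB_HOST/"
```

The echoserver should reply with details of the request it received; blocked requests will instead get Curiefense's deny response.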
## Cleanup
Be careful, these commands are destructive!
Once you are done, you can clean up the created resources from the cluster with the following commands:

```shell
kubectl -n curiefense delete -f echoserver.yaml
kubectl delete namespace curiefense
```
To delete all AWS resources:

```shell
aws iam delete-user-policy --user-name my-curiefense-test --policy-name CuriefenseS3Bucket
aws iam delete-access-key --user-name my-curiefense-test --access-key-id YOUR_ACCESS_KEY_ID
aws iam delete-user --user-name my-curiefense-test
aws s3 rb s3://my-curiefense-test --force
```
## Notes
- The Curiefense nginx ingress image should be updated to the latest version (to support the latest Ingress API)
- The Ingress needs to be deployed in the same namespace at the moment (in order to push logs to curielogger)
- Elasticsearch doesn't work out of the box