Curiefense
Istio via Helm

Introduction

The instructions below show how to install Curiefense on a Kubernetes cluster, embedded in an Istio service mesh.
Perform the tasks described below, in sequence.
At the bottom of this page is a Reference section describing the charts and configuration variables.
During this process, you might find it helpful to read the descriptions (which include the purpose, secrets, and network/port details) of the services and their containers: Services and Container Images

Clone the Helm Repository

Clone the repository, if you have not already done so:
git clone https://github.com/curiefense/curiefense-helm.git
This documentation assumes it has been cloned to ~/curiefense-helm.

Create a Kubernetes Cluster

Access to a Kubernetes cluster is required. Dynamic provisioning of persistent volumes must be supported. To set a StorageClass other than the default, change or override variable storage_class_name in ~/curiefense-helm/curiefense-helm/curiefense/values.yaml.
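For instance, a hypothetical override (the gp2 class name is an assumption; use whatever `kubectl get storageclass` reports in your cluster):

```yaml
# In ~/curiefense-helm/curiefense-helm/curiefense/values.yaml:
# use a named StorageClass instead of the cluster default (null).
storage_class_name: "gp2"   # e.g. the EBS-backed class on many EKS clusters
```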
Below are instructions for several ways to achieve this:
  • Using minikube, Kubernetes 1.20.2 (dynamic provisioning is enabled by default)
  • Using Google GKE, Kubernetes 1.20 (RBAC and dynamic provisioning are enabled by default)
  • Using Amazon EKS, Kubernetes 1.18 (RBAC and dynamic provisioning are enabled by default)
You will need the kubectl and helm clients installed.

Option 1: Using minikube

This section describes the install for a single-node test setup (which is generally not useful for production).

Install minikube

Starting from a fresh Ubuntu 21.04 VM:
minikube start --kubernetes-version=v1.20.2 --driver=docker --memory='8g' --cpus 6
minikube addons enable ingress
Start a screen or tmux, and keep the following command running:
minikube tunnel

Option 2: Using Google GKE

Create a cluster

gcloud container clusters create curiefense-gks --num-nodes=1 --machine-type=n1-standard-4 --cluster-version=1.20
gcloud container clusters get-credentials curiefense-gks

Option 3: Using Amazon EKS

Create a cluster

eksctl create cluster --name curiefense-eks-2 --version 1.18 --nodes 1 --nodes-max 1 --managed --region us-east-2 --node-type m5.xlarge

Reset State

If you have a clean machine where Curiefense has never been installed, skip this step and go to the next.
Otherwise, run these commands:
helm delete curiefense
helm delete -n curiefense curiefense
helm delete -n istio-system istio-ingress
helm delete -n istio-system istiod
helm delete -n istio-system istio-base
Ensure that helm ls -a --all-namespaces outputs nothing.

Create Namespaces

Run the following commands:
kubectl create namespace curiefense
kubectl create namespace istio-system

Setup storage

Curiefense's confserver exports configurations to object storage services, from which they are retrieved by curieproxy. Four backends are currently supported: AWS S3, Google Cloud Storage, minio (which can be self-hosted), or local storage (for single-node test deployments). To use curiefense, you must pick one, and define Secrets that allow interacting with the chosen storage service (except for local storage).

Option 1: AWS credentials

Encode the AWS S3 credentials that have r/w access to the S3 bucket. This yields a base64 string:
cat << EOF | base64 -w0
[default]
access_key = xxxxxxxxxxxxxxxxxxxx
secret_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF
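The command above can be sanity-checked before its output is pasted into the Secret — a minimal sketch (the keys below are placeholders, not real credentials):

```shell
# Recreate the credentials payload and encode it; -w0 disables line
# wrapping so the result is a single-line string, as Secret data requires.
S3CFG="$(printf '[default]\naccess_key = AKIAEXAMPLEEXAMPLE\nsecret_key = abcd1234abcd1234abcd1234abcd1234abcd1234\n')"
B64="$(printf '%s\n' "$S3CFG" | base64 -w0)"

# Round-trip check: decoding must yield the original INI text.
DECODED="$(printf '%s' "$B64" | base64 -d)"
```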
Create a local file called s3cfg.yaml, with the contents below, replacing both occurrences of BASE64_S3CFG with the previously obtained base64 string:
---
apiVersion: v1
kind: Secret
data:
  s3cfg: "BASE64_S3CFG"
metadata:
  namespace: curiefense
  labels:
    app.kubernetes.io/name: s3cfg
  name: s3cfg
type: Opaque
---
apiVersion: v1
kind: Secret
data:
  s3cfg: "BASE64_S3CFG"
metadata:
  namespace: istio-system
  labels:
    app.kubernetes.io/name: s3cfg
  name: s3cfg
type: Opaque
Deploy these secrets to the cluster:
kubectl apply -f s3cfg.yaml
Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: s3://BUCKET_NAME/prod/manifest.json (replace BUCKET_NAME with the actual name of the bucket).
Also set the curiefense_bucket_type variables in the same values.yaml files to s3.

Option 2: Google Cloud Storage credentials

Create a bucket, and a service account that has read/write access to the bucket. Obtain a private key for this account, which should look like this:
{
  "type": "service_account",
  "project_id": "PROJECT",
  "private_key_id": "1234abcd1234abcd1234abcd1234abcd1234abcd",
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIE.....ABCD=\n-----END PRIVATE KEY-----\n",
  "client_email": "[email protected]",
  "client_id": "123412341234123412341",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/....%40PROJECT.iam.gserviceaccount.com"
}
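The key arrives as a JSON file, so it must be base64-encoded before it can be placed in the Secret. A minimal sketch, assuming the key was saved as gs.json (the inline JSON here is a stand-in for the real key file):

```shell
# Stand-in for the downloaded service-account key (use your real file).
printf '%s' '{"type": "service_account", "project_id": "PROJECT"}' > gs.json

# Encode without line wrapping so the value can be pasted into the Secret.
BASE64_GS_PRIVATE_KEY="$(base64 -w0 < gs.json)"

# Round-trip check: the decoded payload must match the file byte-for-byte.
printf '%s' "$BASE64_GS_PRIVATE_KEY" | base64 -d > gs.decoded.json
```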
Base64-encode this JSON key, then create a local file called gs.yaml with the contents below, replacing both occurrences of BASE64_GS_PRIVATE_KEY with the resulting base64 string:
---
apiVersion: v1
kind: Secret
data:
  gs.json: "BASE64_GS_PRIVATE_KEY"
metadata:
  labels:
    app.kubernetes.io/name: gs
  name: gs
  namespace: curiefense
type: Opaque
---
apiVersion: v1
kind: Secret
data:
  gs.json: "BASE64_GS_PRIVATE_KEY"
metadata:
  labels:
    app.kubernetes.io/name: gs
  name: gs
  namespace: istio-system
type: Opaque
Deploy these secrets to the cluster:
kubectl apply -f gs.yaml
Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: gs://BUCKET_NAME/prod/manifest.json (replace BUCKET_NAME with the actual name of the bucket).
Also set the curiefense_bucket_type variables in the same values.yaml files to gs.
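Those two edits can be scripted rather than made by hand — a sketch using sed, assuming the values files contain top-level `curieconf_manifest_url:` and `curiefense_bucket_type:` keys (a stand-in values.yaml is created here; in practice, run the sed command against the two real files):

```shell
# Stand-in values.yaml with the two keys this page says to change.
cat > values.yaml <<'EOF'
curieconf_manifest_url: ""
curiefense_bucket_type: "s3"
EOF

BUCKET_NAME=my-curiefense-bucket   # hypothetical bucket name

# Point the manifest URL at the GCS bucket and switch the bucket type.
sed -i \
  -e "s|^\(curieconf_manifest_url:\).*|\1 \"gs://${BUCKET_NAME}/prod/manifest.json\"|" \
  -e "s|^\(curiefense_bucket_type:\).*|\1 \"gs\"|" \
  values.yaml
```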

Option 3: minio credentials

Install a minio server, create a bucket and a Service Account that has read/write permissions to that bucket. The curiefense helm charts may be used to deploy such a minio server (single-node, default credentials, for testing).
Encode the minio credentials that have r/w access to the bucket. This yields a base64 string:
cat << EOF | base64 -w0
[default]
access_key = minioadmin
secret_key = minioadmin
EOF
Create a local file called miniocfg.yaml, with the contents below, replacing both occurrences of BASE64_MINIOCFG with the previously obtained base64 string:
---
apiVersion: v1
kind: Secret
data:
  miniocfg: "BASE64_MINIOCFG"
metadata:
  labels:
    app.kubernetes.io/name: miniocfg
  name: miniocfg
  namespace: curiefense
type: Opaque
---
apiVersion: v1
kind: Secret
data:
  miniocfg: "BASE64_MINIOCFG"
metadata:
  labels:
    app.kubernetes.io/name: miniocfg
  name: miniocfg
  namespace: istio-system
type: Opaque
Deploy these secrets to the cluster:
kubectl apply -f miniocfg.yaml
An example miniocfg.yaml file is provided in ~/curiefense-helm/curiefense-helm/example-miniocfg.yaml. It contains default minio credentials that work with the minio installation provided in the Curiefense helm charts.
Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: minio://BUCKET_NAME/prod/manifest.json (replace BUCKET_NAME with the actual name of the bucket; use curiefense-minio-bucket with the minio installation that is provided in the curiefense helm charts).
Also set the curiefense_bucket_type variables in the same values.yaml files to minio.

Option 4: local bucket

For clusters where all istio ingress proxies as well as the confserver run on the same kubernetes node (typically test environments), a simple hostPath volume can be used. It is mounted to /bucket on the host machine, as well as in relevant containers.
Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: file:///bucket/prod/manifest.json.
Also set the curiefense_bucket_type variables in the same values.yaml files to local-bucket.

Setup TLS for the UI server

Using TLS is optional. Follow these steps only if you want to use TLS for communicating with the UI server and you do not rely on Istio to manage TLS.
To make the UIServer reachable over HTTPS, create two Secrets holding the TLS certificate and the TLS key.
Create a local file called uiserver-tls.yaml, replacing TLS_CERT_BASE64 with the base64-encoded PEM X509 TLS certificate, and TLS_KEY_BASE64 with the base64-encoded TLS key.
---
apiVersion: v1
data:
  uisslcrt: TLS_CERT_BASE64
kind: Secret
metadata:
  labels:
    app.kubernetes.io/name: uisslcrt
  name: uisslcrt
  namespace: curiefense
type: Opaque
---
apiVersion: v1
data:
  uisslkey: TLS_KEY_BASE64
kind: Secret
metadata:
  labels:
    app.kubernetes.io/name: uisslkey
  name: uisslkey
  namespace: curiefense
type: Opaque
Deploy these secrets to the cluster:
kubectl apply -f uiserver-tls.yaml
An example file with self-signed certificates is provided at ~/curiefense-helm/curiefense-helm/example-uiserver-tls.yaml.
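If you prefer to generate your own throwaway certificate for testing, the two base64 values can be produced like so — a sketch, assuming openssl is available (the CN is a placeholder):

```shell
# Generate a self-signed certificate and key (test use only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=uiserver.example" \
  -keyout ui.key -out ui.crt 2>/dev/null

# Base64-encode both, ready to substitute into uiserver-tls.yaml.
TLS_CERT_BASE64="$(base64 -w0 < ui.crt)"
TLS_KEY_BASE64="$(base64 -w0 < ui.key)"
```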

Deploy Istio and Curiefense Images

Deploy the Istio service mesh:
cd ~/curiefense-helm/istio-helm
DOCKER_TAG=main ./deploy.sh
And then the Curiefense components:
cd ~/curiefense-helm/curiefense-helm
DOCKER_TAG=main ./deploy.sh

Deploy the (Sample) App

The application to be protected by Curiefense should now be deployed. These instructions use the sample application bookinfo, deployed in the default Kubernetes namespace. Installation instructions are summarized below; more detailed instructions are available on the Istio website.

Enable Istio injection

Add the istio-injection=enabled label to the default namespace, so that Istio automatically injects the necessary sidecars into applications deployed there.
kubectl label namespace default istio-injection=enabled

Install the application

cd ~
wget 'https://github.com/istio/istio/releases/download/1.9.3/istio-1.9.3-linux-amd64.tar.gz'
tar -xf istio-1.9.3-linux-amd64.tar.gz
cd ~/istio-1.9.3/
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

Test bookinfo

Check that bookinfo Pods are running (wait a bit if they are not):
kubectl get pod -l app=ratings
Sample output:
NAME                         READY   STATUS    RESTARTS   AGE
ratings-v1-f745cf57b-cjg69   2/2     Running   0          79s
Check that the application is working by querying its API directly without going through the Istio service mesh:
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
Expected output:
<title>Simple Bookstore App</title>

Test access to bookinfo through Istio

Set the GATEWAY_URL variable by following instructions on the Istio website.
Alternatively, with minikube, this command can be used instead:
export GATEWAY_URL=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
Check that bookinfo is reachable through Istio:
curl -sS http://$GATEWAY_URL/productpage | grep -o "<title>.*</title>"
Expected output:
<title>Simple Bookstore App</title>
If this error occurs: Could not resolve host: a6fdxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx.us-west-2.elb.amazonaws.com, the ELB is not ready yet. Wait and retry until it becomes available (typically a few minutes).
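The wait-and-retry advice above can be scripted — a generic sketch of a retry loop (the helper name, attempt budget, and curl usage are illustrative, not part of the Curiefense tooling):

```shell
# Retry a command until it succeeds or the attempt budget is exhausted.
retry() {
  attempts="$1"; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 1   # in practice the ELB can take a few minutes to come up
  done
}

# Hypothetical usage once GATEWAY_URL is set:
#   retry 60 curl -sSf "http://$GATEWAY_URL/productpage" -o /dev/null
```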

Check that logs are stored in the Elasticsearch cluster

Run this query to access the protected website, bookinfo, and thus generate an access log entry:
curl http://$GATEWAY_URL/TEST_STRING
Run this to ensure that the logs have been recorded:
kubectl exec -ti -n curiefense elasticsearch-0 -- curl http://127.0.0.1:9200/_search -H "Content-Type: application/json" -d '{"query": {"bool": {"must":{"match":{"request.attributes.uri": "/TEST_STRING"}}}}}' | grep -Eo '"uri":"/TEST_STRING"'
Expected output:
"uri":"/TEST_STRING"

Expose Curiefense Services Using NodePorts

Run the following commands to expose Curiefense services through NodePorts. Warning: if the machine has a public IP, the services will be exposed on the Internet.
Start with this command:
kubectl apply -f ~/curiefense-helm/curiefense-helm/expose-services.yaml
The following command can be used to determine the IP address of your cluster nodes on which services will be exposed:
kubectl get nodes -o wide

For minikube only:

If you are using minikube, also run the following commands on the host in order to expose services on the Internet (e.g. if you are running this on a cloud VM):
sudo iptables -t nat -A PREROUTING -p tcp --match multiport --dports 30000,30080,30300,30443 -j DNAT --to $(minikube ip)
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to $(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
sudo iptables -I FORWARD -p tcp --match multiport --dports 80,30000,30080,30300,30443,30444 -j ACCEPT

For Amazon EKS only:

If you are using Amazon EKS, you will also need to allow inbound connections for port range 30000-30500 from your IP. Go to the EC2 page in the AWS console, select the EC2 instance for the cluster (named curiefense-eks-...-Node), select the "Security" pane, select the security group (named eks-cluster-sg-curiefense-eks-[0-9]+), then add the incoming rule.

Access Curiefense Services

The UIServer is now available on port 30080 over HTTP, and on port 30443 over HTTPS.
Grafana is now available on port 30300 over HTTP.
For the bookinfo sample app, the Book Review product page is now available on port 80 over HTTP, and on port 30444 over HTTPS. Try reaching http://IP/productpage.
The confserver is now available on port 30000 over HTTP: try reaching http://IP:30000/api/v1/.
For a full list of ports used by Curiefense containers, see the Reference page on services and containers.

Reference: Description of Helm Charts

Curiefense charts

Helm charts are divided as follows:
  • curiefense-admin - confserver, and UIServer.
  • curiefense-dashboards - Grafana and Prometheus.
  • curiefense-log - elasticsearch, filebeat, fluentd, kibana, logstash.
  • curiefense-proxy - curielogger and redis.

Chart configuration variables

Configuration variables in ~/curiefense-helm/curiefense-helm/curiefense/values.yaml can be modified or overridden to fit your deployment needs:
  • Variables in the images section define the Docker image names for each component. Override this if you want to host images on your own private registry.
  • storage_class_name is the StorageClass that is used for dynamic provisioning of Persistent Volumes. It defaults to null (default storage class, which works by default on EKS, GKE and minikube).
  • ..._storage_size variables define the size of persistent volumes. The defaults are fine for a test or small-scale deployment.
  • curieconf_manifest_url is the URL of the bucket (AWS S3, Google Cloud Storage, minio, or local) that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.
  • docker_tag defines the image tag versions that should be used. deploy.sh will override this to deploy a version that matches the current working directory, unless the DOCKER_TAG environment variable is set.

Istio chart

Components added or modified by Curiefense are defined in ~/curiefense-helm/istio-helm/charts/gateways/istio-ingress/. Compared to the upstream Istio Kubernetes distribution, we add or change the following Pods:
  • An initContainer called curiesync-initialpull has been added. It synchronizes configuration before running Envoy.
  • A container called curiesync has been added. It periodically fetches the configuration that should be applied from an S3 or GS bucket (configurable with the curieconf_manifest_url variable), and makes it available to Envoy. This configuration is used by the LUA code that inspects traffic.
  • The container called istio-proxy now uses our custom Docker image, embedding our HTTP Filter, written in Lua.
  • An EnvoyFilter has been added. It forwards access logs to curielogger (see curiefense_access_logs_filter.yaml).
  • An EnvoyFilter has been added. It runs Curiefense's Lua code to inspect incoming traffic on the Ingress Gateways (see curiefense_lua_filter.yaml).

Chart configuration variables

Configuration variables in ~/curiefense-helm/istio-helm/charts/gateways/istio-ingress/values.yaml can be modified or overridden to fit your deployment needs:
  • gw_image defines the name of the image that contains our filtering code and modified Envoy binary.
  • curiesync_image defines the name of the image that contains scripts that synchronize local Envoy configuration with the AWS S3 bucket defined in curieconf_manifest_url.
  • curieconf_manifest_url is the URL of the AWS S3 bucket that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.
  • curiefense_namespace should contain the name of the namespace where Curiefense components defined in ~/curiefense-helm/curiefense-helm/ are running.
  • redis_host defines the hostname of the redis server that will be used by curieproxy. Defaults to the provided redis StatefulSet. Override this to replace the redis instance with one you supply.
  • initial_curieconf_pull defines whether a configuration should be pulled from the AWS S3 bucket before running Envoy (true), or if traffic should be allowed to flow with a default configuration until the next synchronization (typically every 10s).