Istio via Helm

The instructions below show how to install Curiefense on a Kubernetes cluster, embedded in an Istio service mesh. They assume that the instructions described in First Tasks have been completed successfully.

The following tasks, each described below in sequence, should be performed:

  • Setup Synchronization

  • Create a Kubernetes Cluster Running Helm

  • Reset State

  • Create Namespaces

  • Setup Secrets

  • Setup TLS (optional)

  • Deploy Istio and Curiefense Images

  • Deploy the (Sample) App

  • Expose Curiefense Services using NodePorts

  • Access Curiefense Services

At the bottom of this page is a Reference section describing the charts and configuration variables.

Setup Synchronization

An AWS S3 bucket must be available to synchronize configurations between the confserver and the Curiefense Istio sidecars. The following Curiefense variables must be set (see the example after the list below):

  • In deploy/istio-helm/chart/values.yaml:

    • Set curieconf_manifest_url to the bucket URL.

  • In deploy/curiefense-helm/curiefense/values.yaml:

    • Set curieconf_manifest_url to the bucket URL.
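For example, assuming a hypothetical bucket named my-curiefense-bucket, both files would end up containing a line such as curieconf_manifest_url: "s3://my-curiefense-bucket/prod/manifest.json" (bucket name and path are placeholders; use the URL of your own bucket). To check the current value in each file:

grep -n 'curieconf_manifest_url' deploy/istio-helm/chart/values.yaml deploy/curiefense-helm/curiefense/values.yaml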

Create a Kubernetes Cluster Running Helm

Access to a Kubernetes cluster running Helm v2 is required. Dynamic provisioning of persistent volumes must be supported. To use a StorageClass other than the default, change or override the storage_class_name variable in deploy/curiefense-helm/curiefense/values.yaml.
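For example, to list the StorageClasses available on your cluster before choosing one:

kubectl get storageclass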

Below are instructions for several ways to achieve this:

  • Using minikube, Kubernetes 1.14.9 and Helm v2.13.1 (dynamic provisioning is enabled by default)

  • Using Google GKE, Kubernetes 1.16.13 (with RBAC) and Helm v2.16.7 (dynamic provisioning is enabled by default)

  • Using Amazon EKS, Kubernetes 1.18 (with RBAC) and Helm v2.16.7 (dynamic provisioning is enabled by default)

Option 1: Using minikube

This section describes the installation of a single-node test setup (which is generally not suitable for production).

Install minikube

Starting from a fresh Ubuntu 20.04 VM:

minikube start --kubernetes-version=v1.14.9 --driver=docker --memory='8g' --cpus 4
eval $(minikube docker-env)
minikube addons enable ingress

Start a screen or tmux session, and keep the following command running:

minikube tunnel

Install Helm v2.13.1

Run the following commands:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh -v v2.13.1
# socat is required but not installed by get_helm.sh
apt install socat

(Alternately, Helm can be manually downloaded as a binary release, as explained at https://helm.sh/docs/intro/install/. If you choose to do this, ensure that you obtain v2.13.1.)

Now install Helm to the Kubernetes cluster:

helm init
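To check that Tiller (the Helm v2 server-side component) has been deployed and is reachable, the following commands can be used:

kubectl -n kube-system get deployment tiller-deploy
helm version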

Option 2: Using Google GKE

This option uses a more recent Kubernetes, with RBAC enabled.

Install kubectl

Follow instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/. Use version 1.16.13.

Create a cluster

gcloud container clusters create curiefense-gks --num-nodes=1 --machine-type=n1-standard-4
gcloud container clusters get-credentials curiefense-gks

Install Helm v2.16.7

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh -v v2.16.7

(Alternately, Helm can be manually downloaded as a binary release, as explained at https://helm.sh/docs/intro/install/. If you choose to do this, ensure that you obtain v2.16.7.)

Now we must define RBAC authorizations. Helm needs to be able to deploy applications to both the curiefense and istio-system namespaces.

To do that, we provide an example configuration, which installs Tiller in the kube-system namespace and grants it cluster-admin permissions.

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Finally, install Helm to the Kubernetes cluster:

helm init --service-account tiller
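To verify that Tiller is running under the tiller service account (the command should print "tiller"):

kubectl -n kube-system get deployment tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'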

Option 3: Using Amazon EKS

This option uses a more recent Kubernetes, with RBAC enabled.

Install kubectl

Follow instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/. Use version 1.18.

Create a cluster

eksctl create cluster --name curiefense-eks-2 --version 1.18 --nodes 1 --nodes-max 1 --managed --region us-east-2 --node-type m5.xlarge

Install Helm v2.16.7

Follow all the "Install Helm v2.16.7" instructions shown above in the Google GKE section.

Reset State

If you have a clean machine where Curiefense has never been installed, skip this step and go to the next.

Otherwise, run these commands:

kubectl delete namespaces bookinfo
helm delete --purge curiefense
helm delete --purge istio-cf

Ensure that helm ls -a outputs nothing.

Create Namespaces

Run the following commands:

kubectl create namespace curiefense
kubectl create namespace istio-system

Setup Secrets

AWS credentials

Base64-encode the AWS credentials that have read/write access to the S3 bucket. This yields a base64 string:

cat << EOF | base64 -w0
[default]
access_key = xxxxxxxxxxxxxxxxxxxx
secret_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF

Create a local file called s3cfg.yaml, with the contents below, replacing both occurrences of BASE64_S3CFG with the previously obtained base64 string:

---
apiVersion: v1
kind: Secret
data:
  s3cfg: "BASE64_S3CFG"
metadata:
  namespace: curiefense
  labels:
    app.kubernetes.io/name: s3cfg
  name: s3cfg
type: Opaque
---
apiVersion: v1
kind: Secret
data:
  s3cfg: "BASE64_S3CFG"
metadata:
  namespace: istio-system
  labels:
    app.kubernetes.io/name: s3cfg
  name: s3cfg
type: Opaque

Deploy these secrets to the cluster:

kubectl apply -f s3cfg.yaml
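To check that the secret is present in both namespaces:

kubectl -n curiefense get secret s3cfg
kubectl -n istio-system get secret s3cfg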

Setup TLS

Using TLS is optional.

The UIServer can be made reachable over HTTPS. To do that, two secrets have to be created: one holding the TLS certificate and one holding the TLS key.

Create a local file called uiserver-tls.yaml with the contents below, replacing TLS_CERT_BASE64 with the base64-encoded PEM X.509 TLS certificate, and TLS_KEY_BASE64 with the base64-encoded TLS key:

---
apiVersion: v1
data:
  uisslcrt: TLS_CERT_BASE64
kind: Secret
metadata:
  labels:
    app.kubernetes.io/name: uisslcrt
  name: uisslcrt
  namespace: curiefense
type: Opaque
---
apiVersion: v1
data:
  uisslkey: TLS_KEY_BASE64
kind: Secret
metadata:
  labels:
    app.kubernetes.io/name: uisslkey
  name: uisslkey
  namespace: curiefense
type: Opaque

Deploy these secrets to the cluster:

kubectl apply -f uiserver-tls.yaml

An example file with self-signed certificates is provided at deploy/curiefense-helm/example-uiserver-tls.yaml.
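If you only need a self-signed certificate for testing, one way to generate and encode your own is shown below (file names and subject are illustrative):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=uiserver" -keyout ui.key -out ui.crt
base64 -w0 ui.crt   # use as TLS_CERT_BASE64
base64 -w0 ui.key   # use as TLS_KEY_BASE64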

When running ./deploy.sh in the next step, add this argument to enable TLS on the UIServer:

-f curiefense/uiserver-enable-tls.yaml

Deploy Istio and Curiefense Images

Deploy the Istio service mesh:

cd ~/curiefense/deploy/istio-helm 
DOCKER_TAG=main ./deploy.sh

And then the Curiefense components:

cd ~/curiefense/deploy/curiefense-helm
DOCKER_TAG=main ./deploy.sh
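If you enabled TLS on the UIServer, remember to append -f curiefense/uiserver-enable-tls.yaml to the second command, as described above. Once both scripts complete, check that the pods come up in their respective namespaces:

kubectl get pods -n istio-system
kubectl get pods -n curiefense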

Deploy the (Sample) App

The application to be protected by Curiefense should now be deployed. These instructions are for the sample application bookinfo.

Create the namespace

Create the Kubernetes namespace, and add the istio-injection=enabled label that will make Istio automatically inject the necessary sidecars into applications deployed in this namespace.

kubectl create namespace bookinfo
kubectl label namespace bookinfo istio-injection=enabled
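To confirm that the label has been applied:

kubectl get namespace bookinfo --show-labels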

Install the application

git clone https://github.com/istio/istio/ -b 1.5.10
kubectl apply -n bookinfo -f istio/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -n bookinfo -f istio/samples/bookinfo/networking/bookinfo-gateway.yaml

Test bookinfo

Check that bookinfo Pods are running (wait a bit if they are not):

kubectl get -n bookinfo pod -l app=ratings

Sample output:

NAME                         READY   STATUS    RESTARTS   AGE
ratings-v1-f745cf57b-cjg69   2/2     Running   0          79s

Check that the application is working by querying its API directly without going through the Istio service mesh:

kubectl exec -n bookinfo -it "$(kubectl get -n bookinfo pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep "<title>"

Expected output:

<title>Simple Bookstore App</title>

Test access to bookinfo through Istio

curl -sS http://$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/productpage|grep "<title>"

(Replace "ip" with "hostname" if running in an environment where the LoadBalancer yields a FQDN, as is the case with Amazon's ELB.)

Expected output:

<title>Simple Bookstore App</title>

If this error occurs: Could not resolve host: a6fdxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx.us-west-2.elb.amazonaws.com, the ELB is not ready yet. Wait and retry until it becomes available (typically a few minutes).

Check that logs reach the accesslog UI

Run this query:

curl http://$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/TEST_STRING

(Replace "ip" with "hostname" if running in an environment where the LoadBalancer yields a FQDN, as is the case with Amazon's ELB.)

Run this to ensure that the logs have been recorded and are reachable from the UI server:

kubectl exec -ti -n curiefense elasticsearch-0 -- curl http://127.0.0.1:9200/_search -H "Content-Type: application/json" -d '{"query": {"match": {"request.attributes.path": "/TEST_STRING"}}}'

Check that a result is returned, and that it contains TEST_STRING.

Expose Curiefense Services using NodePorts

Run the following commands to expose Curiefense services through NodePorts. Warning: if the machine has a public IP, the services will be exposed on the Internet.

Start with this command:

kubectl apply -f ~/curiefense/deploy/curiefense-helm/expose-services.yaml

The following command can be used to determine the IP address of your cluster nodes on which services will be exposed:

kubectl get nodes -o wide

For minikube only:

If you are using minikube, also run the following commands on the host in order to expose services on the Internet:

sudo iptables -t nat -A PREROUTING -p tcp --match multiport --dports 30000,30080,30081,30200,30300,30443,30601 -j DNAT --to 172.17.0.2
sudo iptables -I FORWARD -p tcp --match multiport --dports 30000,30080,30081,30200,30300,30443,30444,30601 -j  ACCEPT
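The DNAT target 172.17.0.2 above is the minikube node address assumed by this example; if your node address differs, check it with the command below and adjust the rule accordingly:

minikube ip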

For Amazon EKS only:

If you are using Amazon EKS, you will also need to allow inbound connections for port range 30000-30700 from your IP. Go to the EC2 page in the AWS console, select the EC2 instance for the cluster (named curiefense-eks-...-Node), select the "Security" pane, select the security group (named eks-cluster-sg-curiefense-eks-[0-9]+), then add the incoming rule.

Access Curiefense Services

The UIServer is now available on port 30080 over HTTP, and on port 30443 over HTTPS.

Grafana is now available on port 30300 over HTTP.

For the bookinfo sample app, the Book Review product page is now available on port 30081 over HTTP, and on port 30444 over HTTPS (if you chose to enable TLS).

The confserver is now available on port 30000 over HTTP.

Kibana is now available on port 30601 over HTTP.

Elasticsearch is now available on port 30200 over HTTP.
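For a quick check, replace NODE_IP below with one of the node addresses obtained earlier (NODE_IP is a placeholder):

curl -I http://NODE_IP:30080/    # UIServer
curl -I http://NODE_IP:30300/    # Grafana
curl -sS http://NODE_IP:30200/   # Elasticsearch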

For a full list of ports used by Curiefense containers, see the Reference page on services and containers.

Reference: Description of Helm Charts

Curiefense charts

Helm charts are divided as follows:

  • curiefense-admin - confserver and UIServer.

  • curiefense-dashboards - Grafana and Prometheus.

  • curiefense-log - log storage: elasticsearch (default); log forwarders for elasticsearch: logstash (default) or fluentd; log display interface: kibana (default).

  • curiefense-proxy - curielogger, curiesync, and redis (used for synchronization).

Chart configuration variables

Configuration variables in deploy/curiefense-helm/curiefense/values.yaml can be modified or overridden to fit your deployment needs (an override example follows this list):

  • Variables in the images section define the Docker image names for each component. Override this if you want to host images on your own private registry.

  • storage.storage_class_name is the StorageClass that is used for dynamic provisioning of Persistent Volumes. It defaults to null (default storage class, which works by default on EKS, GKE and minikube).

  • storage.*_storage_size variables define the size of persistent volumes. The defaults are fine for a test or small-scale deployment.

  • settings.curieconf_manifest_url is the URL of the AWS S3 bucket that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.

  • settings.curiefense_es_forwarder defines whether logs are forwarded to elasticsearch using fluentd or logstash (default). Has no effect if settings.curiefense_logdb_type is set to elasticsearch.

  • settings.curiefense_es_hosts is the hostname for the elasticsearch cluster. Changing it is required only if the elasticsearch cluster supplied by this chart is not used, and replaced with an externally-managed cluster.

  • settings.curiefense_logstash_url is the url of the logstash server. Changing it is required only if the logstash instance supplied by this chart is not used, and replaced with an externally-managed instance.

  • settings.curiefense_fluentd_url is the url of the fluentd server. Changing it is required only if the fluentd instance supplied by this chart is not used, and replaced with an externally-managed instance.

  • settings.curiefense_kibana_url is the url of the kibana server. Changing it is required only if the kibana instance supplied by this chart is not used, and replaced with an externally-managed instance.

  • settings.curiefense_bucket_type is the type of cloud bucket that is used to transfer configurations from confserver to envoy proxies (supported values: s3 or gs).

  • settings.curiefense_es_index_name is the name of the elasticsearch index where logs are stored.

  • settings.docker_tag defines the image tag versions that should be used. deploy.sh will override this to deploy a version that matches the current working directory, unless the DOCKER_TAG environment variable is set.

  • settings.redis_port is the port on which redis listens. This value must be set identically in the Istio chart's values.yaml.

  • settings.uiserver_enable_tls is a boolean that defines whether TLS is enabled on the UI server. If it is enabled, then a certificate and key must have been provisioned (see above).

  • Variables in the requests section define default CPU requirements for pods.

  • Variables in the enable section allow disabling parts of the deployment that can instead be supplied outside of this chart (e.g. kibana, logstash, fluentd, elasticsearch, prometheus).
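For example, several of these variables can be overridden at deploy time without editing the chart, by passing an extra values file to deploy.sh in the same way as the TLS override shown earlier (the file name and values below are illustrative):

cat > my-overrides.yaml <<'EOF'
storage:
  storage_class_name: "standard"     # illustrative StorageClass name
settings:
  curiefense_es_forwarder: "fluentd" # forward logs with fluentd instead of logstash
EOF
DOCKER_TAG=main ./deploy.sh -f my-overrides.yaml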

Istio chart

Components added or modified by Curiefense are defined in deploy/istio-helm/chart/charts/gateways. Compared to the upstream Istio Kubernetes distribution, we add or change the following Pods; a quick way to inspect these additions on a running gateway pod is shown after the list:

  • An initContainer called curiesync-initialpull has been added. It synchronizes configuration before running Envoy.

  • A container called curiesync has been added. It periodically fetches the configuration that should be applied from an S3 bucket (configurable with the curieconf_manifest_url variable), and makes it available to Envoy. This configuration is used by the LUA code that inspects traffic.

  • The container called istio-proxy now uses our custom Docker image, embedding our HTTP Filter, written in Lua.

  • An EnvoyFilter has been added. It forwards access logs to curielogger (see curiefense_access_logs_filter.yaml).

  • An EnvoyFilter has been added. It runs Curiefense's Lua code to inspect incoming traffic on the Ingress Gateways (see curiefense_lua_filter.yaml).
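For example, assuming the gateway pods carry the standard app=istio-ingressgateway label, the synchronization sidecar's activity can be inspected with:

kubectl -n istio-system get pods -l app=istio-ingressgateway
kubectl -n istio-system logs -l app=istio-ingressgateway -c curiesync --tail=20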

Chart configuration variables

Configuration variables in deploy/istio-helm/chart/values.yaml can be modified or overridden to fit your deployment needs:

  • gw_image defines the name of the image that contains our filtering code and modified Envoy binary.

  • curiesync_image defines the name of the image that contains scripts that synchronize local Envoy configuration with the AWS S3 bucket defined in curieconf_manifest_url.

  • curieconf_manifest_url is the URL of the AWS S3 or Google Storage bucket that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.

  • curiefense_namespace is the name of the namespace where Curiefense components defined in deploy/curiefense-helm/ are running.

  • curiefense_bucket_type is the type of cloud bucket that is used to transfer configurations from confserver to envoy proxies (supported values: s3 or gs).

  • redis_host defines the hostname of the redis server that will be used by curieproxy. Defaults to the provided redis StatefulSet. Override this to replace the redis instance with one you supply.

  • redis_port defines the port of the redis server that will be used by curieproxy. Defaults to the provided redis StatefulSet. Override this to replace the redis instance with one you supply.

  • initial_curieconf_pull defines whether a configuration should be pulled from the AWS S3 bucket before running Envoy (true), or if traffic should be allowed to flow with a default configuration until the next synchronization (typically every 10s).
