The instructions below show how to install Curiefense on a Kubernetes cluster, embedded in an Istio service mesh.
The following tasks, each described below in sequence, should be performed:
At the bottom of this page is a Reference section describing the charts and configuration variables.
During this process, you might find it helpful to read the descriptions (which include the purpose, secrets, and network/port details) of the services and their containers: Services and Container Images
Clone the repository, if you have not already done so:
This documentation assumes it has been cloned to ~/curiefense-helm.
An AWS S3 bucket must be available to synchronize configurations between the confserver
and the Curiefense Istio sidecars. The following Curiefense variables must be set:
In ~/curiefense-helm/istio-helm/charts/gateways/istio-ingress/values.yaml, set curieconf_manifest_url to the bucket URL.
In ~/curiefense-helm/curiefense-helm/curiefense/values.yaml, set curieconf_manifest_url to the bucket URL.
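For example, the entry in each values.yaml might look like this (a sketch; the bucket name and path are placeholders):

```yaml
# "my-curiefense-bucket" is a placeholder -- use your own bucket name.
curieconf_manifest_url: "s3://my-curiefense-bucket/prod/manifest.json"
```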
Access to a Kubernetes cluster is required. Dynamic provisioning of persistent volumes must be supported. To set a StorageClass other than the default, change or override the variable storage_class_name in ~/curiefense-helm/curiefense-helm/curiefense/values.yaml.
Below are instructions for several ways to achieve this:
Using minikube, Kubernetes 1.20.2 (dynamic provisioning is enabled by default)
Using Google GKE, Kubernetes 1.16.13 (RBAC and dynamic provisioning are enabled by default)
Using Amazon EKS, Kubernetes 1.18 (RBAC and dynamic provisioning are enabled by default)
You will need to install the following clients:
Install kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/) -- use the same version as your cluster.
Install Helm v3 (https://helm.sh/docs/intro/install/)
This section describes the install for a single-node test setup (which is generally not useful for production).
Starting from a fresh Ubuntu 21.04 VM:
Install docker (https://docs.docker.com/engine/install/ubuntu/), and allow your user to interact with docker with sudo usermod -aG docker $USER && newgrp docker
Install minikube (https://minikube.sigs.k8s.io/docs/start/)
Start a screen or tmux session, and keep the following command running:
Create a cluster
If you have a clean machine where Curiefense has never been installed, skip this step and go to the next.
Otherwise, run these commands:
Ensure that helm ls -a --all-namespaces outputs nothing.
Run the following commands:
Encode the AWS S3 credentials that have r/w access to the S3 bucket. This yields a base64 string:
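As a sketch (the filename and the credential values below are placeholders), an s3cmd-style credentials file can be encoded like this:

```shell
# Write an s3cmd-style credentials file (placeholder values) and
# encode it as a single-line base64 string.
cat > s3cfg <<'EOF'
[default]
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
base64 -w0 s3cfg   # GNU coreutils; on macOS, use: base64 < s3cfg
```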
Create a local file called s3cfg.yaml with the contents below, replacing both occurrences of BASE64_S3CFG with the previously obtained base64 string:
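A minimal sketch of what s3cfg.yaml might contain, assuming the charts expect a Secret named s3cfg with an s3cfg key in both the curiefense and istio-system namespaces (verify the exact names against the Helm charts):

```yaml
# Secret names, key names and namespaces are assumptions; verify them
# against the Helm charts before applying.
apiVersion: v1
kind: Secret
metadata:
  name: s3cfg
  namespace: curiefense
type: Opaque
data:
  s3cfg: BASE64_S3CFG
---
apiVersion: v1
kind: Secret
metadata:
  name: s3cfg
  namespace: istio-system
type: Opaque
data:
  s3cfg: BASE64_S3CFG
```

The file can then be applied with kubectl apply -f s3cfg.yaml.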
Deploy this secret to the cluster:
Create a bucket, and a service account that has read/write access to the bucket. Obtain a private key for this account, which should look like this:
Create a local file called gs.yaml with the contents below, replacing both occurrences of BASE64_GS_PRIVATE_KEY with the previously obtained base64 string:
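A minimal sketch of what gs.yaml might contain, assuming the same Secret layout as for S3 (the Secret name, key name, and namespaces are assumptions; verify them against the Helm charts):

```yaml
# Secret names, key names and namespaces are assumptions; verify them
# against the Helm charts before applying.
apiVersion: v1
kind: Secret
metadata:
  name: gs
  namespace: curiefense
type: Opaque
data:
  gs.json: BASE64_GS_PRIVATE_KEY
---
apiVersion: v1
kind: Secret
metadata:
  name: gs
  namespace: istio-system
type: Opaque
data:
  gs.json: BASE64_GS_PRIVATE_KEY
```

The file can then be applied with kubectl apply -f gs.yaml.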
Deploy this secret to the cluster:
Set the curieconf_manifest_url variables in curiefense-helm/curiefense/values.yaml and istio-helm/charts/gateways/istio-ingress/values.yaml to the following URL: gs://BUCKET_NAME/prod/manifest.json (replace BUCKET_NAME with the actual name of the bucket). Also set the curiefense_bucket_type variables in the same values.yaml files to gs.
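For example, both values.yaml files would then contain entries like:

```yaml
# Replace BUCKET_NAME with the actual name of the bucket.
curieconf_manifest_url: "gs://BUCKET_NAME/prod/manifest.json"
curiefense_bucket_type: "gs"
```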
Using TLS is optional. Follow these steps only if you want to use TLS for communicating with the UI server and you do not rely on Istio to manage TLS.
The UIServer can be made reachable over HTTPS. To do that, two secrets must be created to hold the TLS certificate and the TLS key.
Create a local file called uiserver-tls.yaml, replacing TLS_CERT_BASE64 with the base64-encoded PEM X.509 TLS certificate, and TLS_KEY_BASE64 with the base64-encoded TLS key.
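A sketch of what uiserver-tls.yaml might look like, assuming a kubernetes.io/tls Secret named uiserver-tls in the curiefense namespace (compare with the provided example file for the exact structure):

```yaml
# Secret name, key names and namespace are assumptions; compare with
# the example file shipped in the repository for the exact structure.
apiVersion: v1
kind: Secret
metadata:
  name: uiserver-tls
  namespace: curiefense
type: kubernetes.io/tls
data:
  tls.crt: TLS_CERT_BASE64
  tls.key: TLS_KEY_BASE64
```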
Deploy this secret to the cluster:
An example file with self-signed certificates is provided at ~/curiefense-helm/curiefense-helm/example-uiserver-tls.yaml.
Deploy the Istio service mesh:
And then the Curiefense components:
The application to be protected by Curiefense should now be deployed. These instructions are for the sample application bookinfo, which is deployed in the default Kubernetes namespace. Installation instructions are summarized below; more detailed instructions are available on the Istio website.
Add the istio-injection=enabled label, which makes Istio automatically inject the necessary sidecars into applications deployed in the default namespace.
Check that the bookinfo Pods are running (wait a bit if they are not):
Sample output:
Check that the application is working by querying its API directly without going through the Istio service mesh:
Expected output:
Set the GATEWAY_URL variable by following instructions on the Istio website.
Alternatively, with minikube, this command can be used instead:
Check that bookinfo is reachable through Istio:
Expected output:
If this error occurs: Could not resolve host: a6fdxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxx.us-west-2.elb.amazonaws.com, then the ELB is not ready yet. Wait and retry until it becomes available (typically a few minutes).
Run this query to access the protected website, bookinfo, and thus generate an access log entry:
Run this to ensure that the logs have been recorded:
Expected output:
Run the following commands to expose Curiefense services through NodePorts. Warning: if the machine has a public IP, the services will be exposed on the Internet.
Start with this command:
The following command can be used to determine the IP address of your cluster nodes on which services will be exposed:
If you are using minikube, also run the following commands on the host in order to expose services on the Internet (e.g., if you are running this on a cloud VM):
If you are using Amazon EKS, you will also need to allow inbound connections for port range 30000-30500 from your IP. Go to the EC2 page in the AWS console, select the EC2 instance for the cluster (named curiefense-eks-...-Node), select the "Security" pane, select the security group (named eks-cluster-sg-curiefense-eks-[0-9]+), then add the incoming rule.
The UIServer is now available on port 30080 over HTTP, and on port 30443 over HTTPS.
Grafana is now available on port 30300 over HTTP.
For the bookinfo sample app, the Book Review product page is now available on port 80 over HTTP, and on port 30444 over HTTPS. Try reaching http://IP/productpage.
The confserver is now available on port 30000 over HTTP: try reaching http://IP:30000/api/v1/.
For a full list of ports used by Curiefense containers, see the Reference page on services and containers.
Helm charts are divided as follows:
curiefense-admin - confserver and UIServer.
curiefense-dashboards - Grafana and Prometheus.
curiefense-log - elasticsearch, filebeat, fluentd, kibana, logstash.
curiefense-proxy - curielogger and redis.
Configuration variables in ~/curiefense-helm/curiefense-helm/curiefense/values.yaml can be modified or overridden to fit your deployment needs:
Variables in the images section define the Docker image names for each component. Override these if you want to host images on your own private registry.
storage_class_name is the StorageClass that is used for dynamic provisioning of Persistent Volumes. It defaults to null (the default storage class, which works out of the box on EKS, GKE and minikube).
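For example, to pin a specific class instead of relying on the default (the class name below is environment-specific and merely illustrative):

```yaml
# "gp2" is the EBS-backed class commonly present on EKS; substitute
# whatever class your cluster provides.
storage_class_name: "gp2"
```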
..._storage_size variables define the size of persistent volumes. The defaults are fine for a test or small-scale deployment.
curieconf_manifest_url is the URL of the AWS S3 or Google Cloud Storage bucket that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.
docker_tag defines the image tag versions that should be used. deploy.sh will override this to deploy a version that matches the current working directory, unless the DOCKER_TAG environment variable is set.
Components added or modified by Curiefense are defined in ~/curiefense-helm/istio-helm/charts/gateways/istio-ingress/. Compared to the upstream Istio Kubernetes distribution, we add or change the following Pods:
An initContainer called curiesync-initialpull has been added. It synchronizes configuration before running Envoy.
A container called curiesync has been added. It periodically fetches the configuration that should be applied from an S3 or GS bucket (configurable with the curieconf_manifest_url variable), and makes it available to Envoy. This configuration is used by the Lua code that inspects traffic.
The container called istio-proxy now uses our custom Docker image, embedding our HTTP filter, written in Lua.
An EnvoyFilter has been added. It forwards access logs to curielogger (see curiefense_access_logs_filter.yaml).
An EnvoyFilter has been added. It runs Curiefense's Lua code to inspect incoming traffic on the Ingress Gateways (see curiefense_lua_filter.yaml).
Configuration variables in ~/curiefense-helm/istio-helm/charts/gateways/istio-ingress/values.yaml can be modified or overridden to fit your deployment needs:
gw_image defines the name of the image that contains our filtering code and modified Envoy binary.
curiesync_image defines the name of the image that contains scripts that synchronize the local Envoy configuration with the AWS S3 bucket defined in curieconf_manifest_url.
curieconf_manifest_url is the URL of the AWS S3 bucket that is used to synchronize configurations between the confserver and the Curiefense Istio sidecars.
curiefense_namespace should contain the name of the namespace where the Curiefense components defined in ~/curiefense-helm/curiefense-helm/ are running.
redis_host defines the hostname of the redis server that will be used by curieproxy. Defaults to the provided redis StatefulSet. Override this to replace the redis instance with one you supply.
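For example, to point at an externally managed redis instance (the hostname below is hypothetical):

```yaml
# Hypothetical external redis endpoint; substitute your own host.
redis_host: "redis.internal.example.com"
```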
initial_curieconf_pull defines whether a configuration should be pulled from the AWS S3 bucket before running Envoy (true), or whether traffic should be allowed to flow with a default configuration until the next synchronization, which typically happens every 10 seconds (false).
This page describes the tasks necessary to deploy Curiefense using Docker Compose. The tasks are described sequentially below:
During this process, you might find it helpful to read the descriptions (which include the purpose, secrets, and network/port details) of the services and their containers: Services and Container Images
If during this process you need to rebuild an image, see the instructions here: Building/Rebuilding an Image.
Clone the repository, if you have not already done so:
This documentation assumes it has been cloned to ~/curiefense.
A Docker Compose deployment can use TLS for communication with Curiefense's UI server and also for the protected service, but this is optional. (If you do not choose to set it up, HTTPS will be disabled.)
If you do not want Curiefense to use TLS, then skip this step and proceed to the next section. Otherwise, generate the certificate(s) and key(s) now.
To enable TLS for the protected site/application, go to curiefense/deploy/compose/curiesecrets/curieproxy_ssl/ and do the following:
Edit site.crt and add the certificate.
Edit site.key and add the key.
To enable TLS for the nginx server that is used by uiserver, go to curiefense/deploy/compose/curiesecrets/uiserver_ssl/ and do the following:
Edit ui.crt and add the certificate.
Edit ui.key and add the key.
Docker Compose deployments can be configured in two ways:
By setting values for variables in deploy/compose/.env
Or by setting OS environment variables (which will override any variables set in .env)
These variables are described below.
Curiefense uses the storage defined here for synchronizing configuration changes between confserver and the Curiefense sidecars.
By default, this points to the local_bucket Docker volume:
For multi-node deployments, or to use S3 for a single node, replace this value with the URL of an S3 bucket:
In that case, you will need to supply AWS credentials in deploy/compose/curiesecrets/s3cfg, following this template:
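A sketch of the template, in s3cmd's INI format (access_key and secret_key are the usual s3cmd option names; the values below are placeholders):

```ini
# Placeholder credentials -- replace with your own AWS keys.
[default]
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```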
The address of the destination service for which Curiefense acts as a reverse proxy. By default, this points to the echo container, which simply echoes the HTTP requests it receives.
Defaults to main (the latest stable image, automatically built from the main branch). To run a version that matches the contents of your working directory, use the following command:
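One way to set this, as a sketch (the exact tag format expected by your locally built images may differ, and the fallback to main is hypothetical):

```shell
# Derive a tag from the current git checkout; fall back to "main" when
# not inside a git repository or when no tag is reachable.
export DOCKER_TAG="$(git describe --tags --long --dirty 2>/dev/null || echo main)"
echo "DOCKER_TAG=$DOCKER_TAG"
```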
Once the tasks above are completed, run these commands:
After deployment, the Echo service should be running and protected behind Curiefense. You can test the success of the deployment by querying it:
Also verify the following:
The UIServer is now available at http://localhost:30080
Grafana is now available at http://localhost:30300
The confserver is now available at http://localhost:30000/api/v1/
To stop all containers and remove any persistent data stored in volumes, run the following commands:
This document describes how Curiefense can be integrated into an existing NGINX-based reverse proxy.
This guide describes a basic integration, and it cannot cover the wide variety of possible use cases and configurations. For specific questions about this, or other Curiefense-related topics, feel free to join our Slack at .
This page describes the installation of the Curiefense filtering component for an environment where NGINX is running in a container.
The other components of Curiefense will need to be installed separately, according to the specific instructions for each situation (e.g., and ). This can be done either before or after completing the instructions below.
If OpenResty is not installed yet, please follow the .
You will also need the , version 4 or 5. For example, on Ubuntu 20.04:
Next, build the Curiefense shared object. This needs to be done on a Linux system that runs the same major libc and libhyperscan versions as your NGINX server.
Then run the following:
Move the new curiefense.so file from the build machine to this location on the proxy machine: /usr/local/openresty/luajit/lib/lua/5.1/curiefense.so
In the http block of the configuration, the following directives must be set:
For each server block that must be protected with Curiefense:
The default configuration does not block any requests, so the following steps should be performed to ensure proper integration:
Traffic should be served as usual.
No errors appear in the error logs.
JSON data appears in the access logs.
On the build machine, first .