Simple Canary with Affinity on K8s

The Idea

To build a simple canary deployment with a consistent user experience on top of Kubernetes.

We have just a stable version and a canary one for our app.


The solution

This solution is built using this code.

The idea is to put a reverse proxy between the ingress controller and the app services that handles (via Lua scripts):

  • a weight to split traffic between stable and canary, and
  • a cookie to allow sticky sessions (e.g. a user that hits stable will always hit stable; the same goes for canary)

For this workshop we will use minikube.

Minikube

To get Minikube working for this project you must:

  • Set up minikube
  • Set up minikube’s ingress controller
  • Build the images on minikube’s docker
  • Modify the image names in the yaml files to match the images you built in the previous step

Set up

Just go to the minikube docs and follow the installation steps.

Ingress Controller

Basically:

minikube addons enable ingress

More info here.

Build the image on minikube’s docker

To build your images on minikube’s Docker daemon you first need to set the environment (this must be done in each terminal you use):

eval $(minikube docker-env)

Now you can build your image as usual. (docker build…)

You must change the imagePullPolicy in your yamls to:

    imagePullPolicy: Never

This way K8s will get the local image, otherwise K8s will try to pull it and fail.

Let the canary sing

Now, let’s get our hands dirty…

The app

We will use a simple Go app to test our canary deployment.

package main

import (
	"fmt"
	"log"
	"net/http"
)

const version string = "1.0"

func getFrontpage(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Congratulations! Version %s of your application is running on Kubernetes.", version)
}

// health backs the readiness probe.
func health(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
}

func getVersion(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "%s\n", version)
}

func main() {
	http.HandleFunc("/", getFrontpage)
	http.HandleFunc("/health", health)
	http.HandleFunc("/version", getVersion)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

This app exposes a very simple API, so when we hit http://urlapp/version it will answer with the version. This is the code for version 1.0.
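If you want to sanity-check the handlers before baking the binary into an image, Go’s net/http/httptest makes it easy. This is just a sketch, duplicating the getVersion handler from above rather than importing the app’s code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

const appVersion = "1.0"

// getVersion is the same handler as in the app above.
func getVersion(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "%s\n", appVersion)
}

// checkVersion spins up an in-memory server around the handler and
// returns the body it serves, without binding a real port.
func checkVersion() string {
	srv := httptest.NewServer(http.HandlerFunc(getVersion))
	defer srv.Close()
	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}
```

checkVersion() should return "1.0\n", exactly what hitting /version on the running pod would give you.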

The app for version 2.0 is almost the same, changing just this line:

const version string = "2.0"

To compile the app, cd into each app directory (version 1.0 and 2.0) and run:

GOOS=linux GOARCH=amd64 go build -tags netgo -o app

This will create a file called app in each directory.

The image

This is the Dockerfile:

FROM alpine:latest
LABEL maintainer="Juan Matias Kungfu de la Camara Beovide <juan.delacamara@3xmgroup.com>"

# In case no version is passed
ARG version=1.0
COPY source/$version/app /app
EXPOSE 8080
ENTRYPOINT ["/app"]

We set a default for the version build arg, but we will override it from the CLI.

So, let’s build the image: cd into the example directory (where your Dockerfile is) and build:

export app_version=1.0
docker build --build-arg version=$app_version -t canary-app:$app_version .

And for version 2.0:

export app_version=2.0
docker build --build-arg version=$app_version -t canary-app:$app_version .

So, this is it, we now have our images:

docker images | grep canary

canary-app   2.0   c4fa861bee21   6 days ago   12MB
canary-app   1.0   c7c11454f439   6 days ago   12MB

The deploy

Now let’s create a deploy yaml for our apps. For stable version:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kubeapp-production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubeapp
      env: prod
  template:
    metadata:
      name: kubeapp
      labels:
        app: kubeapp
        env: prod
    spec:
      containers:
      - name: kubeapp
        image: canary-app:1.0
        imagePullPolicy: Never
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
        command: ["/app"]
        ports:
        - name: kubeapp
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: kubeapp-production-service
  labels:
    app: kubeapp
    env: prod
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubeapp
    env: prod

We are setting 3 replicas using the following image and labels:

    image: canary-app:1.0

    app: kubeapp
    env: prod

It also creates a service to access our app called kubeapp-production-service.

For canary this is the file:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kubeapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeapp
      env: canary
  template:
    metadata:
      name: kubeapp-canary
      labels:
        app: kubeapp
        env: canary
    spec:
      containers:
      - name: kubeapp
        image: canary-app:2.0
        imagePullPolicy: Never
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
        command: ["/app"]
        ports:
        - name: kubeapp
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: kubeapp-canary-service
  labels:
    app: kubeapp
    env: canary
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubeapp
    env: canary

Almost the same, but the image and labels are different (the names change too):

image: canary-app:2.0

app: kubeapp
env: canary

Service is called kubeapp-canary-service.

Note: if you deploy to a cluster other than minikube you can use my pre-built images at docker.io/juanmatias/canary-app:1.0 and docker.io/juanmatias/canary-app:2.0 (and set imagePullPolicy to a value other than Never).

Deploy it!

Let’s create a namespace to work in and deploy our apps:

kubectl create ns canary
kubectl apply -n canary -f deploy-stable.yaml -f deploy-canary.yaml  

Ok there you have the pods and services:

kubectl get po -n canary && kubectl get svc -n canary
NAME                                  READY   STATUS    RESTARTS   AGE
kubeapp-canary-876974976-nwbxv        1/1     Running   0          14s
kubeapp-production-676b5f5f6c-qg4hd   1/1     Running   0          14s
kubeapp-production-676b5f5f6c-thn2c   1/1     Running   0          7s
kubeapp-production-676b5f5f6c-vthxl   1/1     Running   0          14s
NAME                         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubeapp-canary-service       NodePort   10.109.77.166    <none>        80:31223/TCP   5m
kubeapp-production-service   NodePort   10.109.245.231   <none>        80:31224/TCP   5m

1 canary instance and 3 stable instances.

The reverse proxy (where the canary lives)

Here is the affinity-with-the-canary.yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxcan-config
data:
  default.conf: |
    # SERVICE DEFINITION
    # ##################

      server {

        # Enable logging if needed
        # error_log    /var/log/nginx/default.error.log debug;

        # Set DNS resolver for K8s cluster (change if needed)
        resolver kube-dns.kube-system.svc.cluster.local;

        # Listen on port 80 as the default server
        listen  80 default_server;

        # Make site accessible from http://localhost/
        server_name _;


        # Location to work with (aka path asked to server)
       location / {

            # set the base service url and svc names
            # The base url (e.g. ".canary.svc.cluster.local") is composed of the namespace (canary) and the cluster svc address; the svc name must be prepended to it later.
            set $base_url "";
            set_by_lua $base_url 'return os.getenv("BASE_SVC_URL")';
            set $canary_svc_name "";
            set_by_lua $canary_svc_name 'return os.getenv("CANARY_SVC_NAME")';
            set $stable_svc_name "";
            set_by_lua $stable_svc_name 'return os.getenv("STABLE_SVC_NAME")';
            set $cookie_max_age "";
            set_by_lua $cookie_max_age 'return os.getenv("COOKIE_MAX_AGE")';
            set $upstream_srv "";

           # I think I saw a kitten, said the canary
           # This block rewrites the upstream_srv var to stable or canary using the following rules:
             # if the cookie is set, honor it and route to the environment it names
             # if there is no cookie, randomize according to the configured weight
           # The cookie is set with a max-age taken from COOKIE_MAX_AGE (e.g. 2 days = 172800 seconds)
           rewrite_by_lua_block {
             local weight = tonumber(os.getenv("CANARY_WEIGHT"))
             local upstream_srv = ngx.var.stable_svc_name .. ngx.var.base_url
             local environ = ""
             if(weight > 0)
             then
               if(ngx.var["cookie_Can"])
               then
                 environ = ngx.var["cookie_Can"]
               else
                 local myrand = math.random()
                 if(myrand <= weight)
                 then
                   environ = "canary"
                 else
                   environ = "stable"
                 end
               end

               if(environ == "canary")
               then
                 upstream_srv = ngx.var.canary_svc_name .. ngx.var.base_url
               else
                 upstream_srv = ngx.var.stable_svc_name .. ngx.var.base_url
               end
             else
               upstream_srv = ngx.var.stable_svc_name .. ngx.var.base_url
               environ = "stable"
             end

             ngx.var.upstream_srv = upstream_srv
             ngx.header['Set-Cookie'] = "Can=" .. environ .. ";Path=/;Max-Age=" .. ngx.var.cookie_max_age
           }

           # July! Do the thing!
           proxy_pass_header Authorization;
           proxy_pass http://$upstream_srv;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_http_version 1.1;
           proxy_set_header Connection "";
           proxy_buffering off;
           client_max_body_size 0;
           proxy_read_timeout 36000s;
           proxy_redirect off;

       }
      }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxcan-mainconfig
data:
  nginx.conf: |
    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;

    env BASE_SVC_URL;
    env CANARY_SVC_NAME;
    env STABLE_SVC_NAME;
    env CANARY_WEIGHT;
    env COOKIE_MAX_AGE;

    events {
        worker_connections  1024;
    }


    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;

        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

        access_log  /var/log/nginx/access.log  main;

        sendfile        on;
        #tcp_nopush     on;

        keepalive_timeout  65;

        #gzip  on;

        include /etc/nginx/conf.d/*.conf;
    }

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxcan-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxcan
  template:
    metadata:
      labels:
        app: nginxcan
    spec:
      containers:
      - image: firesh/nginx-lua:alpine
        imagePullPolicy: IfNotPresent
        name: nginxcan
        command: ["nginx"]
        args: ["-c","/etc/nginx/customconfig/nginx.conf", "-g", "daemon off;"]
        env:
        - name: BASE_SVC_URL
          value: ".canary.svc.cluster.local"
        - name: STABLE_SVC_NAME
          value: kubeapp-production-service
        - name: CANARY_SVC_NAME
          value: kubeapp-canary-service
        - name: CANARY_WEIGHT
          value: "0.5"
        - name: COOKIE_MAX_AGE
          value: "172800"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginxcan-configs
          mountPath: /etc/nginx/conf.d
        - name: nginxcan-mainconfig
          mountPath: /etc/nginx/customconfig
      # Load the configuration files for nginx
      volumes:
        - name: nginxcan-configs
          configMap:
            name: nginxcan-config
        - name: nginxcan-mainconfig
          configMap:
            name: nginxcan-mainconfig
---
apiVersion: v1
kind: Service
metadata:
  name: nginxcan-service
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginxcan

Here we’re creating the reverse proxy and a service to expose it called nginxcan-service.

You can dive deeper into the logic, but for now we’ll just focus on these envvars:

        - name: BASE_SVC_URL
          value: ".canary.svc.cluster.local"
        - name: STABLE_SVC_NAME
          value: kubeapp-production-service
        - name: CANARY_SVC_NAME
          value: kubeapp-canary-service
        - name: CANARY_WEIGHT
          value: "0.5"
        - name: COOKIE_MAX_AGE
          value: "172800"

BASE_SVC_URL is the base URL used to reach our services. To avoid name-resolution issues we specify the fully qualified service address, so our base URL is composed of the namespace and the cluster DNS suffix: .canary.svc.cluster.local (don’t forget the dot at the beginning).

Then STABLE_SVC_NAME and CANARY_SVC_NAME are the app service names (we set them in the deployments above).

CANARY_WEIGHT sets the fraction of traffic sent to the canary (0.5 means 50%).

Finally, COOKIE_MAX_AGE sets the max-age for the affinity cookie, in seconds.

This is the logic when a request comes in:

  • does a cookie exist?
  • if not:
    • generate a random number between 0 and 1
    • if the number is <= CANARY_WEIGHT, send the request to the canary version
    • if the number is > CANARY_WEIGHT, send the request to the stable version
  • if so:
    • read the previously assigned version from the cookie
    • send the request to that version
  • set the cookie
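The same decision logic can be sketched in Go (a rough port of the Lua block, not code from this repo). chooseEnv takes the cookie value, the weight, and the random roll as parameters so it stays deterministic; upstream shows how the service FQDN is assembled from the envvars:

```go
package main

// chooseEnv mirrors the Lua rules: with a zero weight everything goes
// to stable; an existing "Can" cookie wins; otherwise the random roll
// against the weight decides. cookie is "" when no Can cookie was sent.
func chooseEnv(cookie string, weight, roll float64) string {
	if weight <= 0 {
		return "stable"
	}
	if cookie == "canary" || cookie == "stable" {
		return cookie
	}
	if roll <= weight {
		return "canary"
	}
	return "stable"
}

// upstream builds the FQDN the proxy passes traffic to, e.g.
// "kubeapp-canary-service" + ".canary.svc.cluster.local".
func upstream(env, stableSvc, canarySvc, baseURL string) string {
	if env == "canary" {
		return canarySvc + baseURL
	}
	return stableSvc + baseURL
}
```

For example, chooseEnv("", 0.5, 0.3) routes to canary (the roll is under the weight), while chooseEnv("stable", 0.9, 0.1) stays on stable because the cookie wins.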

Deploy the affinity-with-the-canary:

kubectl apply -n canary -f affinity-with-the-canary.yaml

Ok, now we can see the reverse proxy and its service running alongside our app:

kubectl get po -n canary && kubectl get svc -n canary
NAME                                  READY   STATUS    RESTARTS   AGE
kubeapp-canary-876974976-nwbxv        1/1     Running   0          16m
kubeapp-production-676b5f5f6c-qg4hd   1/1     Running   0          16m
kubeapp-production-676b5f5f6c-thn2c   1/1     Running   0          16m
kubeapp-production-676b5f5f6c-vthxl   1/1     Running   0          16m
nginxcan-deployment-b8f48c579-jhq9r   1/1     Running   0          33s
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubeapp-canary-service       NodePort    10.109.77.166    <none>        80:31223/TCP   21m
kubeapp-production-service   NodePort    10.109.245.231   <none>        80:31224/TCP   21m
nginxcan-service             ClusterIP   10.111.52.253    <none>        80/TCP         33s

The logic is that requests will hit nginxcan-service and the reverse proxy will send them to kubeapp-canary-service or kubeapp-production-service.

The ingress

Well, we now need a way to access our app; for this we will use an Ingress. This is the ingress.yaml file:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: app-ingress
spec:
  defaultBackend:
    service:
      name: nginxcan-service
      port:
        number: 80

Deploy it:

kubectl apply -n canary -f ingress.yaml

Test it

Get minikube ingress ip:

kubectl --namespace=canary get ingress/app-ingress --output=json | jq -r '.status.loadBalancer.ingress[0].ip'

This will give you an IP. Let’s hit it from your browser and watch your cookies:

Once you get a cookie for stable or canary you will always hit that version, so delete the cookie and hit the app again to test the other one.
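You can also verify the affinity programmatically. The sketch below fakes the proxy with httptest (a stand-in for illustration, not the real nginx config) and uses an http.Client with a cookie jar to confirm that every request after the first lands on the same environment:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/cookiejar"
	"net/http/httptest"
)

// stickyCheck simulates the proxy's cookie affinity: the fake proxy
// pins the first request to an environment via the Can cookie, and a
// client with a cookie jar should then always land on that environment.
func stickyCheck() bool {
	proxy := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		env := "canary" // pretend the weighted roll picked canary
		if c, err := r.Cookie("Can"); err == nil {
			env = c.Value // honor an existing cookie, like the Lua block does
		}
		http.SetCookie(w, &http.Cookie{Name: "Can", Value: env, Path: "/", MaxAge: 172800})
		fmt.Fprint(w, env)
	}))
	defer proxy.Close()

	jar, _ := cookiejar.New(nil)
	client := &http.Client{Jar: jar}

	first := ""
	for i := 0; i < 5; i++ {
		resp, err := client.Get(proxy.URL)
		if err != nil {
			return false
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if first == "" {
			first = string(body)
		} else if string(body) != first {
			return false // affinity broken
		}
	}
	return true
}
```

Against the real cluster you would point the client at the ingress IP instead; the cookie-jar behaviour is the same.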



That’s it, now you can play more with this solution.

Play more with the canary

Try changing the canary weight (you need to redeploy the reverse proxy).

Try setting the weight to 0.

Change your service names.

Happy flying, canaries!
