This guide demonstrates how to perform Canary rollouts using the SMI Traffic Split configuration.
Prerequisites
- Kubernetes cluster running Kubernetes v1.20.0 or greater.
- Have OSM installed.
- Have `kubectl` available to interact with the API server.
- Have the `osm` CLI available for managing the service mesh.
Demo
In this demo, we will deploy an HTTP application and perform a canary rollout where a new version of the application is deployed to serve a percentage of traffic directed to the service.
To split traffic across multiple service backends, the SMI Traffic Split API will be used. More about the usage of this API can be found in the traffic split guide. For traffic to be split transparently, client applications must direct traffic to the FQDN of the root service referenced in a `TrafficSplit` resource. In this demo, the `curl` client will direct traffic to the `httpbin` root service, initially backed by version `v1` of the service, and then a canary rollout will direct a percentage of traffic to version `v2` of the service.
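Concretely, the client (running inside the mesh) always addresses the root service, never a versioned backend service. In the steps below this looks roughly like the following, where the port `14001` comes from the demo manifests:

```bash
# Run from the curl client pod inside the mesh: traffic targets the root
# service FQDN (httpbin.httpbin resolves to httpbin.httpbin.svc.cluster.local),
# not httpbin-v1 or httpbin-v2 directly.
curl -sI http://httpbin.httpbin:14001/json
```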
The following steps demonstrate the canary rollout deployment strategy.
Note: Permissive traffic policy mode is enabled to avoid the need to create explicit access control policies.
- Enable permissive mode.

  ```bash
  osm_namespace=osm-system # Replace osm-system with the namespace where OSM is installed
  kubectl patch meshconfig osm-mesh-config -n "$osm_namespace" -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
  ```
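  Optionally, read the setting back from the MeshConfig to confirm it took effect (a quick sanity check, not part of the original walkthrough; the field path mirrors the patch above):

  ```bash
  # Should print: true
  kubectl get meshconfig osm-mesh-config -n "$osm_namespace" -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'
  ```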
- Deploy the `curl` client into the `curl` namespace after enrolling its namespace to the mesh.

  ```bash
  # Create the curl namespace
  kubectl create namespace curl

  # Add the namespace to the mesh
  osm namespace add curl

  # Deploy curl client in the curl namespace
  kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/main/manifests/samples/curl/curl.yaml -n curl
  ```

  Confirm the `curl` client pod is up and running.

  ```console
  $ kubectl get pods -n curl
  NAME                    READY   STATUS    RESTARTS   AGE
  curl-54ccc6954c-9rlvp   2/2     Running   0          20s
  ```
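  If the pod does not come up with both containers ready, it can help to confirm the namespace was actually enrolled in the mesh; assuming your `osm` CLI provides the `namespace list` subcommand, the `curl` namespace should appear in its output:

  ```bash
  # The curl namespace should be listed as part of the mesh
  osm namespace list
  ```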
- Create the root `httpbin` service that clients will direct traffic to. The service has the selector `app: httpbin`.

  ```bash
  # Create the httpbin namespace
  kubectl create namespace httpbin

  # Add the namespace to the mesh
  osm namespace add httpbin

  # Create the httpbin root service and service account
  kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/main/manifests/samples/canary/httpbin.yaml -n httpbin
  ```
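  For orientation, the root service defined in the manifest above is conceptually similar to the sketch below; the selector comes from this step and the port number from the curl commands later in the demo, but consult the manifest itself for the authoritative definition:

  ```yaml
  # Root service sketch: selects every httpbin pod regardless of version
  apiVersion: v1
  kind: Service
  metadata:
    name: httpbin
    namespace: httpbin
  spec:
    ports:
    - port: 14001
      name: http
    selector:
      app: httpbin
  ```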
- Deploy version `v1` of the `httpbin` service. The service `httpbin-v1` has the selector `app: httpbin, version: v1`, and the deployment `httpbin-v1` has the labels `app: httpbin, version: v1` matching the selector of both the `httpbin` root service and the `httpbin-v1` service.

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/main/manifests/samples/canary/httpbin-v1.yaml -n httpbin
  ```
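  A quick way to confirm the `v1` pods came up with the expected labels (a sanity check, not part of the original walkthrough):

  ```bash
  # Expect one or more Running pods carrying both labels
  kubectl get pods -n httpbin -l app=httpbin,version=v1
  ```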
- Create an SMI TrafficSplit resource that directs all traffic to the `httpbin-v1` service.

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    name: http-split
    namespace: httpbin
  spec:
    service: httpbin.httpbin.svc.cluster.local
    backends:
    - service: httpbin-v1
      weight: 100
  EOF
  ```
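  To double-check that the resource was created with the intended backends and weights (the `trafficsplit` resource name is registered by the SMI CRDs installed with OSM):

  ```bash
  kubectl get trafficsplit http-split -n httpbin -o yaml
  ```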
- Confirm all traffic directed to the root service FQDN `httpbin.httpbin.svc.cluster.local` is routed to the `httpbin-v1` pod. This can be verified by inspecting the HTTP response headers and confirming that the request succeeds and the pod displayed corresponds to `httpbin-v1`.

  ```console
  $ for i in {1..10}; do kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -sI http://httpbin.httpbin:14001/json | egrep 'HTTP|pod'; done
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  ```

  The above output indicates all 10 requests returned HTTP 200 OK and were served by the `httpbin-v1` pod.
- Prepare the canary rollout by deploying version `v2` of the `httpbin` service. The service `httpbin-v2` has the selector `app: httpbin, version: v2`, and the deployment `httpbin-v2` has the labels `app: httpbin, version: v2` matching the selector of both the `httpbin` root service and the `httpbin-v2` service.

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/main/manifests/samples/canary/httpbin-v2.yaml -n httpbin
  ```
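  At this point `v2` is deployed but should not yet be receiving traffic, because the TrafficSplit created earlier still assigns all weight to `httpbin-v1`. A quick spot check (the same kind of loop used in the earlier verification step) should keep showing only `httpbin-v1` pods:

  ```bash
  # Every response should still come from an httpbin-v1 pod
  for i in {1..5}; do
    kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- \
      curl -sI http://httpbin.httpbin:14001/json | egrep 'HTTP|pod'
  done
  ```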
- Perform the canary rollout by updating the SMI TrafficSplit resource to split traffic directed to the root service FQDN `httpbin.httpbin.svc.cluster.local` to both the `httpbin-v1` and `httpbin-v2` services, fronting the `v1` and `v2` versions of the `httpbin` service respectively. We will distribute the weight equally to demonstrate traffic splitting.

  ```bash
  kubectl apply -f - <<EOF
  apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    name: http-split
    namespace: httpbin
  spec:
    service: httpbin.httpbin.svc.cluster.local
    backends:
    - service: httpbin-v1
      weight: 50
    - service: httpbin-v2
      weight: 50
  EOF
  ```
- Confirm traffic is split in proportion to the weights assigned to the backend services. Since we configured a weight of `50` for both `v1` and `v2`, requests should be load balanced across both versions, as seen below.

  ```console
  $ for i in {1..10}; do kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -sI http://httpbin.httpbin:14001/json | egrep 'HTTP|pod'; done
  HTTP/1.1 200 OK
  pod: httpbin-v2-6b48697db-cdqld
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v2-6b48697db-cdqld
  HTTP/1.1 200 OK
  pod: httpbin-v2-6b48697db-cdqld
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  HTTP/1.1 200 OK
  pod: httpbin-v2-6b48697db-cdqld
  HTTP/1.1 200 OK
  pod: httpbin-v2-6b48697db-cdqld
  HTTP/1.1 200 OK
  pod: httpbin-v1-77c99dccc9-q2gvt
  ```

  The above output indicates all 10 requests returned HTTP 200 OK, and the `httpbin-v1` and `httpbin-v2` pods each responded to 5 requests, in line with the weights assigned to them in the `TrafficSplit` configuration.
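  With larger request counts, the per-pod distribution is easier to read as a tally than as raw headers. A small variation of the loop above (assuming the same curl client pod lookup) counts responses per backend pod:

  ```bash
  # Count how many of 50 requests each backend pod served; with a 50/50 split
  # the two counts should be roughly equal
  client="$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')"
  for i in {1..50}; do
    kubectl exec -n curl "$client" -c curl -- curl -sI http://httpbin.httpbin:14001/json | grep '^pod:'
  done | sort | uniq -c
  ```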