How to publish an application in Kubernetes with Azure Pipelines, Nginx, Let’s Encrypt and Cloudflare?

Kat Lim Ruiz · Published in Nerd For Tech · Apr 15, 2020


This article will show you how to publish an ASPNET Core web application as a Docker container running in Kubernetes, with automatic generation of its TLS certificate.

At Juntoz.com, we are currently building our V3 platform, and this article is part of our journey.

Prerequisites

  • You need to install Docker locally.
  • You need to have a Kubernetes cluster up and running. In our case we use Azure Kubernetes Services.
  • The cluster must have helm installed.
  • Install kubectl on your computer.
  • Install the helm client on your computer.
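A quick way to check that the client tools are in place is to print their versions (these commands only touch the local clients, not the cluster):

# verify the local tooling is installed and reachable
docker version --format '{{.Client.Version}}'
kubectl version --client
helm version --short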

Application

The web application is an ASPNET Core 3.1 web application built with Visual Studio 2019. It doesn’t matter what is inside since it will be containerized 👊, but as you will see in the Dockerfile, it is a regular application with some endpoints and that’s it. It is enough for the purpose of this article.

Pipeline definition

We are hosting this pipeline in Azure Pipelines, therefore we will create an azure-pipelines.yml containing all the steps needed to build and deploy it.

variables:
- name: imageName
  value: my-app
- name: kub-pod-tag
  value: $(Build.BuildNumber)

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: Ubuntu-16.04
    steps:
    # optional step, only needed when you publish to a private docker registry
    - task: Docker@2
      displayName: Docker Login to ACR
      inputs:
        command: login
        containerRegistry: $(my-private-registry)
    - task: Docker@2
      displayName: Docker Build and Push
      inputs:
        command: buildAndPush
        repository: $(imageName)
        tags: $(kub-pod-tag)
    - task: CopyFiles@2
      displayName: stage deploy files
      inputs:
        contents: $(build.sourcesDirectory)/k8s*.*
        flattenFolders: true
        targetFolder: $(build.artifactStagingDirectory)
    - task: PublishBuildArtifacts@1
      displayName: publish output folder as this build artifact
      inputs:
        pathtoPublish: $(build.artifactStagingDirectory)
        artifactName: drop

- stage: Staging
  jobs:
  - deployment: Staging
    variables:
    - name: kub-pod-instancecount
      value: 1
    - name: envName
      value: Staging # follow aspnetcore convention
    pool:
      vmImage: Ubuntu-16.04
    environment: STG
    strategy:
      runOnce:
        deploy:
          steps:
          - task: qetza.replacetokens.replacetokens-task.replacetokens@3
            displayName: Replace tokens in **/*
            inputs:
              rootDirectory: $(Pipeline.Workspace)/drop
              targetFiles: '**/*.yml'
              keepToken: true
              tokenPrefix: __
              tokenSuffix: __
          - task: Kubernetes@1
            displayName: kubectl apply
            inputs:
              connectionType: Azure Resource Manager
              azureSubscriptionEndpoint: $(subscription)
              azureResourceGroup: $(rg-stg)
              kubernetesCluster: $(cluster-stg)
              command: apply
              arguments: -f $(Pipeline.Workspace)/drop

- stage: Production
  jobs:
  - deployment: Production
    variables:
    - name: kub-pod-instancecount
      value: 1
    - name: envName
      value: Production # follow aspnetcore convention
    pool:
      vmImage: Ubuntu-16.04
    environment: PROD
    strategy:
      runOnce:
        deploy:
          steps:
          - task: qetza.replacetokens.replacetokens-task.replacetokens@3
            displayName: Replace tokens in **/*
            inputs:
              rootDirectory: $(Pipeline.Workspace)/drop
              targetFiles: '**/*.yml'
              keepToken: true
              tokenPrefix: __
              tokenSuffix: __
          - task: Kubernetes@1
            displayName: kubectl apply
            inputs:
              connectionType: Azure Resource Manager
              azureSubscriptionEndpoint: $(subscription)
              azureResourceGroup: $(rg-prod)
              kubernetesCluster: $(cluster-prod)
              command: apply
              arguments: -f $(Pipeline.Workspace)/drop

There are many concepts in this file; I will try to explain each briefly (since each task could probably be a separate article).

About this file:

  • As you can see, it is divided in three stages: build, staging and production. Staging and Production are pretty much the same except for some variables that are changed.
  • Since the build generates a docker image, the output of the build cannot be that image, so we use the kubernetes manifests as output so they can be opened in the staging and production stages.
  • We are not currently using Helm, so you will see in our manifests some variable tags like __kub-pod-instancecount__ . These will be replaced with Azure Pipeline variables using the task qetza.replacetokens.replacetokens-task.replacetokens@3 . Some variables are coming from Azure Pipelines Library that is why you don’t see the assignment. Our intention is to use Helm in the future.
  • The line environment: PROD means that the stage will affect the given environment. Azure Pipelines lets you define environments, which are basically just a way to group your deployments logically. Yet, they do have a very clever feature called Approvals, where you can require a manual approval before the stage runs (I will write another article about that). For this to execute, the environment has to be defined, or you can simply remove that line.
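To illustrate the token replacement, here is what one of the manifest lines looks like before and after the replacetokens task runs (the value 1 is just an example pipeline variable assignment):

# in the repository, before the replacetokens task runs
replicas: __kub-pod-instancecount__

# after the task runs with the pipeline variable kub-pod-instancecount = 1
replicas: 1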

The application directory looks like this:

src
|--(other app files)
|--App.csproj
azure-pipelines.yml
Dockerfile
k8s-apply.yml

And this is the Dockerfile, which is a very basic implementation for an ASPNET Core application.

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 as builder
WORKDIR /_build
COPY . ./
RUN dotnet restore src/App.csproj
RUN dotnet publish src/App.csproj -c Release -o out --no-restore
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=builder /_build/out ./
# we will expose http only because https will be managed in nginx
ENV ASPNETCORE_URLS=http://*:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "App.dll"]

About this file:

  • Restore and Publish are done in two steps because in our case we needed a nuget.config (not shown in this article), and this way we can take advantage of Docker layer caching.
  • Http is exposed only, not https. This is because https will be terminated at the ingress.
  • The application will run in port 5000.
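Before wiring everything into the pipeline, you can sanity-check the image locally (the image name and port are simply the ones used throughout this article):

docker build -t my-app .
docker run --rm -p 5000:5000 my-app
# then browse to http://localhost:5000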

Deploy the application

First, in our case, we will use a single cluster for all our applications, therefore we will follow some rules to deploy properly without mix-ups:

  1. Each application will have its own namespace. An application will require several resources therefore the namespace is needed to keep things organized.
  2. The cluster will have a global cert-manager since it can handle several secrets at the same time.
  3. Each application (therefore each namespace) will have one ingress controller, one ingress, one certificate issuer, one certificate, one deployment, and one service.

These rules are not necessarily for everyone, but they adapt to our needs.

Let’s look at the manifests, which need to run in order.

  • Create the namespace
# create namespace for this app
apiVersion: v1
kind: Namespace
metadata:
  name: my-ns
  • Create the deployment and service so the application can be published. NOTE: this manifest starts to show the variables (wrapped in __) that will be replaced.
# deploy the application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-dpy
  namespace: my-ns
spec:
  replicas: __kub-pod-instancecount__
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      namespace: my-ns
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: __my-private-registry__/my-app:__kub-pod-tag__
        ports:
        - containerPort: 5000
        resources:
          limits:
            memory: "800Mi"
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: __envName__
---
# create the service that contains the application
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  namespace: my-ns
spec:
  selector:
    app: my-app
  ports:
  - port: 5000
    targetPort: 5000

This is a regular deployment file, yet some notes on it:

  • Since the image is pulled from a private registry (ACR in our case), the cluster needs pull permissions. In AKS you can grant them by attaching the registry to the cluster:

az aks update -n my-aks-cluster -g my-cluster-rg --attach-acr mydockreg

  • You should configure the resource limits on your pod. There is a whole science to it, which is out of the scope of this article.
  • Here you can also set up the liveness and readiness probes. These are key for healthy scaling.
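As a reference, a minimal sketch of what those probes could look like, to be placed under the container definition in the Deployment above (the path / is an assumption here; point it at a real health endpoint in your app):

# fragment for the container spec in the Deployment
livenessProbe:
  httpGet:
    path: /        # assumption: replace with your health endpoint
    port: 5000
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /        # assumption: replace with your readiness endpoint
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10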

With these two, the application should start running, at least internally on HTTP port 5000.

Now let's protect it with an SSL certificate.

Certificate

We will use Let’s Encrypt to generate a certificate that expires in 90 days, and cert-manager to renew it automatically. Obviously this is not for all companies, since some still prefer buying their own certificate from a CA. cert-manager can use a CA certificate too (https://cert-manager.io/docs/configuration/ca).

One of the steps needed to generate the certificate is to verify ownership of the certificate domain. One of the methods we can use is the Cloudflare API, and Cloudflare happens to also be our DNS provider. There are other methods available.

The first step is to install cert-manager. We will use helm for this, and you can follow this guide:

https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm

As stated in the rules, you need to install cert-manager globally (it will be located in its own namespace).
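For reference, at the time of writing the installation from that guide looked roughly like the commands below; treat the exact chart options as an assumption and check the linked guide for the current version:

# add the jetstack repo that hosts the cert-manager chart
helm repo add jetstack https://charts.jetstack.io
helm repo update
# install cert-manager into its own namespace, including its CRDs
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true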

After this step, you need to apply these manifests:

  • Create the secret containing the Cloudflare api token.
# insert secret to modify dns settings in cloudflare
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token
  namespace: my-ns
type: Opaque
stringData:
  api-token: XXXX
---

To generate the api token, follow these steps in Cloudflare:

  1. Go to your account, then to your domain’s Overview tab.
  2. On the right, go to Generate API Token.
  3. In the API Tokens grid, click Create New. NOTE: API Keys are more powerful, so you do not want to use one for this.
  4. Create your api token with:

Two permissions (these will allow cert-manager to update the DNS settings and create a temporary record to validate ownership):

  • Zone/Zone/Read
  • Zone/DNS/Edit

Zone resources (if you have more than one domain, you could restrict it to only include that one):

  • Include All zones

Set TTL expiration

5. Cloudflare will generate a token string which you need to insert into your manifests. This token is only shown once, so store it safely.

6. Important: the token is tied to the user account that generated it. Make sure you will have access to that account in the future.

This token will allow cert-manager to update the DNS settings of your Cloudflare domain to verify ownership.
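If you prefer not to keep the token value in a manifest file, the same secret can be created imperatively (the token value here is a placeholder):

kubectl create secret generic cloudflare-api-token \
  --namespace my-ns \
  --from-literal=api-token=XXXX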

  • Create the Certificate and Issuer

The issuer is the resource that will connect to Cloudflare to validate ownership.

The certificate is the resource representing the TLS certificate, expiration date, domain, organization, etc.

# create the certificate issuer that connects to ACME and generates a certificate
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: my-app-cert-issuer
  namespace: my-ns
spec:
  acme:
    email: $(acme-email)
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: my-app-cert-issuer-privkey
    solvers:
    - dns01:
        cloudflare:
          email: $(cloudflare-account)
          apiTokenSecretRef:
            # this token needs to be linked to the cloudflare email account
            name: cloudflare-api-token
            key: api-token
---
# using the cert issuer, the certificate is downloaded and stored as a secret
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: my-app-cert-info
  namespace: my-ns
spec:
  secretName: my-app-cert-tls
  duration: 2160h # 90d
  renewBefore: 24h # 1d
  organization:
  - $(acme-org)
  dnsNames:
  - $(my-app-cert-domain)
  issuerRef:
    name: my-app-cert-issuer

About this file:

  • acme-email should be the account name in ACME. If you have several domains, you probably want them all to share the same account.
  • cloudflare-account is the email of the Cloudflare account that owns the api token.
  • You can set the certificate duration (max 90 days) and when it will renew (e.g. 24h before it expires).
  • acme-org is the name of the organization that owns the certificate (your company name).
  • my-app-cert-domain is the domain protected by the certificate. It is in this setting where you set either a wildcard domain (*.myapp.com) or a naked domain (myapp.com) depending on your needs.
  • Final note: this is the part that took me the longest to research. Follow the instructions from the cert-manager documentation carefully, and make sure the names are set carefully. I’ve tried to name them as best I could so they show what they really mean. I’m no expert on cert-manager nor on Kubernetes, so if you find something off, please let me know.
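Once the Issuer and Certificate are applied, you can check the progress of the issuance with a few kubectl commands:

# the certificate should eventually report Ready = True
kubectl get certificate -n my-ns
# if it stays pending, the events usually tell you why
kubectl describe certificate my-app-cert-info -n my-ns
kubectl describe issuer my-app-cert-issuer -n my-ns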

Ingress

Now that we have the application running and the certificate generated, we must create an Ingress to open the https protocol (remember that the application will only serve http, and the ingress will do the TLS termination).

I’m not going to discuss what an Ingress is, but it basically represents a public entry point that acts as an L7 load balancer for the given application.

For our case we will use NGINX open source.

The first step is to install the Nginx-Ingress Controller, which acts as the core of the ingress. To install it, I took this guide as a basis (https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm). NOTE: since there are some steps we want to customize, I copy all the steps here; however, do refer to the nginx guide in the future and then customize as below.

  • Install nginx helm repo
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
  • Install the ingress controller
helm install my-app nginx-stable/nginx-ingress --namespace my-ns --set controller.ingressClass=nginx-my-app

Note that we are going to install it in this app’s namespace, and we will customize the ingress class (since there will be one ingress per application).

NOTE: As per the rules defined before, we will create an ingress for each namespace and application. I believe this is better since our application may have different domains and characteristics. However, you can certainly run a single ingress for all your apps.

  • Install the ingress

To install the ingress itself run this manifest:

# set the ingress load balancer (nginx) and configure the rules here
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  namespace: my-ns
  annotations:
    kubernetes.io/ingress.class: nginx-my-app
    cert-manager.io/issuer: my-app-cert-issuer
spec:
  rules:
  - host: $(my-app-domain)
    http:
      paths:
      - backend:
          serviceName: my-app-svc
          servicePort: 5000
        path: /
  tls:
  - hosts:
    - $(my-app-domain)
    secretName: my-app-cert-tls-use

About this file:

  • The link between the ingress and the controller is the annotation kubernetes.io/ingress.class: nginx-my-app, so they have to match. This is also the key to having many ingresses in the same cluster.
  • This connects to the Issuer defined before: my-app-cert-issuer .
  • my-app-domain is the domain the ingress will listen to and then it will connect to my-app-svc to serve the request.
  • About the tls section: use the certificate domain, and make sure the secret name does not match any of the secrets defined before. It will be a new secret that the ingress uses to store the TLS certificate.
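After the ingress is up and your DNS points at it, you can verify the TLS termination end to end (myapp.com is the placeholder domain used throughout this article):

# -v prints the TLS handshake, including the certificate chain
curl -v https://myapp.com/
# or inspect the certificate validity dates directly
openssl s_client -connect myapp.com:443 -servername myapp.com </dev/null \
  | openssl x509 -noout -dates -subject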

Once you run all the commands and manifests shown here, your app should be up and running. Kubernetes will assign an external IP to the ingress service, but it might take some seconds to do so.

You can get this IP through the command:

kubectl get service,ingress -n my-ns

And you will get something like this:

NAME                           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
service/my-app-nginx-ingress   LoadBalancer   X.X.X.X      E.E.E.E       80:XXXX/TCP,443:XXX/TCP
service/my-app-svc             ClusterIP      X.X.X.X      <none>        5000/TCP

NAME                           HOSTS       ADDRESS   PORTS
ingress.extensions/my-app-ing  myapp.com   E.E.E.E   80, 443

Where E.E.E.E is your public IP.

Change your DNS records

Typically you will want to map your application to a domain for easy access, so to do that you need to create an A record in your DNS and assign the external IP address of the ingress.

Cloudflare is so awesome that your DNS changes reflect in 2 or 3 seconds 😃.

With all the steps taken here, you should have a web application up and running, with free auto-renewing certificates and a load balancer for scale.

Some additional notes

  • At first I was going to have two Dockerfiles: one for the web application, and one for nginx acting as a reverse proxy. However, an ingress solution is much more scalable (although you cannot configure it as much as you might like for more advanced scenarios).
  • We are currently using nginx open source. If you can pay for Nginx Plus, you should, and you will need to change your helm command when installing the nginx controller. The given link shows that parameter too.
  • To create a web application that is enough for this article, just open Visual Studio 2019: New Project, ASPNET Core Web Application, NetCore 3.1, MVC application, remove the HTTPS option, and do not enable Docker support. It will create a web application with some paths already working.
  • In this case, we are NOT using Nginx as a web server, but as an Ingress. Therefore it only provides load-balancer capabilities. If you need to run it as a web server, you would probably have to do the dual-Dockerfile sidecar pattern.

Happy coding!
