To create a kubernetes service based on a NodeJS server, you've probably set up an express server, configured the deployment and the service in kubernetes, and you're done!
But, wait...
What about stability?
What about upgrade?
You probably want kubernetes to restart your NodeJS application if it's failing. Will it?
You probably want kubernetes to stop the old version of the application only after the new version deployment is ready. Will it?
This is where kubernetes liveness and readiness probes come to the rescue.
Let's review these probes.
Liveness Probe
Well, that is obvious.
But why do we really need to do anything here?
Won't kubernetes find that our process is down, and automatically restart it?
The answer is yes, but that's not quite the right question.
What if our process is up, but it is stuck, and not responding to new requests?
This is where we need to help kubernetes detect the problem and restart our pod.
We can do this by implementing a handler for a dedicated health-check URL.
Readiness Probe
We already have a liveness probe.
Why do we need another?
Actually, we don't have to include a readiness probe in every service.
A readiness probe should be added when you want kubernetes to stop routing traffic to the pod, because something is not ready, while still avoiding a restart of the pod.
A classical example is dependencies:
- Service A requires service B for its operation.
- Service A is up and running, but it cannot serve its clients since service B cannot be accessed.
- Restarting service A will not fix the problem.
Implementation Example
Here is an example of NodeJS code implementing liveness and readiness probes.
const express = require('express');
const server = express();

// Liveness: if this handler stops answering, kubernetes will restart the pod.
server.get('/probe/live', (req, res) => {
  res.status(200).send("ALIVE");
});

// Readiness: any non-success status tells kubernetes to stop routing
// traffic to this pod, without restarting it.
server.get('/probe/ready', async (req, res) => {
  if (myDependenciesAreOk()) {
    res.status(200).send("READY");
  } else {
    res.status(500).send("NOT READY");
  }
});

server.listen(8080);
To use these probes, we need to configure the kubernetes deployment.
Note that the parameters of each probe should be tuned carefully: on the one hand, to avoid a high impact on the deployment, and on the other, to prevent clients from reaching an unavailable service. (See some guidelines here)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          livenessProbe:
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            httpGet:
              path: /probe/live
              port: 8080
          readinessProbe:
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
            httpGet:
              path: /probe/ready
              port: 8080