Tuesday, October 29, 2024

Grafana K6


 

In this post we review Grafana K6, a load testing tool that runs JavaScript-like test scripts.

We will use the following test script:

test.js

import http from 'k6/http';
import { check, sleep } from 'k6';

// test options: 2 virtual users, running for 5 seconds
export const options = {
  vus: 2,
  duration: '5s',
};

export default function () {
  const url = 'http://test.k6.io/login';
  const payload = JSON.stringify({
    email: 'aaa',
    password: 'bbb',
  });

  const params = {
    headers: {
      'Content-Type': 'application/json',
    },
  };

  // send the login request and verify the response status
  const result = http.post(url, payload, params);
  check(result, {
    'is status 200': (r) => r.status === 200,
  });
  sleep(1);
}
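As a side note, if k6 is installed on the local machine, the same script can be run directly, and the options can also be overridden from the command line:

# run with the options defined in the script
k6 run test.js

# or override the options from the command line
k6 run --vus 2 --duration 5s test.js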



We will review two methods of running the tool: Docker and Kubernetes.


Running in Docker

To run in Docker, we use the following Dockerfile:

Dockerfile

FROM grafana/k6
WORKDIR /usr/src/app
COPY . .
# the grafana/k6 base image already sets "k6" as the entrypoint, so pass only the sub-command
CMD [ "run", "test.js" ]


Next we build the image and run the container:

#!/usr/bin/env bash

set -e
cd "$(dirname "$0")"

docker build . -t my-k6
docker run --rm my-k6 run test.js
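Alternatively, the custom image can be skipped entirely by piping the script into the stock grafana/k6 image over stdin:

docker run --rm -i grafana/k6 run - <test.js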


The output is:

[+] Building 1.1s (8/8) FINISHED                                                                                                                                  docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 114B 0.0s
=> [internal] load metadata for docker.io/grafana/k6:latest 0.7s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/3] FROM docker.io/grafana/k6:latest@sha256:d39047ea6c5981ac0abacec2ea32389f22a7aa68bc8902c08b356cc5dd74aac9 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 904B 0.0s
=> CACHED [2/3] WORKDIR /usr/src/app 0.0s
=> [3/3] COPY . . 0.1s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:dd14bff333a7714cf09ff4c60a6ae820174bb575404ed4c63acf871a27148878 0.0s
=> => naming to docker.io/library/my-k6 0.0s

         /\      Grafana   /‾‾/
    /\  /  \     |\  __   /  /
   /  \/    \    | |/ /  /   ‾‾\
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/

execution: local
script: test.js
output: -

scenarios: (100.00%) 1 scenario, 2 max VUs, 35s max duration (incl. graceful stop):
* default: 2 looping VUs for 5s (gracefulStop: 30s)


running (01.0s), 2/2 VUs, 0 complete and 0 interrupted iterations
default [ 20% ] 2 VUs 1.0s/5s

running (02.0s), 2/2 VUs, 0 complete and 0 interrupted iterations
default [ 40% ] 2 VUs 2.0s/5s

running (03.0s), 2/2 VUs, 2 complete and 0 interrupted iterations
default [ 60% ] 2 VUs 3.0s/5s

running (04.0s), 2/2 VUs, 4 complete and 0 interrupted iterations
default [ 80% ] 2 VUs 4.0s/5s

running (05.0s), 2/2 VUs, 6 complete and 0 interrupted iterations
default [ 100% ] 2 VUs 5s

running (06.0s), 2/2 VUs, 6 complete and 0 interrupted iterations
default ↓ [ 100% ] 2 VUs 5s

✗ is status 200
0% — ✓ 0 / ✗ 8

checks.........................: 0.00% 0 out of 8
data_received..................: 16 kB 2.6 kB/s
data_sent......................: 3.8 kB 605 B/s
http_req_blocked...............: avg=75.92ms min=4.21µs med=17.25µs max=384.4ms p(90)=302.09ms p(95)=379.63ms
http_req_connecting............: avg=176.26µs min=0s med=0s max=717.99µs p(90)=703.73µs p(95)=707.57µs
http_req_duration..............: avg=191.36ms min=150.99ms med=156.8ms max=354.78ms p(90)=252.73ms p(95)=278.68ms
{ expected_response:true }...: avg=202.91ms min=153.28ms med=167.17ms max=354.78ms p(90)=283.75ms p(95)=319.27ms
http_req_failed................: 50.00% 8 out of 16
http_req_receiving.............: avg=155.79µs min=77.92µs med=154.46µs max=292.35µs p(90)=237.78µs p(95)=269.18µs
http_req_sending...............: avg=147.67µs min=27.02µs med=81.32µs max=533.87µs p(90)=296.64µs p(95)=377.56µs
http_req_tls_handshaking.......: avg=47.53ms min=0s med=0s max=383.46ms p(90)=188.55ms p(95)=378.7ms
http_req_waiting...............: avg=191.06ms min=150.59ms med=156.55ms max=353.98ms p(90)=252.5ms p(95)=278.36ms
http_reqs......................: 16 2.55197/s
iteration_duration.............: avg=1.53s min=1.3s med=1.33s max=2.21s p(90)=2.1s p(95)=2.16s
iterations.....................: 8 1.275985/s
vus............................: 2 min=2 max=2
vus_max........................: 2 min=2 max=2


running (06.3s), 0/2 VUs, 8 complete and 0 interrupted iterations
default ✓ [ 100% ] 2 VUs 5s


Running in Kubernetes

To run in Kubernetes, we deploy the k6 operator, which creates Kubernetes jobs for each test run.


curl https://raw.githubusercontent.com/grafana/k6-operator/main/bundle.yaml | kubectl apply -f -
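Before submitting a test run, it is worth verifying that the operator is up. Assuming the bundle's default k6-operator-system namespace, something like:

# operator pods (assuming the default bundle namespace)
kubectl get pods -n k6-operator-system

# the TestRun CRD should now be registered
kubectl get crd testruns.k6.io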


In this case the test run output is sent to Prometheus, so we build a dedicated runner image that includes the Prometheus remote write output extension.

Dockerfile

FROM golang:1.20 AS builder

RUN go install go.k6.io/xk6/cmd/xk6@latest

RUN xk6 build \
--with github.com/grafana/xk6-output-prometheus-remote@latest \
--output /k6

FROM grafana/k6:latest
COPY --from=builder /k6 /usr/bin/k6


Build the test runner image:

docker build -t k6-extended:local .
kind load docker-image k6-extended:local
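To verify that the extension was compiled in, the extended image can be asked for its version; the output should list the xk6-output-prometheus-remote extension:

docker run --rm k6-extended:local version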


We use the TestRun CRD to configure the test run.


k8s_testrun.yml

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: my-testrun
spec:
  cleanup: post
  parallelism: 2
  script:
    configMap:
      name: my-test
      file: test.js
  runner:
    image: k6-extended:local
    env:
      - name: K6_PROMETHEUS_RW_SERVER_URL
        value: http://prometheus/api/v1/write
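The K6_PROMETHEUS_RW_SERVER_URL above assumes a Prometheus service named prometheus in the same namespace; adjust it to your deployment. However Prometheus is deployed, it only accepts remote write requests when the receiver is enabled, so it must run with the matching flag, for example:

prometheus --config.file=/etc/prometheus/prometheus.yml --web.enable-remote-write-receiver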


And we run the test:


kubectl delete configmap my-test --ignore-not-found=true
kubectl create configmap my-test --from-file ./test.js
kubectl apply -f ./k8s_testrun.yml
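The operator creates runner jobs for the test (two here, matching parallelism: 2). Progress can be followed with kubectl; the job name below is an assumption based on how the operator typically derives names from the TestRun name:

# status of the TestRun resource
kubectl get testrun my-testrun

# jobs and pods created by the operator for this run
kubectl get jobs,pods | grep my-testrun

# logs of the first runner (assumed job name: <testrun-name>-1)
kubectl logs job/my-testrun-1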



The results can be viewed in Grafana using a predefined dashboard.




Monday, October 14, 2024

Split Load Using NATS Partitions



 

In this post we will review NATS partitioning and how to use it to split load among multiple pods.

We've already reviewed the steps to set up a NATS cluster in Kubernetes in this post. As part of the NATS StatefulSet template we have a ConfigMap, which is mounted into the NATS server container.



- name: nats
  args:
    - --config
    - /etc/nats-config/nats.conf


  volumeMounts:
    - mountPath: /etc/nats-config
      name: config


volumes:
  - configMap:
      name: nats-config



The ConfigMap is as follows:


apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-config
data:
  nats.conf: |
    {
      "cluster": {
        "name": "nats",
        "no_advertise": true,
        "port": 6222,
        "routes": [
          "nats://nats-0.nats-headless:6222",
          "nats://nats-1.nats-headless:6222"
        ]
      },
      "http_port": 8222,
      "lame_duck_duration": "30s",
      "lame_duck_grace_period": "10s",
      "pid_file": "/var/run/nats/nats.pid",
      "port": 4222,
      "server_name": $SERVER_NAME,
      "mappings": {
        "application.*": "application.{{partition(3,1)}}.{{wildcard(1)}}"
      }
    }



NATS partitioning is configured by the "mappings" section in the config. In this case the producer publishes messages to the NATS subject "application.<APPLICATION_ID>".

We want to split the load among 3 pods, and hence we configure the following:

"application.*": "application.{{partition(3,1)}}.{{wildcard(1)}}"


This instructs NATS to take the value matched by the first wildcard in "application.*", hash it into one of 3 partitions (0, 1, 2), and map the message to the subject "application.<PARTITION_ID>.<APPLICATION_ID>".


Now we can have 3 pods, each subscribing to its own partition subject "application.<PARTITION_ID>.*", and the messages are split accordingly. Notice that a specific APPLICATION_ID is always mapped to the same PARTITION_ID.
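A quick way to see the mapping in action is the nats CLI; which of the three partitions a given application id lands on is decided by the deterministic partition hash, so the partition 0 below is only illustrative:

# each pod subscribes to its own partition
nats sub "application.0.*"

# publish to the original subject; the server remaps it to application.<PARTITION_ID>.myapp
nats pub "application.myapp" "hello"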

Also notice that this is a static assignment, regardless of the actual load generated by each application. If the load across applications is highly unbalanced, this might not be a suitable method.