In this post we will see how to install a NATS cluster with its configuration tuned for high throughput.
The following is an example script that installs the tuned NATS cluster using the official Helm chart:
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
rm -f nats.yaml
cat <<EOF > nats.yaml
# nats.yaml - values for the nats/nats Helm chart
config:
  jetstream:
    enabled: false
  cluster:
    enabled: true
    replicas: 4
  merge:
    # Bigger messages & buffers
    max_payload: 8388608     # 8 * 1024 * 1024 bytes
    write_deadline: "2s"
    max_pending: 536870912   # 512 * 1024 * 1024 bytes
    # Connection & subscription scaling
    max_connections: 100000
    max_subscriptions: 1000000
    # Disable anything not needed
    debug: false
    trace: false
    logtime: false

container:
  merge:
    resources:
      limits:
        cpu: "4"
        memory: "2Gi"
EOF
helm delete nats --namespace default --ignore-not-found
helm upgrade --install nats \
nats/nats \
--namespace default \
--create-namespace \
-f nats.yaml
rm -f nats.yaml
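Once the script finishes, it is worth doing a quick publish/subscribe round trip against the cluster. The following is a minimal sketch using the nats.go client; it assumes the chart's client service has been exposed locally with kubectl port-forward svc/nats 4222:4222, so adjust the URL and subject to your own setup.

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Assumes: kubectl port-forward svc/nats 4222:4222
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Subscribe first, then publish on the same connection, so the
	// subscription is registered on the server before the message arrives.
	sub, err := nc.SubscribeSync("smoke.test")
	if err != nil {
		log.Fatal(err)
	}
	if err := nc.Publish("smoke.test", []byte("hello")); err != nil {
		log.Fatal(err)
	}
	msg, err := sub.NextMsg(2 * time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("round trip OK: %s\n", msg.Data)
}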
We increase the amount of memory NATS may use for buffering via the max_pending setting. This configures NATS to buffer more data per client connection, which lets producers publish faster while consumers eventually catch up with the backlog.
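Note that the server-side max_pending buffer is separate from the buffer the nats.go client library keeps per subscription; if a consumer callback falls behind, the client drops messages once its own pending limits fill up. The snippet below is a sketch of raising those client-side limits; the URL, the orders.> subject, and the limit values are illustrative only.

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Illustrative URL: point this at your NATS client service.
	nc, err := nats.Connect("nats://nats.default.svc:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Async subscription: received messages queue up in the client library
	// until the callback gets to them.
	sub, err := nc.Subscribe("orders.>", func(m *nats.Msg) {
		// ... handle the message ...
	})
	if err != nil {
		log.Fatal(err)
	}

	// Raise the per-subscription pending limits (messages, bytes). These are
	// client-side limits, separate from the server's max_pending above.
	if err := sub.SetPendingLimits(1_000_000, 512*1024*1024); err != nil {
		log.Fatal(err)
	}

	select {} // keep the process alive so the subscription keeps receiving
}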
Producers and consumers connect to a random available NATS pod, so with a small number of producers, consumers, and NATS pods, the load across the NATS pods will probably be unbalanced.
To avoid this, use a connection pool on both the producers and the consumers, spreading the load across the NATS pods; see the sketch below.
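A minimal sketch of such a pool with the nats.go client follows. It opens several connections and hands them out round-robin; how evenly the connections land on the different NATS pods depends on the chart's client Service and DNS, and the URL, subject, and pool size here are assumptions to adapt.

package main

import (
	"log"
	"sync/atomic"

	"github.com/nats-io/nats.go"
)

// connPool holds a fixed set of NATS connections and hands them out
// round-robin, so traffic is spread over more than one NATS pod.
type connPool struct {
	conns []*nats.Conn
	next  atomic.Uint64
}

// newConnPool opens size connections to url (the url is an assumption:
// adjust it to your release name, namespace, and port).
func newConnPool(url string, size int) (*connPool, error) {
	p := &connPool{}
	for i := 0; i < size; i++ {
		nc, err := nats.Connect(url)
		if err != nil {
			p.Close()
			return nil, err
		}
		p.conns = append(p.conns, nc)
	}
	return p, nil
}

// Get returns the next connection in round-robin order.
func (p *connPool) Get() *nats.Conn {
	n := p.next.Add(1)
	return p.conns[n%uint64(len(p.conns))]
}

// Close closes every connection in the pool.
func (p *connPool) Close() {
	for _, nc := range p.conns {
		nc.Close()
	}
}

func main() {
	pool, err := newConnPool("nats://nats.default.svc:4222", 8)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	// Producer side: each publish goes out on a different pooled connection.
	// Consumers can use the same pool to spread their subscriptions.
	for i := 0; i < 100; i++ {
		if err := pool.Get().Publish("load.test", []byte("payload")); err != nil {
			log.Fatal(err)
		}
	}
}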
With 1 KB messages, a single NATS pod can handle roughly one million messages per second. Make sure the NATS pods are not saturated on CPU or memory.
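To get a rough throughput number for your own cluster, you can run a simple publisher loop like the sketch below while watching pod CPU and memory with kubectl top pods; the URL, subject, and duration are illustrative, and the nats CLI's bench command is a more complete alternative.

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Illustrative URL: use your NATS client service or a port-forward.
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	payload := make([]byte, 1024) // 1 KB messages, matching the figure above
	deadline := time.Now().Add(10 * time.Second)
	count := 0

	for time.Now().Before(deadline) {
		if err := nc.Publish("bench.throughput", payload); err != nil {
			log.Fatal(err)
		}
		count++
	}
	// Flush so all buffered publishes reach the server before reporting.
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("published %d msgs in 10s (~%d msgs/sec)\n", count, count/10)
}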