In this post we will deploy a minimal RabbitMQ cluster on Kubernetes.
RabbitMQ is a lightweight message broker widely used in the industry.
I have used it as an RPC server in our product, but you can use it for many other patterns, such as simple queues, work queues, and publish/subscribe.
In this post we will not use persistence for RabbitMQ, but if you need it, you can add it using Kubernetes Persistent Volumes.
To deploy RabbitMQ, there are two steps:
- Prepare a RabbitMQ image
- Create Kubernetes resources
Prepare a RabbitMQ Image
The image is based on the following Dockerfile:
FROM rabbitmq:3.8.3
COPY files /
ENTRYPOINT /entrypoint.sh
As you can see, we're only enriching the base RabbitMQ image with several additional files from the files subfolder.
The added files include entrypoint.sh, which is, as its name implies, the Docker image entry point.
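For reference, building and pushing the image could look as follows; the registry name and tag below are only placeholders, replace them with your own:
# Build the image from the folder containing the Dockerfile and the files subfolder
docker build -t my-registry.example.com/rabbit:latest .
# Push it so the cluster nodes can pull it (registry name is a placeholder)
docker push my-registry.example.com/rabbit:latest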
entrypoint.sh:
#!/usr/bin/env bash
# Write the Erlang cookie shared by all cluster nodes
echo myClusterPassword > /var/lib/rabbitmq/.erlang.cookie
chmod 700 /var/lib/rabbitmq/.erlang.cookie
# Configure the node in the background once it is up, then start the server
/init.sh &
exec rabbitmq-server
The entry point creates the Erlang cookie, which is used for intra-cluster communication.
Then it runs the init script in the background, and then runs the RabbitMQ server.
The purpose of the init.sh script is to configure RabbitMQ once it has started.
init.sh:
#!/usr/bin/env bash
# Wait until the RabbitMQ node is up and responding
until rabbitmqctl --erlang-cookie myClusterPassword node_health_check > /tmp/rabbit_health_check 2>&1
do
  sleep 1
done
# Mirror all queues across the cluster, with automatic synchronization
rabbitmqctl --erlang-cookie myClusterPassword set_policy ha-all "" '{"ha-mode":"all", "ha-sync-mode": "automatic"}'
# Create an administrator user with full permissions on the default vhost
rabbitmqctl add_user myUser myPassword
rabbitmqctl set_user_tags myUser administrator
rabbitmqctl set_permissions -p / myUser ".*" ".*" ".*"
The init script has the following logic:
- Wait for RabbitMQ to start
- Mirror all queues across the cluster, with automatic synchronization
- Add myUser as an administrator with full permissions
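Once the cluster is running (we will create it in the next section), you can verify what the init script did, for example by listing the policy and the user it created:
# The ha-all policy should appear with ha-mode: all and ha-sync-mode: automatic
kubectl exec -it rabbit-statefulset-0 -- rabbitmqctl list_policies
# myUser should be listed with the administrator tag
kubectl exec -it rabbit-statefulset-0 -- rabbitmqctl list_users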
One additional file is added: probe.sh. It will later be used for the Kubernetes probes.
probe.sh:
#!/usr/bin/env bash
rabbitmqctl status
While we could call the status command directly from the Kubernetes StatefulSet, in most cases you will find yourself adding additional logic to the probe; hence, we are using a dedicated script.
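For example, a slightly extended probe might also fail when a memory or disk alarm is in effect; this is only a sketch of the kind of logic you could add:
#!/usr/bin/env bash
# Fail the probe if the node is not up
rabbitmqctl status > /dev/null 2>&1 || exit 1
# Fail the probe if a local memory or disk alarm is in effect
rabbitmq-diagnostics -q check_local_alarms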
Create Kubernetes Resources
To deploy RabbitMQ on Kubernetes, we will use the RabbitMQ Kubernetes peer discovery plugin. This plugin requires permission to read the endpoints of the RabbitMQ service.
We will start by configuring a service account with the required permissions.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rabbit-role
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rabbit-role-binding
subjects:
  - kind: ServiceAccount
    name: rabbit-service-account
    namespace: default
roleRef:
  kind: ClusterRole
  name: rabbit-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbit-service-account
  namespace: default
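You can verify that the binding actually grants the service account the required access, for example:
# Should print "yes" after the resources above are applied
kubectl auth can-i get endpoints --as=system:serviceaccount:default:rabbit-service-account -n default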
Next we'll create a ConfigMap with the RabbitMQ configuration files: rabbitmq.conf and enabled_plugins.
Notice the RabbitMQ configuration file includes the name of the discovery service, which we will create next.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbit-config
data:
  rabbitmq.conf: |-
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.address_type = hostname
    cluster_formation.k8s.hostname_suffix = .rabbit-discovery-service.default.svc.cluster.local
    cluster_formation.k8s.service_name = rabbit-discovery-service
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    queue_master_locator = min-masters
  enabled_plugins: |-
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].
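Once the pods are running (they are created by the StatefulSet below), you can confirm that the mounted configuration was picked up and the plugins were enabled, for example:
# List the plugins enabled on the first node
kubectl exec -it rabbit-statefulset-0 -- rabbitmq-plugins list -e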
The RabbitMQ deployment requires two services.
The standard service is used to access the RabbitMQ instance.
In addition, we're adding a headless service - the discovery service. This service is used by the RabbitMQ peer discovery plugin to find the cluster instances.
---
apiVersion: v1
kind: Service
metadata:
  name: rabbit-service
spec:
  selector:
    configid: rabbit-container
  type: NodePort
  ports:
    - port: 80
      targetPort: 5672
      name: amqp
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: rabbit-discovery-service
spec:
  selector:
    configid: rabbit-container
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - port: 5672
      name: amqp
    - port: 15672
      name: http
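Because the discovery service is headless, each pod of the StatefulSet gets its own DNS record under it, which is what the peer discovery plugin and the RABBITMQ_NODENAME value rely on. Assuming you have a pod with DNS tools available (such as the dnsutils pod from the Kubernetes DNS debugging guide), you can check a pod's record once the cluster is up:
# Resolve the first pod through the headless discovery service
kubectl exec -ti dnsutils -- nslookup rabbit-statefulset-0.rabbit-discovery-service.default.svc.cluster.local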
The last resource is the StatefulSet. We will create three pods, to ensure a minimal valid cluster quorum.
Notice that the StatefulSet includes liveness and readiness probes pointing to the probe.sh script that we created earlier.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbit-statefulset
spec:
  serviceName: rabbit-discovery-service
  replicas: 3
  selector:
    matchLabels:
      configid: rabbit-container
  template:
    metadata:
      labels:
        configid: rabbit-container
    spec:
      serviceAccountName: rabbit-service-account
      terminationGracePeriodSeconds: 10
      containers:
        - name: rabbit
          image: replace-with-our-rabbit-image-name:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: RABBITMQ_USE_LONGNAME
              value: "true"
            - name: RABBITMQ_NODENAME
              value: "rabbit@$(HOSTNAME).rabbit-discovery-service.default.svc.cluster.local"
          volumeMounts:
            - name: rabbit-config
              mountPath: /etc/rabbitmq/enabled_plugins
              subPath: enabled_plugins
            - name: rabbit-config
              mountPath: /etc/rabbitmq/rabbitmq.conf
              subPath: rabbitmq.conf
          livenessProbe:
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
            initialDelaySeconds: 15
            periodSeconds: 10
            exec:
              command:
                - /bin/sh
                - -c
                - /probe.sh
          readinessProbe:
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 1
            initialDelaySeconds: 15
            periodSeconds: 10
            exec:
              command:
                - /bin/sh
                - -c
                - /probe.sh
      volumes:
        - name: rabbit-config
          configMap:
            name: rabbit-config
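Assuming the resources above are saved to files (the file names below are just an example), applying them and checking the cluster could look like this:
# Apply all the resources
kubectl apply -f rabbit-rbac.yaml -f rabbit-configmap.yaml -f rabbit-services.yaml -f rabbit-statefulset.yaml
# Wait for the three pods to become ready
kubectl get pods -l configid=rabbit-container -w
# Verify that all three nodes have joined the cluster
kubectl exec -it rabbit-statefulset-0 -- rabbitmqctl cluster_status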
Final Notes
In this post we've deployed a RabbitMQ cluster on Kubernetes. You can connect to the running pods and execute management commands, such as:
kubectl exec -it rabbit-statefulset-0 -- rabbitmqctl list_queues
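Since the rabbitmq_management plugin is enabled, you can also reach the management UI, for example with a port-forward (the local port below is arbitrary), logging in with the myUser/myPassword credentials created by init.sh:
# Forward the management UI of the first pod to localhost
kubectl port-forward rabbit-statefulset-0 15672:15672
# Then browse to http://localhost:15672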
That's all for this post.
Liked it? Leave a comment.

Hello! Thanks for the post!
But I'm trying to follow it and I receive the log below.
"ERROR: epmd error for host rabbit-statefulset-0.rabbit-discovery-service.default.svc.cluster.local: nxdomain (non-existing domain)"
Hi,
In the StatefulSet, we have the following environment variable:
- name: RABBITMQ_NODENAME
  value: "rabbit@$(HOSTNAME).rabbit-discovery-service.default.svc.cluster.local"
which means that this is the host name.
When installing in the default Kubernetes namespace, the default suffix for services and hosts is "default.svc.cluster.local".
Maybe you are not installing on the default namespace?
Try finding your DNS configuration suffix.
See this link: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
e.g.: kubectl exec -ti dnsutils -- cat /etc/resolv.conf