Tuesday, September 10, 2019

Using an ingress for a bare metal kubernetes cluster

You have a bare metal kubernetes cluster, and you want to enable access to it from the outside world.
So far, living within the kubernetes cluster, everything was simple: when you want to expose a new microservice, you configure a kubernetes service, and that's it.
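
For example, a minimal sketch of such a service definition could look like the following, assuming (hypothetically) that the pods are labeled app: service-a and listen on port 8080:

apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: service-a    # must match the labels on the deployment's pods
  ports:
  - port: 80          # port exposed inside the cluster
    targetPort: 8080  # port the pods listen on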

But now you want your actual customers to access the various services. How can you do it?

You can configure the services to use a nodePort, and then each service is accessible on its own port, but that's not user friendly. Users usually prefer an FQDN over the IP:port syntax.
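
For comparison, exposing the same hypothetical service-a with a nodePort would look roughly like this, making it reachable at <any-node-ip>:30080:

apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  type: NodePort
  selector:
    app: service-a
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # must be within the default 30000-32767 range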

Also, in my case, I had a module, the fetcher-module, running both inside the kubernetes cluster as part of one of the deployments, and outside the cluster as part of an external process. This module accesses multiple services using the names service-a and service-b.

So I wanted the fetcher-module to use the same service names, service-a and service-b, regardless of whether it runs inside or outside the kubernetes cluster.

To do this, I've created an ingress object in kubernetes:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  # requests arriving with Host: service-a are routed to the service-a service
  - host: service-a
    http:
      paths:
      - backend:
          serviceName: service-a
          servicePort: 80
  # likewise for service-b
  - host: service-b
    http:
      paths:
      - backend:
          serviceName: service-b
          servicePort: 80
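
Assuming the manifest above is saved as my-ingress.yaml (the file name is arbitrary), it can be applied and inspected with:

kubectl apply -f my-ingress.yaml
kubectl get ingress my-ingress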

But kubernetes does not provide the actual ingress implementation; you need to install an ingress controller.
I've used the NGINX ingress controller, which can be installed using helm (helm 2 syntax):

helm install stable/nginx-ingress \
  --name nginx-ingress \
  --namespace ingress \
  --set controller.kind=DaemonSet \
  --set controller.daemonset.useHostPort=true \
  --set controller.service.enabled=false \
  --set controller.hostNetwork=true
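
After the installation, a quick sanity check is to confirm that the controller is indeed running as a DaemonSet, with one pod per node (using the ingress namespace from the command above):

kubectl get daemonset,pods -n ingress -o wide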

Notice that I've configured the NGINX controller to use the host network.
This means it runs as a DaemonSet on each kubernetes node, listening directly on ports 80 and 443, so make sure no other process is using these ports on the nodes.

Also, the services can be used transparently both inside and outside the kubernetes cluster only if they are exposed on the same ports that the NGINX controller listens on. In this case I've used port 80.
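
Before setting up name resolution, you can verify the routing end to end from a machine outside the cluster by sending the expected Host header straight to one of the nodes (10.0.0.11 is a placeholder for a node IP):

curl -H "Host: service-a" http://10.0.0.11/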

One last issue is name resolution. In my case, I had to add service-a and service-b to the /etc/hosts file on the machine outside the kubernetes cluster. A better solution is to add these names to a DNS server. Notice that this means the service-a name points to a single node IP, which is a single point of failure. In my case this was a reasonable limitation. If that is not the case for you, consider placing HAProxy in front of the NGINX controller.
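
For example, the /etc/hosts entries could look like this, with 10.0.0.11 again standing in for the IP of one of the kubernetes nodes:

10.0.0.11  service-a
10.0.0.11  service-b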


Additional resources:

  1. Kubernetes ingress 
  2. NGINX controller
  3. HAProxy
