In this post I will review the steps I took to get things working:
- Install Google Cloud SDK
- Create a new kubernetes cluster on GKE
- Enable a local kubectl to access the kubernetes cluster on GKE
- Upload the images to Google Cloud container registry
- Adjust the kubernetes templates to use GKE's persistent disks
1. Install Google Cloud SDK
The first step is to install the gcloud CLI, which is the Google cloud SDK.
The Google Cloud SDK is required to create and update the various GCP entities, for example: logging in to GCP, creating a kubernetes cluster, configuring docker to connect to the GCP registry, and much more.
Specifically, for Ubuntu, follow the instructions in the Install Google Cloud SDK using apt-get guide.
The summary of these instructions is below.
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get install apt-transport-https ca-certificates gnupg curl
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
gcloud init
2. Create a new kubernetes cluster on GKE
To create a new kubernetes cluster, I've simply used the GCP web console.
- Log in to the GCP web console using your personal or work-related user.
- Click the Menu on the top left, and select: Kubernetes Engine, Clusters.
- Click on Create Cluster, and change any settings you need (I've used all of the defaults).
Later I found that the cluster was configured to use 3 machines, each with a single CPU.
For most applications this is not enough, so I updated the cluster to use 3 machines with 8 CPUs each, using the gcloud CLI:
gcloud container node-pools create MY_NEW_POOL --cluster=MY_K8S_CLUSTER --num-nodes=3 --machine-type=n1-standard-8
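Note that creating a new pool does not delete the original one: the three single-CPU nodes keep running (and billing) until the old pool is removed. Below is a minimal cleanup sketch, assuming the original pool kept the default name default-pool (you can check with gcloud container node-pools list); it only echoes the command, so remove the echo to actually run it:

```shell
# Hypothetical names; the original pool is usually called "default-pool"
CLUSTER=MY_K8S_CLUSTER
OLD_POOL=default-pool

# Compose the cleanup command; drop the leading "echo" to actually run it
echo gcloud container node-pools delete "${OLD_POOL}" --cluster="${CLUSTER}" --quiet
```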
3. Enable a local kubectl to access the kubernetes cluster on GKE
Now, our kubernetes cluster is ready, but how can we use the kubectl CLI to access it?
The first method, which is simple but in my opinion less convenient, is to use kubectl from the GCP web console.
- Log in to the GCP web console using your personal or work-related user.
- Click the Menu on the top left, and select: Kubernetes Engine, Clusters.
- In the clusters table, click the Connect button on the right side of your cluster.
- That's it, you have an SSH session and a configured kubectl ready for use.
The second method requires a few more steps, but is easier to use in the long run. It is based on the Configuring cluster access for kubectl guide.
First, enable the Kubernetes Engine API:
- Log in to the GCP web console using your personal or work-related user.
- Click the Menu on the top left, and select: APIs & Services -> Enable APIs and Services -> Kubernetes Engine API.
Next, update the local kubectl configuration (at ~/.kube/config) using the gcloud CLI:
gcloud container clusters get-credentials MY_K8S_CLUSTER
As a side note, when working with multiple kubernetes clusters, you should be aware of kubectl contexts.
kubectl context = kubernetes Cluster + kubernetes Namespace + kubernetes User
Use the following commands to list, view, and update the current kubectl context:
kubectl config get-contexts                  # display the list of contexts
kubectl config current-context               # display the current context
kubectl config use-context my-cluster-name   # set the default context to my-cluster-name
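As background, a context is just a named entry in ~/.kube/config that ties these three together. A minimal, hypothetical fragment (the names are made up; GKE generates entries of the form gke_PROJECT_ZONE_CLUSTER) might look like:

```yaml
# Hypothetical ~/.kube/config fragment: a context binds a cluster,
# a user, and an optional namespace under a single name
contexts:
- name: my-cluster-name
  context:
    cluster: gke_my-gcp-project_us-central1-a_MY_K8S_CLUSTER
    user: gke_my-gcp-project_us-central1-a_MY_K8S_CLUSTER
    namespace: default
current-context: my-cluster-name
```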
4. Upload the images to Google Cloud container registry
OK, your cluster is up and running, and you can access it. But how can you access your images?
If you already have a publicly accessible container registry, great! You can skip this step.
Otherwise, you can use the GCP container registry.
First, enable the Container Registry API:
- Log in to the GCP web console using your personal or work-related user.
- Click the Menu on the top left, and select: APIs & Services -> Enable APIs and Services -> Container Registry API.
Next, log in to the machine where the docker images reside, and run the following:
gcloud auth login
gcloud auth configure-docker
This enables your local docker to access the GCP container registry.
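For the curious, configure-docker works by registering gcloud as a docker credential helper in ~/.docker/config.json; the result looks roughly like this (the exact host list may vary by SDK version):

```json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}
```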
Finally, to upload a docker image, tag it using the GCP prefix, and push it:
IMAGE_FULL_ID=MY_IMAGES_FOLDER/MY_IMAGE_NAME
GCR_TAG=gcr.io/MY_GCP_PROJECT_NAME/${IMAGE_FULL_ID}
docker tag MY_LOCAL_REGISTRY_SERVER:MY_LOCAL_REGISTRY_PORT/${IMAGE_FULL_ID} ${GCR_TAG}
docker push ${GCR_TAG}
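As a concrete sketch with made-up names (a local registry on localhost:5000, a GCP project called my-gcp-project, and an image myapp/backend), the tag expands like this; the docker command is only echoed here for illustration:

```shell
# Hypothetical example values, assuming a local registry at localhost:5000
IMAGE_FULL_ID=myapp/backend
GCR_TAG=gcr.io/my-gcp-project/${IMAGE_FULL_ID}

echo "${GCR_TAG}"
# The docker tag command would then read (drop the echo to run it):
echo docker tag localhost:5000/${IMAGE_FULL_ID} ${GCR_TAG}
```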
5. Adjust the kubernetes templates to use GKE's persistent disks
In case the application has persistent volume claims, you should update them to use GCP's persistent disks instead.
This is done by dropping the storage class name from the persistent volume claims.
For example, remove the storageClassName line here:
volumeClaimTemplates:
- metadata:
    name: persist-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "hostPath"
    resources:
      requests:
        storage: "1Gi"
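After dropping that line, the claim falls back to GKE's default storage class, which provisions a persistent disk automatically, so the template ends up looking like this:

```yaml
# Same claim with storageClassName removed; GKE's default storage
# class then provisions a persistent disk for the claim
volumeClaimTemplates:
- metadata:
    name: persist-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: "1Gi"
```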
Summary
GCP is a great platform, allowing quick implementation and deployment of applications.
In this post we have reviewed the move of a single application to GKE.
Once the application is located in GKE, it can also easily use additional GCP services, such as BigQuery, Pub/Sub, and AI.