Monday, March 25, 2024

Using KEDA with Prometheus scaler


 

In this post we will show a simple usage of KEDA with the Prometheus scaler to set the number of replicas of a deployment.


KEDA is an event-driven autoscaler. It wraps the Kubernetes horizontal pod autoscaler, simplifying autoscaling while enabling scaling by a huge collection of metric sources.


Deploy KEDA

Deploying KEDA is done using its helm chart:

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
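
Once the chart is installed, the KEDA operator and metrics API server pods should be running in the keda namespace. A quick sanity check:

kubectl get pods --namespace keda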

Deploy a ScaledObject

A ScaledObject is a configuration of the required autoscaling. We deploy the following ScaledObject:


apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject
  namespace: default
spec:
  minReplicaCount: 1
  maxReplicaCount: 5
  pollingInterval: 5
  cooldownPeriod: 30
  scaleTargetRef:
    name: my-deployment
    kind: Deployment
    apiVersion: apps/v1
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus-service.default:80
      threshold: '100'
      query: sum(rate(handled_items[1m]))


We specify the min and max replicas, as well as the polling interval and the cooldown interval.

The ScaledObject target is a deployment.

The metric source is Prometheus, whose service address must be supplied. The scaling is done using a Prometheus metric value. In this example, we set the threshold to 100, which means that above an average of 100 per pod, KEDA will scale up the number of replicas.
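
Behind the scenes, KEDA creates and manages a horizontal pod autoscaler for the target deployment. To check that the trigger is active (the HPA name below follows KEDA's keda-hpa-<scaledobject name> naming convention):

kubectl get scaledobject my-scaledobject --namespace default
kubectl get hpa keda-hpa-my-scaledobject --namespace default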





Monday, March 18, 2024

Creating graphs using a Graphviz DOT file

 

Graphviz is open source graph visualization software. One of its covered aspects is the standard for the DOT file format, which describes a graph. These graphs can later be visualized using related software, as well as in online visualization sites such as Graphviz Online.

In this post we will explore various capabilities of the DOT file format.

For more amazing graphs, see the gallery.

A simple undirected graph


graph test {
  a -- b -- c;
}
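
Any of the snippets in this post can be rendered locally with the dot command line tool (assuming Graphviz is installed, and that the snippet is saved as, for example, test.dot):

dot -Tsvg test.dot -o test.svg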




A simple directed graph

digraph test {
  a -> b -> c;
}


Multiple graphs with styles


  • To have a sub-graph in its own box: use the prefix "cluster" in the graph name.
  • Edge connections:
    • node to node edges
    • cluster to cluster edges
  • Use "node" to apply attributes to all nodes in the scope

digraph {

  compound=true;

  subgraph cluster_a {
    label="A";
    node [style=filled fillcolor="#ff00ff" fontcolor="white" shape=box];
    a1[shape=star];
    a1 -> {a2 a3};
  }

  subgraph cluster_b {
    label="B";
    b1 -> b2;
  }

  a1 -> b1[label="nodes edge"];
  a2 -> b2[label="clusters edge" ltail="cluster_a" lhead="cluster_b"];
}




Record node

  • Enables multiple segments in a node.
  • We can tag a segment, and use it later in an edge.

digraph {

  subgraph x {
    node[shape="record"];
    a[label="{line1|<x>line2}"];
    b[label="{line1|{line2_col1|<y>line2_col2|line2_col3}}"];
    a:x -> b:y;
  }

}





Use HTML in a label

We can use HTML-like markup for a label; the main structured element it provides is a table.

digraph {

  staging [
    label=<<table border="0" cellborder="1" cellspacing="0" cellpadding="4">
      <tr> <td> <b>important</b></td> </tr>
      <tr> <td> to have fun</td> </tr>
    </table>>
    shape=plain
  ]

}








Monday, March 11, 2024

Dynamically Allocate and Delete PersistentVolumeClaim for a CronJob


 


In this post we will review the steps to dynamically create and delete a PersistentVolumeClaim for a CronJob.

A CronJob might require a large amount of temporary storage, and we don't want to keep the PersistentVolumeClaim active while the job is not running, since the cost might be high. For example, assume we have a CronJob running once a week for 3 hours, and requiring 1TB of disk space for calculations. The cost of leaving such a disk active for an entire week is very high, hence we should dynamically allocate and remove the disk.

Kubernetes does not supply an out-of-the-box mechanism to handle this, so we handle it ourselves, using 3 CronJobs:

1. The allocate CronJob, which creates the PVC

2. The actual computation CronJob (a minimal sketch of its PVC mount appears below)

3. The cleanup CronJob
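
The computation CronJob itself is application specific, so we only sketch the part that is relevant here: mounting the dynamically created PVC. The image name and schedule below are illustrative; the schedule must start a few minutes after the allocate CronJob:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: compute-cronjob
spec:
  schedule: "10 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: compute
            image: repo/compute/dev:latest   # illustrative image
            volumeMounts:
            - name: work
              mountPath: /work               # the 1TB scratch space
          volumes:
          - name: work
            persistentVolumeClaim:
              claimName: my-pvc              # created by the allocate CronJob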


The Allocate CronJob

This CronJob creates the PVC before the computation job starts. It should use a schedule that runs it just a few minutes before the computation job. The following are the kubernetes entities required to run the allocate CronJob. Notice that we must also provide permissions for handling the PVC.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: allocate-role
rules:
- apiGroups: [ "" ]
  resources: [ "persistentvolumes" ]
  verbs: [ "create", "list", "delete", "get", "patch", "watch" ]
- apiGroups: [ "" ]
  resources: [ "persistentvolumeclaims" ]
  verbs: [ "create", "list", "delete", "get", "patch", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: allocate-role-binding
subjects:
- kind: ServiceAccount
  name: allocate-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: allocate-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: allocate-service-account
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: allocate-config
data:
  pvc.yaml: |-
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
      labels:
        type: local
    spec:
      storageClassName: "gp2"
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1000Gi
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: allocate-cronjob
spec:
  schedule: "0 0 * * *"
  startingDeadlineSeconds: 36000
  concurrencyPolicy: Replace
  timeZone: "Etc/UTC"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: allocate-service-account
          restartPolicy: Never
          containers:
          - name: allocate
            image: repo/allocate/dev:latest
            imagePullPolicy: IfNotPresent
            env:
            - name: PVC_NAME
              value: my-pvc
            - name: NAMESPACE
              value: default
            volumeMounts:
            - name: config
              mountPath: /config
          volumes:
          - name: config
            configMap:
              name: allocate-config

The allocate image is a simple script that runs kubectl to create the PVC:

#!/usr/bin/env bash
set -e
set -x

echo "prepare starting"

# remove any leftover claim from a previous run, then create a fresh one
kubectl delete pvc ${PVC_NAME} --namespace ${NAMESPACE} --ignore-not-found=true
kubectl apply -f /config/pvc.yaml --namespace ${NAMESPACE}

echo "prepare done"




The Cleanup CronJob


The Cleanup CronJob runs after the computation job completes and deletes the PVC. This includes the following kubernetes entities:


---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cleanup-role
rules:
- apiGroups: [ "" ]
  resources: [ "persistentvolumeclaims" ]
  verbs: [ "create", "list", "delete", "get", "patch", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cleanup-role-binding
subjects:
- kind: ServiceAccount
  name: cleanup-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: cleanup-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cleanup-service-account
  namespace: default
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-cronjob
spec:
  schedule: "0 4 * * *"
  startingDeadlineSeconds: 36000
  concurrencyPolicy: Replace
  timeZone: "Etc/UTC"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cleanup-service-account
          restartPolicy: Never
          containers:
          - name: cleanup
            image: repo/cleanup/dev:latest
            imagePullPolicy: IfNotPresent
            env:
            - name: PVC_NAME
              value: my-pvc
            - name: NAMESPACE
              value: default


The cleanup image runs the following script:


#!/usr/bin/env bash
set -e
set -x

echo "cleanup starting"

kubectl delete pvc ${PVC_NAME} --namespace ${NAMESPACE} --ignore-not-found=true

echo "cleanup done"