
Wednesday, July 29, 2020

How to Execute a Command on a Kubernetes Pod using client-go



In this post we will present a short example of executing a command on a kubernetes pod using the kubernetes client-go library. The client-go library is easy to use in general, but the exec command specifically is not straightforward, so I believe an example is a good starting point for anyone who wishes to use it.

To use the client-go library, we first need to create a client. This is explained in more detail in the post Access Kubernetes API from GoLang.


// imports used by both code snippets in this post
import (
	"bytes"
	"os"
	"path/filepath"
	"strings"

	k8sApiCore "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

type K8s struct {
	client     *kubernetes.Clientset
	restConfig *rest.Config
}

func Produce(insideK8sCluster bool) *K8s {
	var err error
	var restConfig *rest.Config
	if insideK8sCluster {
		restConfig, err = rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
	} else {
		configPath := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		restConfig, err = clientcmd.BuildConfigFromFlags("", configPath)
		if err != nil {
			panic(err)
		}
	}

	client, err := kubernetes.NewForConfig(restConfig)
	if err != nil {
		panic(err)
	}

	return &K8s{
		client:     client,
		restConfig: restConfig,
	}
}


Once we have the client ready, we can use it to execute the command.
Notice that the command execution is actually implemented as a streaming HTTP request.


func (k *K8s) Exec(namespace string, pod string, container string, command []string) string {
	attachOptions := &k8sApiCore.PodExecOptions{
		Stdin:     false,
		Stdout:    true,
		Stderr:    true,
		TTY:       false,
		Container: container,
		Command:   command,
	}

	request := k.client.CoreV1().RESTClient().Post().
		Resource("pods").
		Name(pod).
		Namespace(namespace).
		SubResource("exec").
		VersionedParams(attachOptions, scheme.ParameterCodec)

	stdout := new(bytes.Buffer)
	stderr := new(bytes.Buffer)
	streamOptions := remotecommand.StreamOptions{
		Stdout: stdout,
		Stderr: stderr,
	}

	exec, err := remotecommand.NewSPDYExecutor(k.restConfig, "POST", request.URL())
	if err != nil {
		panic(err)
	}

	err = exec.Stream(streamOptions)
	if err != nil {
		panic(err)
	}

	result := strings.TrimSpace(stdout.String()) + "\n" + strings.TrimSpace(stderr.String())
	return strings.TrimSpace(result)
}
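
For example, a minimal usage sketch (the namespace, pod, and container names below are hypothetical):

k8s := Produce(false)
output := k8s.Exec("default", "my-pod", "my-container", []string{"ls", "-l", "/tmp"})
fmt.Println(output)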


Final Notes

It is not clear why the kubernetes client-go maintainers did not provide a simplified version of exec, as it is commonly needed, but in any case, the example above should cover most use cases.

BTW: 
Make sure to have the same dependency version for the k8s API and the k8s client-go in your go.mod file. Otherwise, you will get compilation and/or runtime errors.


module k8s

go 1.14

require (
	k8s.io/api v0.18.6
	k8s.io/client-go v0.18.6
)

Tuesday, July 21, 2020

Client Fingerprint using fingerprintjs2




In this post we will review the usage of the fingerprintjs2 library.

This is a javascript library that runs in the client browser and provides a unique ID for the client. We want this unique ID to identify the client across multiple HTTP sessions, so we can collect information about the client regardless of user authentication.

Notice that we cannot use source IP to identify a client due to several reasons:
  • The source IP might change
  • Several clients might use the same source IP, for example in a NAT network topology


The fingerprintjs2 library documentation for the integration seems straightforward:


if (window.requestIdleCallback) {
    requestIdleCallback(function () {
        Fingerprint2.get(function (components) {
          console.log(components) // an array of components: {key: ..., value: ...}
        })
    })
} else {
    setTimeout(function () {
        Fingerprint2.get(function (components) {
          console.log(components) // an array of components: {key: ..., value: ...}
        })  
    }, 500)
}


But we should ask ourselves what exactly we are doing: which components are used to create the unique identification of the client.

We should pick the components that serve our purpose: a consistent, per-client unique ID.
This is important to understand.
On the one hand, we want to use as many components as possible to create the identification, so that no two clients get the same fingerprint.
On the other hand, we want the client identification to be consistent, and not to change too frequently.

Checking the fingerprintjs2 library documentation, we can find the list of stable components.
Selecting only the stable components is a good start, but it is not enough.

For example, do we want to get a different fingerprint on an event of timezone daylight setting change?
Probably not.

Do we want to get a different fingerprint whenever the client browser is updated? Note that Chrome and Firefox are updated very often, so we probably do not want to rely on the full browser version.


So, the final configuration we have used is to exclude some of the components:


const options = {
    excludes: {
        audio: true,
        enumerateDevices: true,
        doNotTrack: true,
        userAgent: true,
        timezoneOffset: true,
    },
}

Fingerprint2.get(options, (components) => {
    // combine the remaining component values into a single fingerprint hash
    const values = components.map((component) => component.value)
    const fingerprint = Fingerprint2.x64hash128(values.join(''), 31)
    console.log(fingerprint)
})

This should achieve our goal: a consistent, per-client unique ID.




Monday, July 20, 2020

Questions for a Python interview



In this post I have collected a few questions for a Python based interview.

The goal of these questions is not to check whether the candidate knows the bits and bytes of Python (or of any language). 

Instead, the purpose is to find out the ability of the candidate to understand general programming issues that are related to most programming languages, using the Python language as a tool to express her/his ideas.

The related questions and answers are listed below.


Tuple vs. List


Question: 

What are the differences between Tuple and List? 
Provide an example of when should we use each of them.

Answer:

1.
A list is a mutable object (it can be modified).
A tuple is an immutable object (it cannot be modified), but the variable can be assigned a new tuple.

2.
A tuple memory footprint is slightly lower than a list memory footprint.

3.
Unlike a list, a tuple provides __hash__(), and hence can be used as a key in a dictionary.

4.
We use a tuple to describe an object's properties, such as an employee's details, when we want to emphasize that these properties cannot be modified. 
This is similar to a dictionary, but without keys, using only values.
For example:
('Joe Doe', 'accounting', 75000)
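
Related to point 3 above, a short sketch of a tuple used as a dictionary key (a list in its place would raise a TypeError, since lists are unhashable):

salaries = {('Joe Doe', 'accounting'): 75000}
print(salaries[('Joe Doe', 'accounting')])  # 75000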


List Comprehension


Question: 

1.
Write a function that, given a list of words (strings), uses a list comprehension to sum the lengths of all of the words starting with the character 'a'.
For example: 

print(a_len(['a', 'b', 'c', 'another', '', 'all']))

would print 11.

2.
What if you have a static, very long list of words, and the function is called millions of times, each time with a different required prefix instead of the 'a' prefix? How would you change the implementation?

Answer:

1.
The function is:

def a_len(words):
    return sum([len(w) for w in words if w.startswith('a')])


2.
Assuming that we can do some pre-processing work, we can build a trie from the list, for example with a depth of 5 characters. Each leaf of the trie would hold the list of words matching the leaf's prefix, so the remaining work per call would be to filter and sum a much smaller number of words.

This requires a lot of memory. In case this is a problem, we can use files to represent the trie, or use a cluster-based database, such as Redis, that can span multiple servers.
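
A minimal sketch of the pre-processing idea, using a flat dictionary of prefixes instead of a real trie, and storing prefix sums rather than word lists (which is enough for this specific question):

from collections import defaultdict

def build_prefix_sums(words, depth=5):
    # map every prefix (up to `depth` characters) to the total length of matching words
    sums = defaultdict(int)
    for w in words:
        for i in range(1, min(len(w), depth) + 1):
            sums[w[:i]] += len(w)
    return sums

prefix_sums = build_prefix_sums(['a', 'b', 'c', 'another', '', 'all'])
print(prefix_sums['a'])  # 11, answered in O(1) per query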


Generators


Question: 

What is a generator, and what is the benefit of using a generator over using a list?

Answer:

A generator is a function that behaves like an iterator.
It uses the yield keyword to produce the next iteration element.
Using a generator is better than a list when we do not need all of the elements at once, as it does not allocate the entire list in memory, but instead produces a single element at a time. 
This allows a streaming process implementation for the iteration, and reduces memory footprint.
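
For example, a minimal generator sketch:

def squares(n):
    # yields one value at a time instead of building the full list in memory
    for i in range(n):
        yield i * i

for value in squares(5):
    print(value)  # 0, 1, 4, 9, 16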


Unit Test


Question: 

How can we write a unit test in python?
What is the difference between a unit test and a system test?

Answer:

A unit test is a set of functions that check the production python code.
The unit test can be based on the assert keyword, and on a proprietary implementation to validate the code. In addition, a test framework (such as unittest, nose, or pytest) is used to run the tests.

A unit test should not depend on external resources, such as databases or other microservices, for its operation. It should use mocks to simulate the APIs of any external resources.

A unit test should target a validation of a small piece of the code, such as one function, one class or one file.

Unlike a unit test, a system test should test the entire product, end to end.
It will use real databases and external entities.
It will validate the product from the user/requirements point of view, and not from the code/module point of view.



Wednesday, July 15, 2020

Using GOTRACEBACK to Control the Panic Stack Trace





When a GO application crashes due to a critical error, a stack trace of the GO routine that caused the panic is printed.
For example, the following application produces a SIGSEGV signal (memory access violation).


package main

import (
	"fmt"
	"time"
)

func main() {
	go func() {
		for {
			fmt.Println("go routine")
			time.Sleep(time.Second)
			var i *int
			*i = 0
		}
	}()

	for {
		fmt.Println("main func")
		time.Sleep(time.Second)
	}
}


Running this code displays the following output:


main func
go routine
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x48d294]

goroutine 18 [running]:
main.main.func1()
        /home/alon/git/panic/main.go:14 +0x84
created by main.main
        /home/alon/git/panic/main.go:9 +0x39


We can see the stack trace of the GO routine to blame.

But what about other GO routines?

To view more details, we need to use the GOTRACEBACK environment variable.
When GOTRACEBACK is not set, it defaults to the value single.
This means we get the stack trace of only the GO routine that caused the panic.

To get the stack traces of all of the application's GO routines, we need to set the GOTRACEBACK environment variable to the value all. In such a case, the output of the application panic is the following:



main func
go routine
main func
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x48d294]

goroutine 18 [running]:
main.main.func1()
        /home/alon/git/panic/main.go:14 +0x84
created by main.main
        /home/alon/git/panic/main.go:9 +0x39

goroutine 1 [sleep]:
runtime.goparkunlock(...)
        /usr/local/go/src/runtime/proc.go:310
time.Sleep(0x3b9aca00)
        /usr/local/go/src/runtime/time.go:105 +0x157
main.main()
        /home/alon/git/panic/main.go:20 +0x9e


Now we can see the two GO routines.


Other possible values for the GOTRACEBACK environment variable are:
  • none - do not print any stack traces
  • system - print the stack traces of all of the application GO routines, as well as those of the GO runtime system GO routines
  • crash - same as system, but instead of exiting with error code 2, trigger a core dump


Final Notes


I prefer changing the default of the GOTRACEBACK environment variable to all. In most cases, upon an application crash, you will need more information than just the running GO routine, especially in a complex system.
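
If you prefer to set this from code rather than from the environment, the runtime/debug package exposes the same levels. A minimal sketch:

package main

import "runtime/debug"

func main() {
	// accepts the same values as the GOTRACEBACK environment variable:
	// none, single, all, system, crash
	debug.SetTraceback("all")

	// ... the rest of the application
}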


Wednesday, July 8, 2020

Analyze Redis Memory usage




We have used Redis as our persistence layer, and were very pleased with its performance.
Everything went fine for several months, until we found that the production Redis memory consumption had grown out of control. We were using many GB of RAM on each of the Redis cluster nodes.

How can we analyze the problem?

We have so many keys, for various components in the system.
How much RAM is used by each component?
What is the RAM distribution among each type of Redis key?

Gladly, I have found that Redis can assist us here.
It supplies the MEMORY USAGE command, which prints the number of bytes used by a key.

BUT... It can only run for a single key, and we have millions of keys.

This is when I've decided to create a small GO application to analyze the memory distribution among the various keys. 

The general idea is that each component has different prefixes for its keys, for example:
  • BOOK-book-name
  • AUTHOR-author-name
  • CUSTOMER-customer-id
So we can group the memory usage per each key prefix.
Here is the code that handles this.



package main

import (
	"fmt"
	"github.com/go-redis/redis/v7"
	"log"
	"strings"
)

type Memory struct {
	Key  string
	Size uint
}

type Stats struct {
	count uint
	min   uint
	max   uint
	sum   uint
	avg   uint
}

var client *redis.Client

func main() {
	client = redis.NewClient(&redis.Options{
		Addr:     "127.0.0.1:6379",
		Password: "",
		DB:       0,
	})

	// fetch the memory usage of every key in the database
	memories := getMemories()

	// group the memory usage statistics by the known key prefixes
	prefixes := make(map[string]Stats)
	prefixes["BOOK"] = Stats{}
	prefixes["AUTHOR"] = Stats{}
	prefixes["CUSTOMER"] = Stats{}

	for _, memory := range memories {
		updateStats(prefixes, memory)
	}

	// print the summary in a CSV format
	fmt.Printf("\n\nsummary\n\n")
	fmt.Printf("prefix,totalKeys,totalSize,avgKeySize,minKeySize,maxKeySize\n")
	for prefix, stats := range prefixes {
		if stats.count > 0 {
			stats.avg = stats.sum / stats.count
			fmt.Printf("%v,%+v,%+v,%+v,%+v,%+v\n", prefix, stats.count, stats.sum, stats.avg, stats.min, stats.max)
		}
	}
}

// getMemories lists all of the keys, and fetches the memory usage of each key
func getMemories() []Memory {
	cmd := client.Keys("*")
	err := cmd.Err()
	if err != nil {
		log.Fatal(err)
	}

	keys, err := cmd.Result()
	if err != nil {
		log.Fatal(err)
	}

	var memories []Memory
	for _, key := range keys {
		cmd := client.MemoryUsage(key)
		err := cmd.Err()
		if err != nil {
			log.Fatal(err)
		}

		value, err := cmd.Result()
		if err != nil {
			log.Fatal(err)
		}
		memory := Memory{
			Key:  key,
			Size: uint(value),
		}
		memories = append(memories, memory)
	}
	return memories
}

// updateStats adds the key memory usage to the matching prefix statistics;
// a key that does not match any known prefix gets its own statistics entry
func updateStats(prefixes map[string]Stats, memory Memory) {
	for prefix, stats := range prefixes {
		if strings.HasPrefix(memory.Key, prefix) {
			updateByMemory(&stats, memory)
			prefixes[prefix] = stats
			return
		}
	}
	stats := Stats{}
	updateByMemory(&stats, memory)
	prefixes[memory.Key] = stats
}

func updateByMemory(stats *Stats, memory Memory) {
	stats.count++
	if stats.max < memory.Size {
		stats.max = memory.Size
	}
	if stats.min == 0 || stats.min > memory.Size {
		stats.min = memory.Size
	}
	stats.sum += memory.Size
}
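
One caveat: the KEYS * command used in getMemories blocks the Redis server while it collects all of the keys, which might be an issue on a production instance with millions of keys. A minimal sketch of a SCAN based alternative for listing the keys (same go-redis v7 client; the page size of 1000 is an arbitrary choice):

func getKeysByScan() []string {
	var keys []string
	var cursor uint64
	for {
		// fetch the next page of keys
		page, nextCursor, err := client.Scan(cursor, "*", 1000).Result()
		if err != nil {
			log.Fatal(err)
		}
		keys = append(keys, page...)
		cursor = nextCursor
		if cursor == 0 {
			// a zero cursor means the iteration completed
			break
		}
	}
	return keys
}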


The output of this small application is in a CSV format:




And it can, of course, be displayed as a chart:





Final Notes


While working on this, I've found what seems to be a bug in Redis.
For more details, check this question on Stack Overflow; the final result is that a fix was merged into the go-redis library in this pull request.

Saturday, July 4, 2020

Scale your application using HPA on GKE



In this post we will review the required steps to automatically scale an application installed on Google Kubernetes Engine (GKE). We will use a combination of several functionalities: 

Metrics-Server


First we need to supply the CPU/memory metrics per pod. This can be done using the metrics-server.
The metrics server is:


"
...a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
"


To apply the metrics server we use the following command:


kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml


After a few minutes, the metrics server will have collected statistics for our pods, and we can use the following command to check the pod metrics:


kubectl top pod


HPA


In the kubernetes documentation we find that HPA:


"
...scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization
"

Hence, to use HPA, we start by configuring the resource requests in our deployment.
Notice that HPA uses the resource requests, and not the resource limits.

For example, in our deployment, we specify:


spec:
  containers:
  - name: c1
    image: my-image
    resources:
      requests:
        cpu: "0.5"


Next, we create the HPA. 
The HPA configures the min and max replicas for the deployment.
It also configures the target CPU utilization, averaged over all of the running pods.


apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  maxReplicas: 10
  minReplicas: 2
  targetCPUUtilizationPercentage: 80
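
Assuming the manifest above is saved to a file named my-hpa.yaml (an assumed file name), we can apply it and then follow the HPA state and the observed CPU utilization:

kubectl apply -f my-hpa.yaml
kubectl get hpa my-autoscaler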


The HPA will create new pods according to the load on the system, but at some stage there might be too many pods to fit on the existing kubernetes nodes, as the nodes have limited resources, and hence some pods will remain in a pending state. This is where the GKE part completes the puzzle.


GKE Cluster Autoscaler


We use the GKE Cluster Autoscaler to allocate new kubernetes cluster nodes when needed:

"
...resizes the number of nodes in a given node pool, based on the demands of your workloads
"

So once we have pods in a pending state due to insufficient CPU/memory resources, the GKE Cluster Autoscaler adds new nodes.

To configure the GKE Cluster Autoscaler, we update the node pool configuration:


gcloud container clusters update my-k8s-cluster \
    --enable-autoscaling \
    --min-nodes 3 \
    --max-nodes 10 \
    --zone my-zone \
    --node-pool my-pool


Final Notes


In this post we have reviewed the steps to handle kubernetes autoscaling.

Note that you can also configure HPA to use not just memory and CPU, but also custom metrics to scale your application.

Also, we should use pod anti-affinity rules to prevent all of the pods from being scheduled on the same node.


Wednesday, July 1, 2020

Using SSL based Ingress on Google Kubernetes Engine




In this post we will review the steps to create an SSL based Ingress on Google Kubernetes Engine (GKE).

An Ingress is an object that exposes external access to services in the kubernetes cluster. 

For example we have 2 services: foo-service and bar-service.
We want to expose both of them to the internet, but we want to use a single SSL certificate.





Follow the next steps to create the SSL based ingress.
  1. Make sure that all of the services (foo and bar) implement readiness probes.

  2. Add an annotation to each of the services to enable routing from the ingress directly to the pods (better performance):


    apiVersion: v1
    kind: Service
    metadata:
      name: foo-service
      annotations:
        cloud.google.com/neg: '{"ingress": true}'


  3. Create a static IP that will be used for the ingress:

    gcloud compute addresses create my-ip --global
    


  4. Check that the static IP is configured:

    gcloud compute addresses describe my-ip --global
    


  5. Create an SSL certificate.

    You can either buy a publicly signed SSL certificate, or create your own self-signed SSL certificate. The self-signed certificate can be used for testing purposes, but in a real world scenario, you would probably need a publicly signed SSL certificate.
    To create a self-signed certificate, use the following commands:

    
    rm -rf keys
    mkdir keys
    openssl genrsa -out keys/ingress.key 2048
    openssl req -new -key keys/ingress.key -out keys/ingress.csr -subj "/CN=example.com"
    openssl x509 -req -days 365 -in keys/ingress.csr -signkey keys/ingress.key -out keys/ingress.crt
    kubectl create secret tls ingress-secret --cert keys/ingress.crt --key keys/ingress.key
    


  6. Create Ingress. 

    Notice that:
    - The ingress includes annotation to use the my-ip static IP.
    - The ingress includes a specification to use the ingress-secret.

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: ingress
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
        kubernetes.io/ingress.global-static-ip-name: "my-ip"
    spec:
      tls:
      - secretName: ingress-secret
      rules:
      - host: example.com
        http:
          paths:
          - path: /foo
            backend:
              serviceName: foo-service
              servicePort: 80
          - path: /bar
            backend:
              serviceName: bar-service
              servicePort: 80
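
  7. Verify the ingress.

    It might take several minutes until the load balancer is ready. You can check that the ingress was created and received the static IP:

    kubectl get ingress ingress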

That's it, you are ready to go.
Use curl with your domain name to check the ingress, for example:

curl -k https://example.com/foo








Delete images from a Private Docker Registry




In case you have a private docker registry, and your builds keep pushing images to it, you will eventually run out of disk space.

To remove old images, you can use the following script.
The script scans all of the images, and removes any image whose tag is not "latest" and whose tag number is lower than 950.

Change the skip conditions to match your images removal requirements.
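
Note that the registry accepts manifest DELETE requests only when deletion is enabled in its configuration. A minimal sketch of enabling it via an environment variable when starting the registry container (the container name and port are assumptions matching the script below):

docker run -d -p 5000:5000 --name docker-registry \
    -e REGISTRY_STORAGE_DELETE_ENABLED=true \
    registry:2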


#!/usr/bin/env bash

CheckTag(){
    Name=$1
    Tag=$2

    Skip=0
    if [[ "${Tag}" == "latest" ]]; then
        Skip=1
    fi
    if [[ "${Tag}" -ge "950" ]]; then
        Skip=1
    fi
    if [[ "${Skip}" == "1" ]]; then
        echo "skip ${Name} ${Tag}"
    else
        echo "delete ${Name} ${Tag}"
        Sha=$(curl -v -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X GET http://127.0.0.1:5000/v2/${Name}/manifests/${Tag} 2>&1 | grep Docker-Content-Digest | awk '{print ($3)}')
        Sha="${Sha/$'\r'/}"
        curl -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X DELETE "http://127.0.0.1:5000/v2/${Name}/manifests/${Sha}"
        return
    fi
}

ScanRepository(){
    Name=$1
    echo "Repository ${Name}"
    curl -s http://127.0.0.1:5000/v2/${Name}/tags/list | jq '.tags[]' |
    while IFS=$"\n" read -r line; do
        line="${line%\"}"
        line="${line#\"}"
        CheckTag $Name $line
    done
}

JqPath=$(which jq)
if [[ "x${JqPath}" == "x" ]]; then
    echo "Couldn't find jq executable."
    exit 2
fi

curl -s http://127.0.0.1:5000/v2/_catalog?n=10000 | jq '.repositories[]' |
while IFS=$"\n" read -r line; do
    line="${line%\"}"
    line="${line#\"}"
    ScanRepository $line
done




Once the cleanup is done, run the docker garbage collector on the docker registry container:


docker exec -it docker-registry bin/registry garbage-collect /etc/docker/registry/config.yml
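
To preview what the garbage collector would remove without actually deleting anything, it also supports a dry-run flag:

docker exec -it docker-registry bin/registry garbage-collect --dry-run /etc/docker/registry/config.yml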


This would run for several minutes, and then the unreferenced image layers would be deleted from the disk.