Sunday, February 26, 2023

Go Docker Build with Internal Shared Package

 


In the post Go Shared Library, we reviewed a method to use a shared Go library in a single git repository. This is a case where we have multiple Go modules in a single git repository, and some shared libraries that are used by the modules. A possible folder structure for this use case is:
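For example, the layout might look like the following sketch (the folder names here are assumptions, chosen to match the go.mod replace paths shown later in this post):

```
repo-root
├── module-a
│   └── src
│       ├── go.mod
│       ├── go.sum
│       └── main.go
├── module-b
├── module-c
├── common-lib-1
│   ├── go.mod
│   └── go.sum
└── common-lib-2
    ├── go.mod
    └── go.sum
```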



In this example, we have 3 modules and 2 common libraries. 

The go.mod file for module-a includes a replace section for the internal shared libraries.


module my.company.com/example/modulea

go 1.19

replace (
	my.company.com/example/commonlib1 => ../common-lib-1
	my.company.com/example/commonlib2 => ../common-lib-2
)

require (
	my.company.com/example/commonlib1 v0.0.0-00010101000000-000000000000
	my.company.com/example/commonlib2 v0.0.0-00010101000000-000000000000
)


To build a Docker image for this module, we need to add the sources of all of its requirements. We could do this manually for each of module-a, module-b, and module-c, but a better approach is to handle it automatically.

We now present a script that automatically builds a Docker image for a Go module, including the sources of its internal dependencies. The automation is based on parsing the module's go.mod file.



#!/usr/bin/env bash

#-------------------------------------
# This script builds a single Go image
#-------------------------------------

set -e

DockerRegistry="${DockerRegistry:-my-project-snapshot-local}"
ProjectVersion="${ProjectVersion:-/dev:latest}"
ScriptsFolder=$(dirname "$0")
Project=$(basename "${PWD}")
# escape the slash so it survives the sed replacement below
EntryPoint=\\/${Project}
ArtifactName="my-project-${Project}"
DockerTag="${DockerRegistry}/my-project-${Project}${ProjectVersion}"


AddKubectl=false
PushImage=false

HandleCommonPackage(){
  commonPackage=$1
  # a replace directive is "<module path> => ../<folder>"; keep only the folder name
  sourceFolder=$(echo "${commonPackage}" | awk '{print $NF}' | sed 's|^\.\./||')

  echo "common module ${sourceFolder}"

  if [[ ! -d ./temp-commons-go-manifest/${sourceFolder} ]]; then
    mkdir -p ./temp-commons-go-manifest/${sourceFolder}
    cp ../${sourceFolder}/go.mod ./temp-commons-go-manifest/${sourceFolder}/go.mod
    cp ../${sourceFolder}/go.sum ./temp-commons-go-manifest/${sourceFolder}/go.sum
  fi

  if [[ ! -d ./temp-commons-go-all/${sourceFolder} ]]; then
    mkdir -p ./temp-commons-go-all/${sourceFolder}
    sourceParent=$(dirname ./temp-commons-go-all/${sourceFolder})
    cp -r ../${sourceFolder} ${sourceParent}
  fi
}

ReplaceVariables(){
  sed -i "s/___PROJECT___/${Project}/g" temp-Dockerfile
  sed -i "s/___ENTRYPOINT___/${EntryPoint}/g" temp-Dockerfile

  grep "=>" ./src/go.mod | while read -r line ; do HandleCommonPackage "$line" ; done
}

CleanTemp(){
  rm -f temp-Dockerfile
  rm -rf ./temp-commons-go-manifest
  rm -rf ./temp-commons-go-all
}

Build(){
  CleanTemp
  mkdir ./temp-commons-go-manifest
  mkdir ./temp-commons-go-all

  cat ${ScriptsFolder}/Dockerfile_stage1 >> temp-Dockerfile
  ReplaceVariables
  if [[ -d files ]]; then
    echo "COPY files /images/${Project}/files" >> temp-Dockerfile
  fi
  echo "RUN go test -race -timeout 300s ./..." >> temp-Dockerfile

  TagCompileStage="${DockerTag}-stage1"
  docker build -t "${TagCompileStage}" --cache-from="${TagCompileStage}" -f temp-Dockerfile .

  cat ${ScriptsFolder}/Dockerfile_stage2 >> temp-Dockerfile
  ReplaceVariables

  if [[ -d files ]]; then
    echo "COPY files /" >> temp-Dockerfile
  fi

  docker build -t "${DockerTag}" --cache-from="${DockerTag}" -f temp-Dockerfile .

  CleanTemp
}

Build


The script uses a two-stage build. The Dockerfile templates for these stages are located in Dockerfile_stage1 and Dockerfile_stage2. Notice that the script automatically replaces the ___VARIABLE___ placeholders in the Dockerfiles.


Dockerfile_stage1

FROM golang:1.19.1 AS go-compiler

ENV GOPATH=/go GOBIN=/go/bin

# get dependencies
COPY ./src/go.mod /images/___PROJECT___/src/
COPY ./src/go.sum /images/___PROJECT___/src/
ADD ./temp-commons-go-manifest /images
WORKDIR /images/___PROJECT___/src

RUN go mod download

# compile source
ADD ./src /images/___PROJECT___/src
ADD ./temp-commons-go-all /images

RUN go build -o /output/___PROJECT___


Dockerfile_stage2

FROM ubuntu:20.04
RUN apt update
RUN apt install -y net-tools ca-certificates curl iputils-ping dnsutils
RUN update-ca-certificates

COPY --from=go-compiler /output/___PROJECT___ /___PROJECT___
WORKDIR /
ENTRYPOINT ["___ENTRYPOINT___"]



Final Note


Using this simple script, the Docker image creation is handled automatically. It is a great tool for automating builds of Go modules that use internal shared libraries.




Monday, February 20, 2023

Create Storage Provisioner for AWS EKS


 


In this post we will review the steps to create a storage provisioner for AWS EKS. This is required to allocate EFS storage in response to a Persistent Volume Claim (PVC).

First, we create a service account that will be used by the provisioner.


rm -f iam-policy.json

curl -sS https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/v1.2.0/docs/iam-policy-example.json -o iam-policy.json

policyExists=$(aws iam list-policies | grep EFSCSIControllerIAMPolicy | wc -l)
if [[ "${policyExists}" = "0" ]]; then
  aws iam create-policy \
    --policy-name EFSCSIControllerIAMPolicy \
    --policy-document file://iam-policy.json
fi

rm -f iam-policy.json


eksctl create iamserviceaccount \
--name=efs-csi-controller-sa \
--namespace=kube-system \
--cluster=${AWS_EKS_CLUSTER_NAME} \
--region ${AWS_REGION} \
--override-existing-serviceaccounts \
--attach-policy-arn=arn:aws:iam::${AWS_ACCOUNT}:policy/EFSCSIControllerIAMPolicy \
--approve


Next, we install the provisioner using its Helm chart.


helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver
helm repo update
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
--namespace kube-system \
--set image.repository=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver \
--set controller.serviceAccount.create=false \
--set controller.serviceAccount.name=efs-csi-controller-sa


Now, we log in to the AWS console, manually create an EFS file system, update the EFS ID in the following yaml file, and apply it using kubectl.


kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-084ad2344494c65a4
  directoryPerms: "700"


Now any PVC with the storage class efs-sc will be automatically handled by the storage provisioner.
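For example, a PVC bound to this storage class might look like the following sketch (the claim name and the requested size are placeholders):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```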







Monday, February 13, 2023

Capture and view network traffic


 

In this post we will show the steps to capture and view network traffic on a local Ubuntu machine. This is useful when we need to check a specific application's ingress and egress traffic.

For this post, we will use curl as the "application".


Capturing

We start by capturing the traffic using tcpdump. tcpdump creates a dump of all the network packets on a specific interface, or for a specific host. For this example, we will capture all traffic on a specific interface. To get the list of interfaces on the machine, use the command:

ip a

and then run the tcpdump, for example:

sudo tcpdump -i enp0s31f6 -w dump.pcap

Notice that tcpdump gets two arguments here: -i, the interface name, and -w, the pcap output file that will contain the captured packets. tcpdump will keep capturing traffic until we stop it using Ctrl+C.

We keep the tcpdump running, and in another terminal we run our application:

curl http://ynet.co.il


Now we can stop the capturing using Ctrl+C, and we have a pcap file with the captured network packets.


Viewing

To view the pcap file, the most common application is Wireshark. To install Wireshark, use the following command:

sudo apt install wireshark

And then run it:

sudo wireshark

In the Wireshark application, select File > Open, and open the dump.pcap file.

This displays all the captured network packets, but in most cases we just want to see HTTP packets, so in the Wireshark display filter, type http and press Enter.



We can now see the HTTP-related packets, and inspect the ingress and egress traffic, divided by the related protocol layers.

Final Note

In this post we have captured and viewed HTTP traffic for an application. Notice that HTTPS traffic will not be clear text, as it is encrypted. To view decrypted traffic, use the steps described here.

Tuesday, February 7, 2023

Export and Import Kibana Dashboards using Go




In this post we will review the steps to export and import kibana dashboards using Go code.

To handle these operations, we will use a kibana config struct:


const configFolder = "config"

type KibanaConfig struct {
	kibanaUrl       string
	elasticPassword string
}

func ProduceKibanaConfig(
	kibanaUrl string,
	elasticPassword string,
) *KibanaConfig {
	return &KibanaConfig{
		kibanaUrl:       kibanaUrl,
		elasticPassword: elasticPassword,
	}
}


To export the dashboards, we send a request to list all dashboards, and then export them one by one.


func (k *KibanaConfig) ExportConfig() {
	data := k.sendKibanaApiGet("/api/saved_objects/_find?type=dashboard")
	var objects savedObjects
	err := json.Unmarshal([]byte(data), &objects)
	if err != nil {
		panic(err)
	}

	for _, object := range objects.SavedObjects {
		k.exportDashboard(object)
	}
}


We keep methods to send GET and POST requests to kibana. The actual implementation uses the Go net/http package, and is out of scope for this post.


func (k *KibanaConfig) sendKibanaApiGet(urlSuffix string) string {
	webClient := web.CreateClient(0)

	fullUrl := k.kibanaUrl + urlSuffix

	headers := k.getHeaders()

	var result string
	webClient.Get(fullUrl, headers, &result)
	return result
}

func (k *KibanaConfig) sendKibanaApiPost(
	urlSuffix string,
	headers map[string]string,
	body interface{},
) string {
	webClient := web.CreateClient(0)

	fullUrl := k.kibanaUrl + urlSuffix

	var result string
	webClient.PostWithHeaders(fullUrl, body, headers, &result)
	return result
}


We use the following structures to communicate with kibana:


type savedObjectAttributes struct {
	Title string
}

type savedObject struct {
	Id         string
	Attributes *savedObjectAttributes
}

type savedObjects struct {
	SavedObjects []*savedObject `json:"saved_objects"`
}

type exportObject struct {
	Id   string `json:"id"`
	Type string `json:"type"`
}

type exportBody struct {
	Objects               []*exportObject `json:"objects"`
	IncludeReferencesDeep bool            `json:"includeReferencesDeep"`
}


For each of the dashboards, we get its details, and save it to a file.


func (k *KibanaConfig) exportDashboard(object *savedObject) {
	title := object.Attributes.Title

	body := exportBody{
		Objects: []*exportObject{
			{
				Id:   object.Id,
				Type: "dashboard",
			},
		},
		IncludeReferencesDeep: true,
	}

	headers := k.getHeaders()
	data := k.sendKibanaApiPost("/api/saved_objects/_export", headers, &body)

	outputPath := fmt.Sprintf("%v/%v.ndjson",
		configFolder,
		title,
	)

	err := os.WriteFile(outputPath, []byte(data), 0644)
	if err != nil {
		panic(err)
	}
}


To import the dashboards to kibana we use the following method:


func listFolder(
	folderPath string,
	returnOnlyNames bool,
) []string {
	files, err := os.ReadDir(folderPath)
	if err != nil {
		panic(err)
	}

	result := make([]string, 0)
	for _, file := range files {
		if returnOnlyNames {
			result = append(result, file.Name())
		} else {
			result = append(result, folderPath+"/"+file.Name())
		}
	}
	return result
}

func (k *KibanaConfig) ImportConfig() {
	for _, filePath := range listFolder(configFolder, false) {
		k.importDashboard(filePath)
	}
}


The actual import uses form data to send the file.


func (k *KibanaConfig) importDashboard(filePath string) {
	file, err := os.Open(filePath)
	if err != nil {
		panic(err)
	}
	defer func() {
		err := file.Close()
		if err != nil {
			panic(err)
		}
	}()

	body := &bytes.Buffer{}
	writer := multipart.NewWriter(body)
	part, err := writer.CreateFormFile("file", filepath.Base(file.Name()))
	if err != nil {
		panic(err)
	}
	_, err = io.Copy(part, file)
	if err != nil {
		panic(err)
	}
	err = writer.Close()
	if err != nil {
		panic(err)
	}

	headers := k.getHeaders()
	headers["Content-Type"] = writer.FormDataContentType()
	k.sendKibanaApiPost("/api/saved_objects/_import?overwrite=true", headers, body)
}



Final Note


Once all is implemented, the usage is simple. To export the configuration, we use:

config := kibanaconfig.ProduceKibanaConfig(request.KibanaUrl, request.ElasticPassword)
config.ExportConfig()

And to import the configuration, we use the following:

config := kibanaconfig.ProduceKibanaConfig(request.KibanaUrl, request.ElasticPassword)
config.ImportConfig()