
Tuesday, January 26, 2021

Get the Client URL in a ServiceWorker


 

In this post we will review a method of fetching the client URL in a ServiceWorker.

Note that I've already written a post about ServiceWorker general concepts here.


To get the client whose request the ServiceWorker is handling, the documentation points us to the event.clientId property and to the clients.get API. However, what the documentation does not spell out is that the clients API is asynchronous, while the ServiceWorker fetch event handler itself is synchronous.

The way to handle this is to pass a promise to the event.respondWith API. In addition, to make sure the service worker handles all clients, including the one it was activated from, use the clients.claim API.

So our final ServiceWorker looks as follows:



self.addEventListener('install', event => event.waitUntil(self.skipWaiting()))

self.addEventListener('activate', event => event.waitUntil(self.clients.claim()))

self.addEventListener('fetch', serviceWorkerListener)


function serviceWorkerListener(event) {
  event.respondWith(serviceWorkerListenerAsync(event))
}

async function serviceWorkerListenerAsync(event) {
  let clientUrl

  if (event.clientId) {
    const clients = await self.clients.matchAll({
      includeUncontrolled: true,
      type: 'window',
    })
    clients.forEach(client => {
      if (client.id === event.clientId) {
        clientUrl = client.url
      }
    })
  }
  return handleEvent(event, clientUrl)
}



Now we can implement the handleEvent function which receives the original event as well as the client URL.
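For example, a minimal handleEvent sketch could log the client URL and simply pass the request through to the network; the function body and the logging here are illustrative, not part of the original post:

async function handleEvent(event, clientUrl) {
  // clientUrl may be undefined, e.g. when the request has no associated client yet
  console.log('handling', event.request.url, 'for client', clientUrl)

  // here we just forward the request to the network;
  // a real implementation could vary the response based on clientUrl
  return fetch(event.request)
}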



Final Note


Using the method described above, the ServiceWorker can return different results for the same requested URL based on the client's URL. This is a great method to add flexibility to your site.



Sunday, January 24, 2021

Kubernetes CoreDNS External Resolving




The Problem 


I have a bare metal kubernetes deployment, and the external DNS resolving is not working.

What does this mean?

When I enter a specific pod using kubectl exec -it... and run ping to a deployed kubernetes service, everything works fine.

But when I try to access an external DNS name, such as google.com, I get an error that the name resolution has failed.


Checking the CoreDNS pods' logs using
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
displays errors:

[ERROR] plugin/errors: 2 google.com. A: read udp 192.168.204.66:53816->192.168.1.1:53: i/o timeout


The whole issue seems to be related to a change in the Ubuntu DNS setup (probably the new systemd-resolved service), which prevents the kubernetes CoreDNS pod from forwarding external DNS resolution to the Ubuntu DNS service.



The Bypass


I have bypassed this by configuring CoreDNS to use an external DNS server directly, instead of the local Ubuntu DNS.


Use the following steps to do this:


Edit the CoreDNS configuration:
kubectl -n kube-system edit configmap coredns

Change the line:
 forward . /etc/resolv.conf {
to:
 forward . 8.8.8.8 {

Restart the CoreDNS pods:
kubectl --namespace=kube-system delete pod -l k8s-app=kube-dns 
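
Once the new CoreDNS pods are running, external resolving can be verified from within the cluster; the pod name dns-test and the busybox image below are just an example:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup google.com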



Final Note


I hope that the kubernetes community will solve this issue in a future version. It is very common to run a bare metal kubernetes deployment on an Ubuntu machine, and it is a shame that we need to manually patch it to make it work.

Thursday, January 21, 2021

Using ECDSA in Python to Sign and Verify messages

  





In this post we will review how to sign and verify messages in Python using ECDSA, an asymmetric signature scheme based on elliptic curve cryptography.

First, let's generate a private key.


from hashlib import sha256

from ecdsa import SigningKey, SECP256k1, VerifyingKey
from ecdsa.ellipticcurve import Point

private_key = SigningKey.generate(curve=SECP256k1)


The private key is based on a secret. This secret can be exported if we want to later reload the same private key.


exported_secret = private_key.to_string()
private_key = SigningKey.from_string(exported_secret, curve=SECP256k1)


From the private key, we can derive the public key. The public key is used for verification of the signature and, as its name implies, is public, so we should send it to the party that needs to verify the sender's identity. The public key can be sent using its X,Y coordinates, and imported on the other party's side.



public_key = private_key.get_verifying_key()
point = Point(SECP256k1.curve, public_key.pubkey.point.x(), public_key.pubkey.point.y())
public_key_imported = VerifyingKey.from_public_point(point, curve=SECP256k1)



Once we have a private key (generated or imported) we can sign messages. First we hash the message using sha256, and then we sign the hash. The result of the signing is two values: R,S. These should be sent as additional metadata for authentication of the message sender's identity (together with the public key that we have already sent).


# the custom signature encoder was missing in the original snippet; it keeps the raw R,S values
def my_signature_encode(r, s, order):
    return r, s

message = b'hello'
signature = private_key.sign(message, sigencode=my_signature_encode, hashfunc=sha256)
print("R", signature[0])
print("S", signature[1])


To verify the message we use the public key.



def my_signature_decode(signature, order):
    return signature[0], signature[1]


verify = public_key.verify(signature, message, sigdecode=my_signature_decode, hashfunc=sha256)
print("verify", verify)




Wednesday, January 20, 2021

Using ECDSA in JavaScript to Sign and Verify messages


  


In this post we will review how to sign and verify messages in JavaScript using ECDSA, an asymmetric signature scheme based on elliptic curve cryptography.


This post covers using asymmetric cryptography for sign/verify. If you need asymmetric encryption to encrypt/decrypt, check this post.


First, let's generate a private key.



const EC = require('elliptic').ec
const ec = new EC('secp256k1')

function main() {
  const seed = 'my-secret-password'
  const privateKey = ec.keyFromPrivate(seed)



The private key is based on a seed. To recreate the private key, simply rerun this with the same seed.

From the private key, we can derive the public key. The public key is used for verification of the signature and, as its name implies, is public, so we should send it to the party that needs to verify the sender's identity. The public key can be sent using its X,Y coordinates, and imported on the other party's side.



const publicKey = privateKey.getPublic()

const exportedPublicKey = JSON.stringify({
  X: publicKey.getX().toString(16),
  Y: publicKey.getY().toString(16),
})



Once we have a private key we can sign messages. First we hash the message using sha256, and then we sign the hash. The result of the signing is two values: R,S. These should be sent as additional metadata for authentication of the message sender's identity (together with the public key that we have already sent).



// sha256 is assumed to be a hashing helper, e.g. from the js-sha256 package (not shown in the post)
const message = 'Hello World!'
const hash = sha256(message)
const signature = privateKey.sign(hash)
const signatureData = {r: signature.r.toString(16), s: signature.s.toString(16)}



To verify the message we use the public key.



const valid = ec.verify(hash, signature, publicKey)




Using ECDSA in GO to Sign and Verify messages

 



In this post we will review how to sign and verify messages in GO using ECDSA, an asymmetric signature scheme based on elliptic curve cryptography.


This post covers using asymmetric cryptography for sign/verify. If you need asymmetric encryption to encrypt/decrypt, check this post.


First, let's generate a private key.



import (
    "crypto/ecdsa"
    "crypto/rand"
    "crypto/sha256"
    "fmt"
    "github.com/decred/dcrd/dcrec/secp256k1"
    "math/big"
)

func main() {
    privateKeySecp256k1, err := secp256k1.GeneratePrivateKey()
    if err != nil {
        panic(err)
    }
    privateKey := (*ecdsa.PrivateKey)(privateKeySecp256k1)



The private key is based on a secret. This secret can be exported if we want to later reload the same private key.



exportedSecret := fmt.Sprintf("%x", privateKey.D)

d := new(big.Int)
d.SetString(exportedSecret, 16)
privateKey = new(ecdsa.PrivateKey)
privateKey.PublicKey.Curve = secp256k1.S256()
privateKey.D = d

privateKey.PublicKey.X, privateKey.PublicKey.Y = privateKey.PublicKey.Curve.ScalarBaseMult(privateKey.D.Bytes())



From the private key, we can derive the public key. The public key is used for verification of the signature and, as its name implies, is public, so we should send it to the party that needs to verify the sender's identity. The public key can be sent using its X,Y coordinates, and imported on the other party's side.



publicKey := privateKey.PublicKey
exportedPublicX := fmt.Sprintf("%x", publicKey.X)
exportedPublicY := fmt.Sprintf("%x", publicKey.Y)

x := new(big.Int)
x.SetString(exportedPublicX, 16)
y := new(big.Int)
y.SetString(exportedPublicY, 16)

importedPublicKey := ecdsa.PublicKey{
    Curve: secp256k1.S256(),
    X:     x,
    Y:     y,
}



Once we have a private key (generated or imported) we can sign messages. First we hash the message using sha256, and then we sign the hash. The result of the signing is two values: R,S. These should be sent as additional metadata for authentication of the message sender's identity (together with the public key that we have already sent).




message := "Hello World!"
hash := sha256.Sum256([]byte(message))

r, s, err := ecdsa.Sign(rand.Reader, privateKey, hash[:])
if err != nil {
    panic(err)
}



To verify the message we use the public key.



publicKey := privateKey.PublicKey
verified := ecdsa.Verify(&publicKey, hash[:], r, s)
if !verified {
    panic("verify failed")
}




Tuesday, January 12, 2021

Using Google Cloud CDN for a Kubernetes Deployment


 


This post reviews the steps to use Google Cloud CDN for a kubernetes deployment on Google Kubernetes Engine (GKE). The CDN configuration is based on the kubernetes Ingress on GKE, which greatly simplifies the CDN setup process.


First, configure the ingress as part of the kubernetes deployment:


ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.global-static-ip-name: "app-ip"
spec:
  tls:
    - secretName: ingress-secret
  rules:
    - host: app.com
      http:
        paths:
          - path: /app/service1/*
            backend:
              serviceName: service1
              servicePort: 80
          - path: /app/service2/*
            backend:
              serviceName: service2
              servicePort: 80


Once the ingress is deployed (it takes several minutes), it automatically creates a backend for each service. In this example, as we have 2 services, it will create two backends.


Now, let's configure CDN for service2 using the gcloud CLI.



backend=$(kubectl get ingress ingress -o json | jq -j '.metadata.annotations."ingress.kubernetes.io/backends"' | jq '.' |grep service2 | cut -d\" -f2)
gcloud compute backend-services update --global ${backend} --enable-cdn --cache-mode=CACHE_ALL_STATIC



Notice that any CDN change takes some time to apply, from several minutes up to hours.

The CDN then starts serving our requests for service2. We can view statistics for the served requests in the GCP console, under Network Services, Cloud CDN.


A great method to check whether a response is arriving from the CDN is curl. We can run

curl -v http://myapp.com

and then look for the age header, which is added by the CDN and indicates how many seconds have passed since the CDN first cached the response.
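
For example, to print only the response headers and filter for the age header (myapp.com and the service path are the post's placeholders):

curl -s -D - -o /dev/null http://myapp.com/app/service2 | grep -i '^age'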

We can also check the performance of accessing the service from an end client using the ab tool:



sudo apt install apache2-utils
ab -c 2 -n 10 http://myapp.com/app/service2



One more item that requires our attention is cache invalidation. As part of our CI/CD process, we would like to invalidate the CDN cache to ensure that a new version is downloaded to the clients. This can be done using the following CLI:



urlMap=$(gcloud compute url-maps list | grep app | awk '{print $1}')
gcloud compute url-maps invalidate-cdn-cache ${urlMap} --path "/*"






Monday, January 11, 2021

Script to Rebuild a Kubernetes Pod



 

Many developers who work in a kubernetes environment run the kubernetes cluster on their local development machine, as it simplifies the development process, and it is faster and cheaper than using a remote kubernetes cluster in the cloud or on VMs.

Usually, when you change the source of one of the pods, you want to replace it, so you need to follow these steps:

  • build the changed container image from the sources
  • delete the existing pod
  • wait for the new pod (which uses the new built container) to start
I used to do these steps manually, until I went one step further and automated them with the following script:



#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
IMAGE=$1

function podReady(){
  count=$(kubectl get pods -l configid=${IMAGE}-container | grep Running | wc -l)
  if [[ "${count}" == "0" ]]; then
    return 1
  else
    return 0
  fi
}

echo "replace image ${IMAGE}"
./images/${IMAGE}/build.sh
kubectl delete pod -l configid=${IMAGE}-container &
sleep 2

until podReady
do
  echo "waiting for pod ${IMAGE}"
  sleep 1
done

kubectl get pods -l configid=${IMAGE}-container

echo "done"


The script receives the container name as an argument, and locates the related pod according to a label with this name.

Then it kills the existing pod, and completes once the new pod is running. It does not keep you waiting until the old pod is fully deleted, so it saves even more of your time.
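
For example, assuming the script is saved as rebuild.sh and the container image directory is named service1 (both names are illustrative), rebuilding is a single command:

./rebuild.sh service1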


Whenever I use this script, I feel great about the time saved.
I highly recommend using a similar script for your own development...



Wednesday, January 6, 2021

Service Worker


 

In this post we will review the Service Worker:


" A service worker is a script that your browser runs in the background, separate from a web page, opening the door to features that don't need a web page or user interaction "


What are these "new doors"? Running a service worker enables control over the network requests sent by the page to any server. The main goal of a service worker is to manage caching, giving you full control over the caching of items, including cache expiration. The service worker also enables additional inspection of requests and responses for other purposes, e.g. logging, security, and more.

Notice: the service worker will only run on sites served over SSL and on localhost (for development purposes).


To use a service worker, add the following to the main page of the application.


index.html

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function () {
    navigator.serviceWorker.register('/serviceworker').then(function (registration) {
    }, function (err) {
      console.log('ServiceWorker registration failed: ', err)
    })
  })
}



Then implement the service worker in its own file.


serviceworker.js

self.addEventListener('fetch', function (event) {
  const request = event.request
  let response
  // shouldBeSent and getFromCache are placeholders for the application's own logic
  if (shouldBeSent(request.url)) {
    response = fetch(request)
  } else {
    response = getFromCache(request)
  }
  event.respondWith(response)
})




The service worker can use the caches API to return a response from the cache, or to save a response to the cache. Notice, however, that the service worker is just another layer above the browser cache, so, for example, if the service worker decides not to handle the fetch event and runs the standard fetch, you might still end up getting the resource from the browser's local cache.
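
As a minimal sketch of using the caches API in the fetch handler (the cache name my-cache-v1 is just an example), a cache-first strategy might look like this:

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('my-cache-v1').then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        if (cached) {
          // serve the previously cached response
          return cached
        }
        // otherwise fetch from the network and store a copy in the cache
        return fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone())
          return response
        })
      })
    })
  )
})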



Final Note


There are many articles explaining the lifecycle of a service worker, but in most cases you don't really need to care much about it. The one item you do need to pay attention to is that the service worker will not be replaced as long as it serves a page, so if you change the service worker code, close all of the site's tabs to allow the browser to use the new version.

An alternative is to use the Chrome DevTools window, which is very useful for other purposes as well, and manually reload the service worker from there.