Monday, January 31, 2022

Copy to Clipboard



 

In this post we will review a full working implementation of "copy to clipboard" in React. This includes the JavaScript code that handles the copy, the React code that shows and hides the "text copied" message with a 2-second timeout, and the CSS styling that places the message on top.


We start with the React & Redux template:


npx create-react-app my-app --template redux


Now update the application file to contain the following:


App.js

import React, {useState} from 'react'
import './App.css'

function App() {
  const [show, setShow] = useState(false)
  const text = 'this is my text, click on the copy icon to copy it.'

  function click() {
    navigator.clipboard.writeText(text).then(() => {
      setShow(true)
      setTimeout(() => {
        setShow(false)
      }, 2000)
    })
  }

  let copiedMessage = null
  if (show) {
    copiedMessage = (
      <div
        className="copy-message"
      >
        Copied to clipboard
      </div>
    )
  }
  return (
    <div className="root">
      <div className="header">Right below, we have the example.</div>
      <div className="container">
        <div className="copy-icon" onClick={click}>{copiedMessage}</div>
        <div className="text">{text}</div>
      </div>
      <div className="header">The end of the example.</div>
    </div>
  )
}

export default App



The application includes a conditionally rendered div which contains the "copied to clipboard" message. This div is displayed for 2 seconds after clicking the copy-to-clipboard icon. To implement this, we use React's useState hook.


The actual copy of the text is done using navigator.clipboard. If backward compatibility with older browsers is needed, other methods should be used, as described here.
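
For older browsers, a minimal fallback sketch is shown below, based on the legacy (now deprecated) document.execCommand('copy') API; the helper name is ours:

function copyTextFallback(text) {
  // create an off-screen textarea holding the text to copy
  const textarea = document.createElement('textarea')
  textarea.value = text
  textarea.style.position = 'fixed' // avoid scrolling to the bottom of the page
  document.body.appendChild(textarea)
  textarea.focus()
  textarea.select()
  try {
    // legacy copy command, supported by older browsers
    document.execCommand('copy')
  } finally {
    document.body.removeChild(textarea)
  }
}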


We update the stylesheet to contain the following:


App.css

.root {
  font-size: 30px;
}

.container {
  display: flex;
  flex-direction: row;
}

.copy-icon {
  background: url('copy.png') center;
  background-size: cover;
  margin: 5px;
  height: 25px;
  width: 25px;
  padding: 0;
  z-index: 99;
  cursor: pointer;
}

.header {
  margin: 30px;
}

.copy-message {
  font-size: 25px;
  float: left;
  border: 1px solid black;
  background: white;
  width: 250px;
  height: 30px;
  margin-left: 20px;
  margin-top: 20px;
}



We use flex to display the icon and the text side by side.

To ensure the "copied to clipboard" message is displayed on top, we use both z-index and float.





Sunday, January 23, 2022

Using Plotly Scatter Plot in React

 


In this post we will review the usage of a Plotly-based scatter plot in a React application.


We start by creating the React & Redux template:


npx create-react-app my-app --template redux



Next we add the React Plotly library:


npm i -s react-plotly.js



And update the application to show our scatter plot:



import React from 'react'
import './App.css'
import createPlotlyComponent from 'react-plotly.js/factory'

function App() {
  const Plot = createPlotlyComponent(window.Plotly)

  return (
    <div className="App">
      <header className="App-header">
        <Plot
          data={[
            {
              x: [1, 2, 3, 1],
              y: [1, 2, 3, 5],
              text: ['p1', 'p2', 'p3', 'p4'],
              type: 'scatter',
              mode: 'lines+markers',
              marker: {
                color: 'green',
                size: 10,
              },
            },
          ]}
          layout={{
            width: 800,
            height: 600,
          }}
        />
      </header>
    </div>
  )
}

export default App



Notice that we do not use the Plotly component directly. This is because we want to reduce the size of the compiled application bundle. Instead of including the entire plotly.js and d3 bundle, we download it from the CDN using the following update to the application HTML file. npm still downloads all of the bundle dependencies, but webpack includes only the resources actually used, so plotly.js and d3 are not included in the bundle.


index.html

<script crossorigin src="https://cdn.plot.ly/plotly-latest.min.js"></script>
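
For comparison, the direct import below would cause webpack to bundle the entire plotly.js (and d3) into the compiled output, assuming the plotly.js peer dependency is installed; this is exactly what the factory approach above avoids:

// direct import: plotly.js is pulled into the application bundle
import Plot from 'react-plotly.js'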


And the result is a nice scatter plot:




Final Note


We have reviewed the process of adding a scatter plot to a React application. A scatter plot can be used to show, for example, location over time. We have specifically used it to show mouse movement.


Saturday, January 15, 2022

Using Let's Encrypt in GKE Ingress



 

In this post we will review the steps to create a signed SSL certificate for our site, running in Google Kubernetes Engine (aka GKE) and using Ingress to handle the incoming traffic.

The SSL certificate is created by the Let's Encrypt service, which automatically, and free of charge, supplies the signing and renewal services, based on ownership of the domain in the DNS. This works only if traffic to the related domain that we are signing is sent to the GKE ingress, thereby assuring that the SSL certificate requester is indeed authentic.

In the following example, we present signing of 2 FQDNs:
www.my-domain.com, api.my-domain.com


After connecting to the GKE cluster, use Helm to install cert-manager, which will manage the SSL certificate creation and renewal.


kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.2.0
kubectl get pods --namespace cert-manager


The following steps include 2 parts. First we use the test/staging Let's Encrypt service to ensure that the integration with Let's Encrypt indeed works well. Only then do we move to the production Let's Encrypt service. This is because the production Let's Encrypt service has rate limits on the amount of requests, and its certificate signing is slower.


Using Staging Let's Encrypt

First we create a test domain certificate.


cat <<EOF > test-resources.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-test
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  dnsNames:
  - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
EOF


kubectl apply -f test-resources.yaml



And check that it is working without errors using the command:


kubectl describe certificate -n cert-manager-test


And once we see all is working fine, remove the test domain certificate:


kubectl delete -f test-resources.yaml



Now, let's move to the actual staging issuer:


cat <<EOF > clusterissuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: john.doe@my-email.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: ingress-gce
EOF



kubectl apply -f clusterissuer.yaml


and use the following command to check that the issuer is ready:


kubectl describe clusterissuer letsencrypt-staging


Next, update the ingress to use the certificate manager, by adding the annotations and updating the tls section.


annotations:
  cert-manager.io/cluster-issuer: letsencrypt-staging
  acme.cert-manager.io/http01-edit-in-place: "true"
tls:
- secretName: ingress-secret-letsencrypt
  hosts:
  - www.my-domain.com
  - api.my-domain.com
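
To put these fragments in context, below is a sketch of how a complete Ingress resource might look after the update; the ingress name, service names, and ports are illustrative placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  tls:
  - secretName: ingress-secret-letsencrypt
    hosts:
    - www.my-domain.com
    - api.my-domain.com
  rules:
  - host: www.my-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: www-service
            port:
              number: 80
  - host: api.my-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80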


Use the following commands to follow the certificate signing process:


kubectl get certificate
kubectl describe certificate ingress-secret-letsencrypt
kubectl describe secret ingress-secret-letsencrypt
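
If the certificate stays in a pending state, the ACME order and challenge resources created by cert-manager usually explain why; a sketch of the commands to inspect them:

kubectl get orders,challenges --all-namespaces
kubectl describe challenges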


Once the process is done, wait for the ingress-related load balancer to be updated. This takes about 15 minutes. When connecting to the domain, we still get an invalid certificate error, since the certificate was issued by the staging service; but viewing the certificate details, we can see that it was indeed signed by the staging service, which indicates that we can move to the next step and use the production service.


Move to Production Let's Encrypt

As we've done before, we create an issuer, but this time for the production service.



cat <<EOF > clusterissuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: john.doe@my-email.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: ingress-gce
EOF


kubectl apply -f clusterissuer.yaml
kubectl delete secret ingress-secret-letsencrypt


And track the progress using the commands:


kubectl describe clusterissuer letsencrypt-prod
kubectl get order


Next, we should wait again for the ingress load balancer to update, for another ~15 minutes, and then check that the certificate is indeed valid.
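
One way to verify the served certificate from the command line is sketched below, using openssl against the example domain from above:

echo | openssl s_client -connect www.my-domain.com:443 -servername www.my-domain.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates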



Final Note


If needed, see also how to start with a self-signed certificate GKE ingress in the following post.

Monday, January 10, 2022

Print Git History Changes in Google Cloud Build


 


In previous posts we have reviewed how to use Google Cloud Build, and how to manage dependencies of multiple build triggers. In this post we will review how we can print the list of git changes since the last successful build.

Our build runs a shell script, using the predefined Google builder image. We send the current build git hash as an argument to the script. Hence the trigger configuration is the following:


name: build

triggerTemplate:
  branchName: .*
  projectId: my-project
  repoName: my-repo

build:
  timeout: 3600s
  steps:
  - id: main
    name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    timeout: 3600s
    args:
    - /build.sh
    - ${BRANCH_NAME}
    - ${COMMIT_SHA}
    - ${SHORT_SHA}
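
If the trigger is managed as a file, it can be imported with gcloud; a sketch, assuming the configuration above is saved as trigger.yaml and the gcloud beta components are installed:

gcloud beta builds triggers import --source=trigger.yaml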



The build.sh file gets the arguments:



branchName=$1
commitSha=$2
shortSha=$3



and includes the following:



shaFile="${branchName}_git_commit.sha"


function printGitLog() {
  set +e
  gsutil -q stat gs://my-google-storge-bucket/${shaFile}
  rc=$?
  set -e
  if [[ "${rc}" == "0" ]]; then
    echo "=== changes from last successful build ==="
    gsutil cp gs://my-google-storge-bucket/${shaFile} ./${shaFile}
    prevSha=`cat ${shaFile}`
    git fetch --depth=100

    # do not fail the build if we cannot find the range in the last 100 commits
    set +e
    git log ${prevSha}..${commitSha}
    set -e
    echo "=========================================="
  else
    echo "no previous sha file"
  fi
}

function saveGitSha() {
  echo "Saving build sha to GCP storage"
  echo ${commitSha} > ${shaFile}_temp
  tr -d '\t\n ' < ${shaFile}_temp > ${shaFile}
  rm -f ${shaFile}_temp

  gsutil cp ${shaFile} gs://my-google-storge-bucket
}



We should call the saveGitSha function upon a successful build. This saves the build hash to a file in Google Storage (you should manually create this bucket in Google Storage).

Also, upon build start, we call printGitLog, which retrieves the last successful hash from Google Storage, fetches the git history, and then prints the changes between the last successful hash and the current build hash.
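
For completeness, a minimal sketch of how the two functions might be wired together in build.sh; the actual build steps are placeholders:

set -e

# print the git changes since the last successful build (if any)
printGitLog

# ... the actual build and test steps of the project run here ...

# reached only when the steps above succeed, since the script runs with set -e
saveGitSha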




Monday, January 3, 2022

Installing Kubernetes on your Development Machine

 


I hear about many developers that are using minikube to run Kubernetes on their development machine.

Working on a local Kubernetes cluster during the development stage can in many cases shorten the development life-cycle, thanks to the ability to quickly observe the end-to-end behavior of the system. However, using minikube does not seem like the right way to do it. Minikube uses a VM running on the development machine, hence the performance of the Kubernetes cluster suffers, and the load on the development machine is high.

Instead of using minikube, a good alternative is to install a bare-metal Kubernetes on the development machine. It might take a bit longer to install, but its simplicity and performance sure make this method way better.


To install the kube* binaries on the development machine use the following script:


echo "Update the apt"
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

echo "Download the GCP signing key"
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "Add the Kubernetes apt repo"
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

echo "Update kube* binaries"
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
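
A quick check, sketched below, verifies that the binaries were installed:

kubectl version --client
kubeadm version
kubelet --version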



To install kubernetes on the development machine, we can use the following script:



# Install cluster
sudo rm -rf $HOME/.kube
sudo rm -rf /var/lib/etcd
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# allow scheduling pods on the master node (single node cluster)
kubectl taint nodes --all node-role.kubernetes.io/master-

# Install calico
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
rm -f calico.yaml
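
Once the script completes, a short sanity check, sketched below, confirms the node is Ready and the system pods, including Calico, are running:

kubectl get nodes -o wide
kubectl get pods -n kube-system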



If we're already using a script, why not install other requirements as well, such as Helm:



# Install Helm
rm -rf ~/.helm
sudo rm -rf /usr/local/bin/helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
rm ./get_helm.sh



And you can go on and add other requirements for your application as part of this script, such as the metrics server, or anything else you might require on the Kubernetes cluster.
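
For example, below is a sketch of installing the metrics server from its upstream release manifest; on a kubeadm cluster it may also need the --kubelet-insecure-tls argument, depending on the kubelet certificate setup:

# install the metrics server from the official release manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# after a minute or two, resource metrics should become available
kubectl top nodes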