Wednesday, November 27, 2019

MongoDB ReplicaSet on kubernetes



Recently I wanted to run a MongoDB ReplicaSet on kubernetes.
I did not want to use a full MongoDB sharded cluster, as it is too complex to configure and maintain in a kubernetes environment. I also did not need high MongoDB performance, so sharding was not required.

Luckily, I've found a Helm chart in the official helm charts GitHub repository (see here).
It even included special handling for kubernetes pod initialization, in a dedicated script: on-init.sh

However, I've found that it does not work.



There were several issues. For example:

  1. Once a MongoDB pod was added to the ReplicaSet configuration, it was never removed. This causes several problems, for example the inability to reach a quorum once a pod has been restarted and received a different IP.
  2. If a MongoDB pod started but was not able to communicate with the previous pods, it started its own new ReplicaSet instead of failing.
  3. A single MongoDB pod that was restarted and received a new IP was not able to start the ReplicaSet, and got an error that it is not a member of its own ReplicaSet.



I've created my own init container to manage the ReplicaSet configuration. 
It was inspired by the on-init.sh script.
I've decided to use nodeJS, as the logic was too complex for a simple shell script.

First, I've created a StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  replicas: 3
  selector:
    matchLabels:
      configid: mongodb-container
  template:
    metadata:
      labels:
        configid: mongodb-container
    spec:
      serviceAccountName: mongodb-service-account
      initContainers:
        - name: init
          image: LOCAL_REGISTRY/mongo-init/dev:latest
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: persist-data
              mountPath: /data
              readOnly: false
      containers:
        - name: mongodb
          image: mongo:4.2.1
          imagePullPolicy: Always
          command:
            - mongod
          args:
            - --config=/mongo-config/mongo.conf
            - --dbpath=/data
            - --replSet=mongoreplica
            - --port=27017
            - --bind_ip=0.0.0.0
          volumeMounts:
            - name: persist-data
              mountPath: /data
              readOnly: false
            - name: mongo-config
              mountPath: /mongo-config
      volumes:
        - name: mongo-config
          configMap:
            name: mongo-config
  volumeClaimTemplates:
    - metadata:
        name: persist-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "hostpath"
        resources:
          requests:
            storage: 1Gi  # adjust the size to your needs

The StatefulSet includes an init container which runs the nodeJS code that configures the ReplicaSet (this is covered later in this article).
In addition, it includes a volumeClaimTemplate that keeps the same storage attached to the related pod even if it is restarted.
Also, a config map with a mongo.conf file is included. I've currently used an empty file, as I require only the default configuration.
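For reference, a minimal ConfigMap matching the StatefulSet above could look as follows (the mongo.conf content is intentionally left empty):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo.conf: ""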

Notice that the StatefulSet includes a serviceAccountName. This account should be granted permissions to list and get pods.
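For example, a Role and RoleBinding along the following lines could grant these permissions (the names and namespace are illustrative, and the mongodb-service-account ServiceAccount itself must also exist):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongodb-pod-reader-binding
subjects:
  - kind: ServiceAccount
    name: mongodb-service-account
    namespace: default  # adjust to your namespace
roleRef:
  kind: Role
  name: mongodb-pod-reader
  apiGroup: rbac.authorization.k8s.io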

Now, let's review the mongo-init container.
This includes a nodeJS application to configure the MongoDB ReplicaSet.




The following general logic is implemented:

  • Start MongoDB
  • Run kubectl to get all related pod IPs
  • Check each of the pods to locate the MongoDB primary
  • If a primary is located, and it is myself, we're done
  • If a primary is located, and it is another pod, find my configuration index according to the pod name, e.g. pod name mongodb-statefulset-4 is index #4. If this index exists in the configuration, update the IP of the existing member. Otherwise, add a new secondary.
  • If a primary is not located, start a new ReplicaSet.
  • Stop MongoDB and wait for the shutdown, allowing it to save the updated configuration


const podIp = process.env['POD_IP']
const podName = process.env['HOSTNAME']
const mongoReplicaSetName = 'mongoreplica'
const mongoPort = 27017
const addSecondaryRetries = 20

init()

async function init() {
  await startMongo()
  await configureReplicaSet()
  await stopMongo()
}

async function startMongo() {
  const command = `mongod --config=files/mongo.conf --dbpath=files/data --replSet=${mongoReplicaSetName} --port=${mongoPort} --bind_ip=0.0.0.0`
  const options = {
    checkError: false,
    waitForCompletion: false,
    streamOutputCallback: mongoOutputCallback,
  }
  const promiseWrapper = await runCommand(command, options)
  const commandPromise = promiseWrapper.promise

  const startedPromise = waitForPing()

  const result = await Promise.race([commandPromise, startedPromise])
  if (!result.mongoResponse) {
    throw new Error(`mongo start failed`)
  }
}

function mongoOutputCallback(data) {
  data.split('\n').forEach(line => {
    line = line.trim()
    if (line.length > 0) {
      console.log(`[MONGO] ${line}`)
    }
  })
}

async function stopMongo() {
  const mongoCommand = `db.shutdownServer({force: true})`
  await runMongoAdmin(false, '127.0.0.1', mongoCommand)
  await waitForMongoStop()
}

async function waitForMongoStop() {
  while (true) {
    const processes = await runCommand('ps -ef | grep mongo | grep -v grep', {checkError: false})
    if (processes.trim().length === 0) {
      return
    }

    await sleep(1000)
  }
}

async function configureReplicaSet() {
  const primary = await findPrimaryNode()
  if (primary) {
    await configureReplicaWithExistingPrimary(primary)
  } else {
    if (await isSecondaryPodLocated()) {
      throw new Error('secondary pod located, unable to start until primary located')
    }
    if (!await isReplicaConfigured()) {
      await createNewReplicaSet()
    }
  }
}

async function configureReplicaWithExistingPrimary(primary) {
  if (primary === podIp) {
    return
  }

  const memberIndex = await findMemberIndex(primary)
  if (memberIndex === null) {
    await addAsSecondary(primary)
  } else {
    await updateMemberAddress(primary, memberIndex)
  }
  await waitUntilSecondaryReady()
}

async function updateMemberAddress(primary, memberIndex) {
  const mongoCommand = `c=rs.conf(); c.members[${memberIndex}].host='${podIp}'; rs.reconfig(c)`
  await runMongoAdmin(true, primary, mongoCommand)
}

async function findMemberIndex(primary) {
  const configuration = await getReplicaConfiguration(primary)
  const members = configuration.members
  const podSuffix = parseInt(podName.substring(podName.lastIndexOf('-') + 1))

  for (let i = 0; i < members.length; i++) {
    const member = members[i]
    const memberId = parseInt(member['_id'])
    if (memberId === podSuffix) {
      return i
    }
  }
  return null
}

async function getReplicaConfiguration(primary) {
  let configurationText = await runMongoAdmin(true, primary, `rs.conf()`)
  configurationText = configurationText.replace(/NumberLong\((\d+)\)/g, '$1')
  configurationText = configurationText.replace(/ObjectId\("(\S+)"\)/g, '"$1"')
  return JSON.parse(configurationText)
}

async function isReplicaConfigured() {
  const result = await runMongoAdmin(false, '127.0.0.1', 'rs.status()')
  const configured = !result.includes('no replset config has been received')
  return configured
}

async function addAsSecondary(primary) {
  for (let i = 1; i <= addSecondaryRetries; i++) {
    try {
      await addAsSecondaryOnce(primary)
      return
    } catch (e) {
      console.log(`add secondary node failed: ${e.stack}`)
      if (i < addSecondaryRetries ) {
        console.log(`retry #${i} in a moment`)
        await sleep(10000)
      }
    }
  }
  throw new Error(`add node as secondary failed after ${addSecondaryRetries} retries`)
}

async function addAsSecondaryOnce(primary) {
  const mongoCommand = `rs.add('${podIp}:${mongoPort}')`
  const result = await runMongoAdmin(true, primary, mongoCommand)
  if (result.includes(`Quorum check failed`)) {
    throw new Error('add node as secondary failed')
  }
}

async function createNewReplicaSet() {
  const mongoCommand = `rs.initiate({'_id': '${mongoReplicaSetName}', 'members': [{'_id': 0, 'host': '${podIp}'}]})`
  try {
    await runMongoAdmin(true, '127.0.0.1', mongoCommand)
  } catch (e) {
    // the replica set configuration might pop up only now, so we recheck the replica set status
    console.log(`create replica failed: ${e.stack}`)
    if (!await reconfigureReplicaSetIfPossible()) {
      throw e
    }
  }
  await waitForMasterReady('127.0.0.1')
}

async function reconfigureReplicaSetIfPossible() {
  const result = await runMongoAdmin(true, '127.0.0.1', `rs.status()`)
  if (!result.includes('we are not a member of it')) {
    return false
  }

  await reconfigureReplicaSet()
  return true
}

async function reconfigureReplicaSet() {
  const mongoCommand = `\
    c=rs.conf(); \
    c.members.splice(1); \
    c.members[0].host='${podIp}'; \
    rs.reconfig(c, {force: true}) \
  `

  await runMongoAdmin(true, '127.0.0.1', mongoCommand)
}

async function findPrimaryNode() {
  const ips = await getPodsIps()
  for (let i = 0; i < ips.length; i++) {
    const ip = ips[i]
    if (await isPrimary(ip)) {
      return ip
    }
  }
}

async function isSecondaryPodLocated() {
  const ips = await getPodsIps()
  for (let i = 0; i < ips.length; i++) {
    const ip = ips[i]
    if (await isSecondary(ip)) {
      return true
    }
  }
  return false
}

async function isSecondary(ip) {
  const state = await runMongoAdmin(false, ip, 'rs.status().myState')
  return state === '2'
}

async function isPrimary(ip) {
  const state = await runMongoAdmin(false, ip, 'rs.status().myState')
  if (state !== '1') {
    return false
  }

  await waitForMasterReady(ip)
  return true
}

async function getPodsIps() {
  let args = `get pods -l configid=mongodb-container -o jsonpath='{range .items[*]}{.status.podIP} {end}'`
  const stdout = await kubectl.runKubectl(true, args)
  const ips = []
  stdout.trim().split(' ').forEach(ip => {
    if (ip.trim().length > 0) {
      ips.push(ip)
    }
  })
  return ips
}

async function waitForPing() {
  return await runMongoAdminUntilResponse('127.0.0.1', `db.adminCommand('ping').ok`, '1')
}

async function waitForMasterReady(host) {
  return await runMongoAdminUntilResponse(host, `db.isMaster().ismaster`, 'true')
}

async function waitUntilSecondaryReady() {
  return await runMongoAdminUntilResponse('127.0.0.1', `rs.status().myState`, '2')
}

async function runMongoAdminUntilResponse(host, mongoCommand, expectedResult) {
  const startTime = new Date().getTime()
  while (true) {
    const result = await runMongoAdmin(false, host, mongoCommand)
    if (result === expectedResult) {
      break
    }

    const passedTime = new Date().getTime() - startTime
    if (passedTime > 120000) {
      throw new Error(`timeout waiting for good response from command: ${mongoCommand}\nLast response was: ${result}`)
    }
    await sleep(1000)
  }
  return {
    mongoResponse: true,
  }
}

async function runMongoAdmin(checkError, host, mongoCommand) {
  const commandLine = `mongo admin --host ${host} --quiet --eval "${mongoCommand}"`
  const result = await runCommand(commandLine, {checkError: checkError})
  return result.trim()
}


The code uses the following helpers:


const {exec} = require('child_process')
async function runCommand(commandLine, options) {
  if (options === undefined) {
    options = {}
  }
  if (options.checkError === undefined) {
    options.checkError = true
  }
  if (options.waitForCompletion === undefined) {
    options.waitForCompletion = true
  }
  const promise = new Promise((resolve, reject) => {

    const execHandler = exec(commandLine, (err, stdout, stderr) => {
      let result = ''
      if (stdout) {
        result += stdout
      }
      if (stderr) {
        result += stderr
      }

      if (err) {
        if (options.checkError) {
          reject(err)
        } else {
          resolve(result)
        }
        return
      }

      resolve(result)
    })

    if (options.streamOutputCallback) {
      execHandler.stdout.on('data', (data) => {
        options.streamOutputCallback(data)
      })
    }
  })

  if (options.waitForCompletion) {
    return await promise
  }

  return {promise: promise}
}



async function sleep(time) {
  await new Promise((resolve) => {
    setTimeout(() => {
      resolve()
    }, time)
  })
}
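
The getPodsIps function above also references a kubectl helper which is not part of the listing. A minimal sketch of it, assuming it simply delegates to the runCommand helper, could be:

const kubectl = {
  async runKubectl(checkError, args) {
    // prefix the arguments with the kubectl binary and reuse the generic runCommand helper
    return await runCommand(`kubectl ${args}`, {checkError: checkError})
  },
}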

Summary

I've successfully run a MongoDB ReplicaSet on kubernetes.
The ReplicaSet has proven to recover from both partial and full restarts of the pods.




Wednesday, November 20, 2019

redis cluster survivability in kubernetes

A week ago, I've published an article about how to deploy a redis cluster on kubernetes.
In that article, I've added an entrypoint.sh script, which handles the IP change of a redis pod in case it is restarted.

But, that's not enough.

If all of the redis cluster pods are restarted, the redis pods will no longer be able to communicate with each other.

Why?

Each redis pod holds a nodes.conf file, which includes the list of the redis nodes. Each line in the file contains a redis node ID and the redis node IP. If we restart all of the redis cluster nodes, all of the node IPs change, hence a redis node cannot re-establish connections with the others using the out-of-date IPs in its nodes.conf.

How can we solve this?



The general solution is as follows:

  1. Create a kubernetes config map holding a mapping of redis node ID to kubernetes pod name.
    The kubernetes pod name is assured not to change, since we are using a kubernetes StatefulSet. An example of such config is:

    79315a4ceef00496afc8fa7a97874e5b71dc547b redis-statefulset-1
    b4d9be9e397d19c63bce602a8661b85ccc6e2d1d redis-statefulset-2
  2. Upon each redis pod startup, read the pods config map, find the new IP by the pod name, and replace it in the nodes.conf. This can be done by a nodejs initContainer running in the redis pod before the redis container. This container should have the pods config map mounted as a volume, for example under /nodes.txt (see the wiring sketch after this list).
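The following is a minimal sketch of how this init container could be wired into the redis StatefulSet from the previous article (the image name, container name and config map name are illustrative; the fragment belongs under the pod template spec, next to the existing redis container and volumes):

      initContainers:
        - name: nodes-conf-updater
          # assumed image containing the nodejs updater code below
          image: my-registry/redis-nodes-updater:latest
          volumeMounts:
            # the same /data volume used by the redis container
            - name: redis-data
              mountPath: /data
            # config map holding the redis node ID to pod name mapping, under the nodes.txt key
            - name: redis-nodes
              mountPath: /nodes.txt
              subPath: nodes.txt
      volumes:
        - name: redis-nodes
          configMap:
            name: redis-nodes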
The redis config update init container code is below.
const fs = require('fs')
const {exec} = require('child_process')

// path of the redis cluster configuration file on the persistent volume
const nodesConfPath = '/data/nodes.conf'
// path of the node ID to pod name config map, mounted as a volume
const nodesPath = '/nodes.txt'

init()

async function init() {
  if (!fs.existsSync(nodesConfPath)) {
    return
  }

  const configuration = fs.readFileSync(nodesConfPath, 'utf8')
  const updatedConfiguration = await updateConfiguration(configuration)
  fs.writeFileSync(nodesConfPath, updatedConfiguration)
}

async function updateConfiguration(configuration) {
  const lines = []
  const configLines = configuration.split('\n')
  for (let i = 0; i < configLines.length; i++) {
    const line = configLines[i].trim()
    if (line.length > 0) {
      lines.push(await updateConfigurationLine(line))
    }
  }
  return lines.join('\n')
}

async function updateConfigurationLine(line) {
  const sections = line.match(/(\S+) (\S+)(:.*)/)
  if (sections == null) {
    return line
  }
  const nodeId = sections[1]
  const nodeIp = sections[2]
  const other = sections[3]
  const currentNodeIp = await getCurrentNodeIp(nodeId, nodeIp)
  return `${nodeId} ${currentNodeIp}${other}`
}

async function getCurrentNodeIp(nodeId, nodeIp) {
  const nodesPods = fs.readFileSync(nodesPath, 'utf8')
  const nodesPodsLines = nodesPods.split('\n')
  for (let i = 0; i < nodesPodsLines.length; i++) {
    const line = nodesPodsLines[i].trim()
    if (line.length > 0) {
      const sections = line.split(' ')
      const configuredNodeId = sections[0]
      const configuredPodName = sections[1]
      if (configuredNodeId === nodeId) {
        const existingNodeIp = await fetchPodIpByName(configuredPodName)
        if (existingNodeIp != null) {
          nodeIp = existingNodeIp
        }
      }
    }
  }

  return nodeIp
}

async function fetchPodIpByName(podName) {
  const jsonParse = '{.status.podIP}'
  const args = `get pods ${podName} -o jsonpath='${jsonParse}'`
  const stdout = await kubectl(args)
  const ip = stdout.match(/(\d+\.\d+\.\d+\.\d+)/)
  if (ip) {
    return ip[1]
  }

  return null
}


async function kubectl(args) {
  return await new Promise((resolve, reject) => {
    const commandLine = `kubectl ${args}`
    exec(commandLine, (err, stdout, stderr) => {
      if (err) {
        reject(err)
        return
      }
      resolve(stdout)
    })
  })
}



Summary

Using the IPs updater init container, in combination with the config map, allows the redis cluster to fully recover from both full and partial restarts. Notice that the init container should be granted permissions to list and get pods.


Don't use ethereum bootnode



Ethereum documentation recommends using bootnode for private ethereum networks.
Once the boot node is installed, any geth based miner specifies the boot node ip and key, for example:

geth --bootnodes "enode://BOOT_NODE_KEY@BOOT_NODE_IP:30301"

This way, the boot node is notified of any new miner that starts, and updates the new miner, using the gossip protocol, about any other existing miners, hence allowing the new miner to join the ethereum network.

Sounds fine, right?

Well... Not perfect.

This design works only as long as the boot node is alive.
Once the boot node crashes, new nodes cannot connect to the ethereum network.
The boot node is a single point of failure.

You can state: that's fine. Run the boot node as part of kubernetes, or docker swarm, and let it automatically restart the boot node if it crashes.

But let's examine this:
The boot node will start from scratch, with no miner peers to talk to, so all the existing miners will remain part of the old ethereum network, while new miners will establish a new ethereum network, separated from the old one.

What is the solution?

The solution involves storing the set of all miners in an external database, such as MongoDB, or, when running in kubernetes, in a kubernetes ConfigMap.
Wrap each miner startup with the following logic:

  1. Generate node key using the bootnode -genkey command
  2. Store the current miner enode address in the external set of miners. The enode address is in format: enode://NODE_KEY@NODE_IP:30303
  3. Fetch all enode addresses from the external set of miners, and run geth using all the addresses separated by commas (a sketch of this wrapper logic follows the list). For example:
    geth --bootnodes "enode://NODE_KEY1@NODE_IP1:30303,enode://NODE_KEY2@NODE_IP2:30303"
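
A minimal sketch of this wrapper in nodeJS, assuming the set of miners is kept in a kubernetes ConfigMap named miner-enodes (one entry per pod), and that kubectl, bootnode and geth are available in the miner image; the ConfigMap name, file paths, ports and environment variables are illustrative assumptions:

const {execSync} = require('child_process')

const podIp = process.env['POD_IP']        // assumed to be injected via the downward API
const podName = process.env['HOSTNAME']

// 1. Generate the node key and derive the node ID (public key) from it
execSync('bootnode -genkey /data/node.key')
const nodeId = execSync('bootnode -nodekey /data/node.key -writeaddress').toString().trim()
const myEnode = `enode://${nodeId}@${podIp}:30303`

// 2. Store the current miner enode address in the shared set of miners
const patch = JSON.stringify({data: {[podName]: myEnode}})
execSync(`kubectl patch configmap miner-enodes --type merge -p '${patch}'`)

// 3. Fetch all enode addresses and start geth with them as boot nodes
const configMap = JSON.parse(execSync('kubectl get configmap miner-enodes -o json').toString())
const bootnodes = Object.values(configMap.data || {}).join(',')
execSync(`geth --nodekey /data/node.key --bootnodes "${bootnodes}"`, {stdio: 'inherit'})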

Summary

Using the external set of miners has multiple advantages:
  1. There is no single point of failure
  2. Miners always join the same ethereum network, and do not create a separate one
  3. The bootnode process is no longer required (though we do need an external location to save the set of miners)


Wednesday, November 13, 2019

Deploy redis cluster on kubernetes








Notice: See an update for this issue in the article: redis cluster survivability in kubernetes




Redis is an in-memory data store that can be used as a cache and a message broker.
It is very useful in a kubernetes environment where multiple replicas of one deployment might need to save/pass information that will later be used by replicas of another deployment.

To deploy redis on kubernetes, I've used a simple working implementation.
This article presents the steps I've taken:

  • Create a docker image to wrap the redis, and add some files
  • Create a kubernetes statefulset
  • Create a kubernetes service




The Docker Image 


A docker image is required to wrap the original redis docker image with some extensions.
Notice that you can achieve this even without creating your own image, by applying the changes through the kubernetes StatefulSet spec instead. However, I've found it clearer to create a dedicated image.

Dockerfile:

FROM redis:5.0.6
COPY files /
ENTRYPOINT /entrypoint.sh

The 'files' folder contains two files:

redis.conf:

  • configuring the instance as part of a cluster (cluster enabled)
  • specifying the path of the redis cluster configuration file

port 6379
cluster-enabled yes
cluster-require-full-coverage yes
cluster-node-timeout 5000
appendonly yes
cluster-config-file /data/HOSTNAME/nodes.conf

entrypoint.sh:

  • The entrypoint script replaces the IP in the nodes.conf. This is required to allow redis to identify itself in case of a pod restart. The script originated from this issue.
  • Once the IP address is handled, it starts the redis server

#!/usr/bin/env bash
set -e
HOST_FOLDER=/data/${HOSTNAME}
CLUSTER_CONFIG="${HOST_FOLDER}/nodes.conf"
mkdir -p ${HOST_FOLDER}

if [[ -f ${CLUSTER_CONFIG} ]]; then
  if [ -z "${POD_IP}" ]; then
    echo "Unable to determine Pod IP address!"
    exit 1
  fi
  echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
  sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
fi

sed -i "s/HOSTNAME/${HOSTNAME}/g" /redis.conf
exec /usr/local/bin/redis-server /redis.conf


The Kubernetes StatefulSet







A kubernetes StatefulSet (unlike a Deployment) is required to ensure that the pod host name remains the same between restarts. This allows the entrypoint.sh script (mentioned above) to replace the IP based on the (pod) host name.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-statefulset
spec:
  serviceName: redis-service
  podManagementPolicy: "Parallel"
  replicas: 6
  selector:
    matchLabels:
      configid: redis-container
  template:
    metadata:
      labels:
        configid: redis-container        
    spec:
      containers:
      - name: redis
        image: my-registry/my-redis:latest
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: redis-data
          mountPath: /data
          readOnly: false
        livenessProbe:
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 10
          exec:
            command:
              - /usr/local/bin/redis-cli
              - ping
        readinessProbe:
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 1
          initialDelaySeconds: 5
          periodSeconds: 10
          exec:
            command:
              - /usr/local/bin/redis-cli
              - ping
      volumes:
      - name: redis-data
        hostPath:
          path: /opt/redis


In this case 6 replicas are used, so we will have 3 masters and 3 slaves.
We also include liveness and readiness probes. These only check that the redis instance is alive, but do not check the cluster health (out of scope for this article).
The image specified as my-registry/my-redis should point to the docker image which was created in the previous step.




The Kubernetes Service









The service exposes 2 ports:

  • 6379 is used by clients connecting to the redis cluster
  • 16379 is used by the redis instances for the cluster management

apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  selector:
    configid: redis-container
  type: NodePort
  ports:
      - port: 6379
        targetPort: 6379
        name: clients
        protocol: TCP
        nodePort: 30002
      - port: 16379
        targetPort: 16379
        name: gossip
        protocol: TCP

Redis Cluster Initialization

The last step is to tell redis to create the cluster.
This can be done, for example, as part of a helm post install script.
I've used a javascript snippet that waits for all of the redis instances to start, and then configures the cluster using kubectl exec on the first redis pod.

const {exec} = require('child_process')

const redisPods = 6


async function kubectl(args) {
  return await new Promise((resolve, reject) => {
    const commandLine = `kubectl ${args}`
    exec(commandLine, (err, stdout, stderr) => {
      if (err) {
        reject(err)
        return
      }
      resolve(stdout)
    })
  })
}

async function getRedisPods() {
  const args = `get pods -l configid=redis-container -o jsonpath='{range .items[*]}{.status.podIP} {end}'`
  const stdout = await kubectl(args)
  return stdout.trim().split(' ')
}

async function executeClusterCreate(pods) {
  let redisNodes = ''
  pods.forEach(p => {
    redisNodes += ` ${p}:6379`
  })

  const command = 'exec redis-statefulset-0 ' +
    '-- redis-cli --cluster create --cluster-replicas 1 --cluster-yes ' +
    redisNodes

  const createResult = await kubectl(command)
  if (createResult.includes('[OK] All 16384 slots covered')) {
    return true
  }
  return false
}

async function sleep(time) {
  // simple promise based sleep helper, used while waiting for the pods to start
  await new Promise(resolve => setTimeout(resolve, time))
}

async function configureRedisCluster() {
  while (true) {
    const pods = await getRedisPods()

    if (pods.length !== redisPods) {
      await sleep(1000)
      continue
    }

    if (!await executeClusterCreate(pods)) {
      console.warn(`create cluster failed, retrying in a moment`)
      await sleep(1000)
      continue
    }

    return
  }
}

configureRedisCluster()


If you run this script as part of a helm post install hook job, you will need to grant the job permissions to get and list pods, and to exec into pods.

Summary

After all these steps, we have a running redis cluster, BUT the redis cluster is not transparent to its users. A user addressing one redis instance might get a MOVED redirection (see Moved Redirection in the redis cluster specification documentation), which means that another instance handles the related information, and it is up to the client to address the relevant instance.

To make this actually transparent, use a redis client library that handles these issues.
I have used the ioredis javascript library, which handles the MOVED redirection.
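A minimal usage sketch, assuming the client runs inside the kubernetes cluster and resolves the redis-service name defined above:

const Redis = require('ioredis')

// ioredis in cluster mode follows MOVED redirections transparently
const cluster = new Redis.Cluster([{host: 'redis-service', port: 6379}])

async function demo() {
  await cluster.set('my-key', 'my-value')
  console.log(await cluster.get('my-key'))
  cluster.disconnect()
}

demo()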

Monday, November 11, 2019

Using websocket on client and on server side

For many web applications, REST calls are the standard for client-server communication.
However, REST calls do not suit more complex interactions, for example:

  • The server should notify the client when something changes
  • Using sticky sessions in a load balanced environment (notice that you can use an ingress to enforce stickiness, but that forces the usage of an ingress, see this article)

A standard method to handle these requirements is using web sockets.


The client creates a socket to the server, and leaves it open. Then, the server and the client can send frames over the socket whenever they need. This has performance implications, as the server must maintain an open socket for each running client.

Implementing web sockets in javascript is easy. Let's review the implementation on the client side and on the server side. First, install the web socket client library:


npm install --save socket.io-client


The Client Side

On the main client page, import the web socket library

const socketIo = require('socket.io-client')

Next, upon page load, or any other event that requires establishing the web socket, connect to the server

const webSocket = socketIo.connect('http://my-server', {transports: ['websocket']})

To send a frame from the client over the web socket to the server, we specify the frame type, and the data

webSocket.emit('type1',{name:'Alice'})

And the last thing to do is to handle the events triggered by the web socket. These include both error handling and frames received from the server

webSocket.on('error', (e) => {
 console.error(e)
})
webSocket.on('disconnect', () => {
 console.log('web socket is disconnected')
})
webSocket.on('connect', () => {
 console.log('web socket is connected')
})
webSocket.on('type2', (dataFromServer) => {
 console.log('got type2 frame from the server', dataFromServer)
})


The Server Side

Create web socket listener on the server side
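
The server side uses the socket.io and express packages, so install them first:

npm install --save socket.io express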

import socketIo from 'socket.io'
import express from 'express'
import http from 'http'

const app = express()
const server = http.Server(app)
const webSocketServer = socketIo(server, {transports: ['websocket']})
server.listen(8080)

Next, handle the socket events:

webSocketServer.on('connection', socket => {
  console.log(`A user connected with socket ${socket.id}`)

  socket.on('type1', dataFromClient => {
    logger.debug('got type1 frame from the client', dataFromClient)
  })
  
  socket.on('disconnect', () => {
    logger.debug('user socket disconnected')
  })
  
})
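
The client above listens for 'type2' frames. To push such a frame from the server to a connected client, emit it on the socket from within the connection handler (or from any code holding a reference to the socket), for example:

socket.emit('type2', {name: 'Bob'})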

Summary

Javascript provides libraries that make websocket implementation easy, and it takes only a few minutes to use them. Notice that to view the actual frames, you can use the chrome debugger network tab; see the Debugging WebSocket answer.

To view more details about the web socket library, check the socket-io home page.

Sunday, November 3, 2019

Create GO based server using GoLand on ubuntu


Go is one of the more successful programming languages, allowing performance-oriented programming while still providing garbage collection (unlike C++).
In this post we'll create a new Go based http server on ubuntu 18.04, and use JetBrains' GoLand IDE for the project development. Following are the steps for creating the project.

Install GoLang

Start by updating ubuntu packages:

sudo apt-get update

Then, find the latest Go binary release at https://golang.org/dl/
and download it:

wget https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz

Extract and move it to /usr/local

tar -xvf go1.13.4.linux-amd64.tar.gz
sudo mv go /usr/local

Install JetBrains GoLand



Download the lastest GoLand from: https://www.jetbrains.com/go/download/#section=linux

Unzip it, and run GoLand from the extracted folder:
./bin/goland.sh

Note:
If you're working on ubuntu, use GoLand to create a desktop entry. This can be done only after the "Create New Project" step. To create the desktop entry, use the GoLand menu:
Tools, Create Desktop Entry...

Create New Project

Click on File, New project, and select Go Modules.
Type the location for the project, for example:

/home/my-user/projects/my-server

Click on the plus sign next to the GOROOT, select local, and select the go installation:

/usr/local/go

And click on the Create button.

The project explorer is displayed, and the go.mod file is created.
The go.mod includes the module name and the go version. It is actually created by GoLand running the `go mod init` command for us.
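For reference, the generated go.mod at this point contains only the module name and the go version (the module name below assumes the project folder name used above):

module my-server

go 1.13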

Create the Server

Create new file named main.go, and enter the following code:

package main

import (
   "net/http"
   "strconv"

   "k8s.io/klog"
)

func main() {
   klog.Info("Server starting")

   mux := http.NewServeMux()
   mux.HandleFunc("/", helloHandler)
   mux.HandleFunc("/square", squareHandler)

   server := &http.Server{
      Addr:    ":8080",
      Handler: mux,
   }

   err := server.ListenAndServe()
   if err != nil {
      klog.Fatalf("Failed to start, error: %v", err)
   }
}

func squareHandler(w http.ResponseWriter, r *http.Request) {
   n, err := strconv.Atoi(r.URL.Query().Get("n"))
   if err != nil {
      w.WriteHeader(http.StatusInternalServerError)
      w.Write([]byte("conversion error"))
      return
   }

   w.Write([]byte("square is " + strconv.Itoa(n*n)))
}
func helloHandler(w http.ResponseWriter, r *http.Request) {
   w.Write([]byte("The server is up"))
}

This server starts on port 8080, and has two handlers.
The / path replies with "The server is up".
The /square path gets a query parameter named n, and returns the square of the number.

Run the server

Right click on the main.go file, and select:
Run `go build main.go`

This will both run the `go get` command to update the dependencies in the go.mod file, and run the actual server.
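
Once the server is running locally, the handlers can be exercised with curl; the line after each command is the expected response:

curl "http://localhost:8080/"
The server is up

curl "http://localhost:8080/square?n=5"
square is 25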

Summary

This post includes the first steps in the creation of a Go based server, and running it using GoLand.
In a future post we'll examine dockerization of the build, and tests.