Monday, March 30, 2020

NGINX Performance Tuning





The Stress Test



Last week I ran some stress tests for a product running on Kubernetes.
We have several nodes in the Kubernetes cluster, and the general product design was as follows.





The product uses NGINX as a reverse proxy in front of multiple micro services.
The micro services use Redis as the persistence layer.

Using a stress client application, which simulates multiple browsers, I started sending transactions through the NGINX server.

In previous stress tests, one of the micro services usually stood out in its CPU/memory consumption, and hence the tuning focus was directed at that specific micro service.

However, in this case, I got an unusual result. The first findings were:

  1. The transactions per second rate was much lower than expected.
  2. The CPU and memory consumption of all of the components was low.

This was a mysterious issue. What's holding the transactions back?

In a desperate move, I tried doubling the replicas for all of the deployments in the Kubernetes cluster: the NGINX, the Redis, and each of the micro services now had a doubled replica count.

This did not change the transactions per second rate at all! 

After some investigation, I located the culprit.
It was NGINX.

Well, actually, NGINX was doing exactly what it was configured to do.
The point is that to get really high performance out of NGINX, it must be carefully configured.

In this post, I will review the changes that I made to the NGINX configuration, and how to make sure that the configuration changes indeed work as expected.


NGINX and the Ephemeral Ports


Each TCP connection to a TCP listener creates an entry in the ports table.
For example, upon startup of the NGINX listener, running netstat -a would show the following entries:


Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN     


Once a browser has connected to port 8080 of the NGINX listener, the following entry is added (again, this is shown using netstat -a):


Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN     
tcp        0      0 192.168.100.11:8080     192.168.100.35:51416    ESTABLISHED


However, once the client disconnects, the connection is left in a TIME_WAIT state.
The goal of this state is to ignore late packets appearing on the same combination of the quadruple:
SRC host & port, DST host & port.


Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN     
tcp        0      0 192.168.100.11:8080     192.168.100.35:51416    TIME_WAIT


The TIME_WAIT state is kept for about 2 minutes (depending on the OS configuration), and then the entry is removed.


Creation of a new entry in the ports table has several implications:

  1. The ports table is limited in size. The actual limit depends on the OS configuration, but it is in the order of 30K entries. Once the limit is reached, new connections are rejected by the server.
  2. Creating a new socket has a high cost. A rough estimation is 10ms, but it varies depending on the packet round-trip time, and on the server resources.
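
To get a feeling for this cost, the following Go sketch (illustrative only; the address and request count are made up) sends the same requests twice: once forcing a new TCP connection per request, and once reusing connections. On a typical setup the second loop is noticeably faster, and netstat shows far fewer TIME_WAIT entries afterwards.


package main

import (
   "fmt"
   "io"
   "net/http"
   "time"
)

// sendRequests sends count GET requests to url using the given client,
// and returns the total elapsed time.
func sendRequests(client *http.Client, url string, count int) time.Duration {
   start := time.Now()
   for i := 0; i < count; i++ {
      resp, err := client.Get(url)
      if err != nil {
         panic(err)
      }
      // The body must be drained and closed, otherwise the connection
      // cannot be reused even when keep-alive is enabled.
      io.Copy(io.Discard, resp.Body)
      resp.Body.Close()
   }
   return time.Since(start)
}

func main() {
   url := "http://192.168.100.11:8080/" // hypothetical NGINX address
   count := 1000

   // A client that closes the TCP connection after every request.
   noKeepAlive := &http.Client{
      Transport: &http.Transport{DisableKeepAlives: true},
   }

   // A client that reuses idle connections (keep-alive).
   keepAlive := &http.Client{
      Transport: &http.Transport{MaxIdleConnsPerHost: 10},
   }

   fmt.Println("without keep-alive:", sendRequests(noKeepAlive, url, count))
   fmt.Println("with keep-alive:   ", sendRequests(keepAlive, url, count))
}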


How to Investigate NGINX Pod Port Table



When connecting to the NGINX pod, using the command:


kubectl exec -it NGINX_POD_NAME bash


And then trying to run the netstat command, you will find that netstat is not installed.
To install it, create your own Docker image, based on the NGINX image, and install net-tools:


FROM nginx
RUN apt-get update
RUN apt-get install -y net-tools


And then you can connect, and run netstat in the NGINX pod.


Configuring NGINX 



To prevent the accumulation of TIME_WAIT connections, we need to configure NGINX to use keep-alive, that is, to ask NGINX to reuse connections instead of closing them, whenever possible.




Notice that this should be configured both for the incoming client connections, and for the outgoing connections to the micro services.


First, for the client side connection, add the following configuration in nginx.conf:


http {
    ...
    keepalive_requests 1000000;
    keepalive_timeout 300s;


Notice that most real clients would probably send only a few requests per connection, so the keepalive_requests limit seems extremely high. The reason for the high limit is to enable the stress client to reuse the same connection as much as possible, while simulating multiple browser clients.
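
For reference, this is roughly how a Go-based stress client can be written so that it actually reuses connections while simulating many browsers (a hedged sketch; the URL, concurrency, and request count are invented). Note that Go's default HTTP transport keeps only 2 idle connections per host, so the idle pool must be sized to match the number of simulated browsers:


package main

import (
   "io"
   "net/http"
   "sync"
)

func main() {
   const browsers = 200            // number of simulated browsers
   const requestsPerBrowser = 1000 // requests each browser sends
   url := "http://192.168.100.11:8080/my-redirect/micro-service-1" // hypothetical

   // One shared client; the idle pool is sized to the concurrency, so every
   // simulated browser can keep its connection alive between requests.
   client := &http.Client{
      Transport: &http.Transport{
         MaxIdleConns:        browsers,
         MaxIdleConnsPerHost: browsers,
      },
   }

   var wg sync.WaitGroup
   for i := 0; i < browsers; i++ {
      wg.Add(1)
      go func() {
         defer wg.Done()
         for j := 0; j < requestsPerBrowser; j++ {
            resp, err := client.Get(url)
            if err != nil {
               continue // a real stress client would count the errors
            }
            io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
            resp.Body.Close()
         }
      }()
   }
   wg.Wait()
}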


Next, for the outgoing proxy side to the micro-services, add the following configuration in nginx.conf:


http {
  
  ...
  
  upstream micro-service-1 {
    server micro-service-1.example.com;
    keepalive 1000;
    keepalive_requests 1000000;
    keepalive_timeout 300s;
  }

  ...
  
  server {

  ...
  
    location /my-redirect/micro-service-1 {
      proxy_pass http://micro-service-1;

      proxy_set_header      Connection "";
      proxy_ignore_headers    "Cache-Control" "Expires";
      proxy_buffers         32 4m;
      proxy_busy_buffers_size   25m;
      proxy_buffer_size       512k;
      client_max_body_size    10m;
      client_body_buffer_size   4m;
      proxy_connect_timeout     300;
      proxy_read_timeout      300;
      proxy_send_timeout      300;
      proxy_intercept_errors    off;
      proxy_http_version      1.1;
    }

  ...
  


Note the following:

  • We configure the micro service as an upstream entry, where we can configure it to use keepalive.
  • We configure the relevant URL location to proxy to the upstream entry, and to use HTTP 1.1. In addition, we add some tuning configuration for buffer sizes and timeouts.
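
One more point worth checking is the other side of these upstream connections. If a micro service is itself an HTTP server written in Go (for example), it should keep idle connections open at least as long as NGINX's keepalive_timeout, otherwise the micro service will close the idle connections first and the reuse will be lost. A hedged sketch (the port, handler, and timeout value are placeholders):


package main

import (
   "net/http"
   "time"
)

func main() {
   mux := http.NewServeMux()
   mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
      w.Write([]byte("ok"))
   })

   server := &http.Server{
      Addr:    ":8080",
      Handler: mux,
      // Keep idle (keep-alive) connections open longer than NGINX's
      // keepalive_timeout of 300s, so NGINX decides when to close them.
      IdleTimeout: 330 * time.Second,
   }

   if err := server.ListenAndServe(); err != nil {
      panic(err)
   }
}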



Special Case - Auth Request



NGINX includes support for an auth request. This means that upon a client request to access a specific URL, NGINX first forwards the request to another micro service, to approve the access. However, as specified in this bug, NGINX disables the keep-alive configuration for the auth subrequest.

To overcome this, send the auth request from NGINX JavaScript (njs). See the example in the following nginx.conf configuration:


# load_module must appear in the main context, outside the http block
load_module modules/ngx_http_js_module.so;
load_module modules/ngx_stream_js_module.so;

http {
  
  js_include /etc/nginx/auth.js;
  
  ...
  
  upstream authenticate-micro-service-1 {
   server authenticate-micro-service-1.example.com;
   keepalive 1000;
   keepalive_requests 1000000;
   keepalive_timeout 300s;
  }

  ...
  
  server {

   ...

   location /authenticate-micro-service-1 {
    proxy_pass http://authenticate-micro-service-1;
    proxy_http_version  1.1;
   }

   location /javascript-auth {
    internal;
    js_content validate;
    proxy_http_version  1.1;
   }

   location /secured-access {
    auth_request   /javascript-auth;
    proxy_http_version  1.1;
   }

   ...


Where auth.js is the following:


function validate(r){

    function done(res){
        var validateResponse = JSON.parse(res.responseBody);
        if (validateResponse.ok) {
            r.return(200);
        } else {
            r.return(500);
        }
    }

    r.subrequest("/authenticate-micro-service-1", r.variables.args, done);
}
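
For completeness, the authenticate micro service behind /authenticate-micro-service-1 only has to return a JSON body that the validate function above can parse. A minimal Go sketch of such a service (the ok field matches the JavaScript above; the token check is a made-up placeholder for the real validation logic):


package main

import (
   "encoding/json"
   "net/http"
)

type validateResponse struct {
   Ok bool `json:"ok"`
}

func main() {
   http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
      // Hypothetical check: approve the request only if a token argument exists.
      approved := r.URL.Query().Get("token") != ""

      w.Header().Set("Content-Type", "application/json")
      json.NewEncoder(w).Encode(validateResponse{Ok: approved})
   })

   if err := http.ListenAndServe(":8080", nil); err != nil {
      panic(err)
   }
}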


Final Notes



After the NGINX configuration changes, things started moving: the transactions per second rate increased, and I was able to start tuning the micro services, which were now getting a high transactions rate. But this is another story, for another post.

Liked this post? Leave a comment.

Thursday, March 26, 2020

RPC Server using GO and RabbitMQ


You have implemented a product based on Kubernetes and GO.
Great.

But what if you have an algorithm that has a heavy computation step?
Suppose you can split the computation step to run in parallel?

One method is to use a local thread pool, and send to each thread a part of the computation.
Would it solve the problem?
Well, yes, as long as the tasks can be completed in a timely manner within a single machine.
However, in case you need more than a single machine, a local thread pool is not your solution.

To distribute the computation step tasks among multiple servers, you can use worker pods in Kubernetes. Each pod consumes tasks of the computation step, and hence the work is distributed among multiple machines.

So the basic idea is as follows:




In this post I will present an implementation of this RPC server using GO and RabbitMQ. For details of deploying RabbitMQ on Kubernetes, see this post.

The RPC server design is based on a client that runs in a central location. The client produces computational task requests that are sent to a requests queue. The RPC server consumers read the requests, process them, and then return the results to the responses queue. The central client waits for all of the responses from the RPC server consumers. This is presented in the following diagram:





Notice that the RPC server consumer can run as a single pod, or as multiple pods to enable distribution of the work among multiple servers.


Example of Usage

The following code is an example of usage of the RPC client/server.
It can be run with either "server" or "client" as its argument.

When started as a client, it produces 2 tasks, and waits for the 2 results on the responses queue.

When started as a server, it waits for tasks, and simulates consuming and processing them, returning a result per task.


package main
import (
   "github.com/alonana/rpc/rpc"
   "github.com/streadway/amqp"
   "os"
   "time"
)

func main() {
   rabbitConnection, err := amqp.Dial("amqp://user:pass@127.0.0.1:5672/")
   if err != nil {
      panic(err)
   }

   if os.Args[1] == "server" {
      runServer(rabbitConnection)
   } else {
      runClient(rabbitConnection)
   }
}

func runClient(connection *amqp.Connection) {
   c := rpc.CreateClient(connection)
   c.Start()
   c.Produce([]byte("my-first-request"))
   c.Produce([]byte("my-second-request"))
   responses := c.Wait()
   for i := 0; i < len(responses); i++ {
      response := string(responses[i])
      println(response)
   }
}

func runServer(connection *amqp.Connection) {
   s := rpc.CreateServer(connection, consumer)
   s.Start()
   time.Sleep(time.Hour)
}

func consumer(input rpc.Bytes) rpc.Bytes {
   request := string(input)
   return []byte("I got the request " + request)
}


The Consume Queue


Both the client and the server consume messages from a queue. The server consumes the messages from the requests queue, and the client consumes the messages from the responses queue. Hence, shared code is included to handle consuming messages from a queue.



package rpc
import (
   "github.com/pkg/errors"
   "github.com/streadway/amqp"
   "sync"
)

type Bytes []byte
type QueueConsumer func(Bytes)

type ConsumeQueue struct {
   messages       <-chan amqp.Delivery
   channel        *amqp.Channel
   closeChannel   chan bool
   closeWaitGroup sync.WaitGroup
   consumer       QueueConsumer
   consumed       int
   queueName      string
}

func createConsumeQueue(connection *amqp.Connection, queueName string, consumer QueueConsumer) *ConsumeQueue {
   q := ConsumeQueue{
      consumer:     consumer,
      closeChannel: make(chan bool),
      queueName:    queueName,
   }

   channel, err := connection.Channel()
   if err != nil {
      panic(err)
   }

   err = channel.Qos(1, 0, false)
   if err != nil {
      panic(err)
   }
   q.channel = channel
   _, err = channel.QueueDeclare(queueName, false, false, false, false, nil)
   if err != nil {
      panic(err)
   }

   q.messages, err = q.channel.Consume(q.queueName, q.queueName, false, false, false, false, nil)
   if err != nil {
      panic(err)
   }

   q.closeWaitGroup.Add(1)
   go q.consumeLoop()
   return &q
}

func (q *ConsumeQueue) stop() error {
   err := q.channel.Cancel(q.queueName, false)
   if err != nil {
      return errors.Errorf("cancel consumer failed: %v", err)
   }
   q.closeChannel <- true
   q.closeWaitGroup.Wait()

   q.channel.Close()
   return nil
}

func (q *ConsumeQueue) consumeLoop() {
   for {
      select {
      case <-q.closeChannel:
         q.closeWaitGroup.Done()
         return
      case message := <-q.messages:
         q.consume(message)
      }
   }
}

func (q *ConsumeQueue) consume(message amqp.Delivery) {
   q.consumed++
   q.consumer(message.Body)

   err := message.Ack(false)
   if err != nil {
      panic(err)
   }
}
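
The queue names used by both sides (queueNameRequests and queueNameResponses) are not shown in the snippets; they are assumed to be simple package-level constants, for example:


package rpc

const (
   queueNameRequests  = "rpc-requests"
   queueNameResponses = "rpc-responses"
)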


The Server


The server runs forever, waiting for messages in the requests queue, running the consumer to process them, and returning the results to the responses queue.


package rpc
import (
   "github.com/streadway/amqp"
)

type ServerConsumer func(Bytes) Bytes
type Server struct {
   rabbitConnection *amqp.Connection
   rabbitChannel    *amqp.Channel
   consumeQueue     *ConsumeQueue
   consumer         ServerConsumer
}

func CreateServer(rabbitConnection *amqp.Connection, consumer ServerConsumer) *Server {
   return &Server{
      rabbitConnection: rabbitConnection,
      consumer:         consumer,
   }
}

func (s *Server) Start() {
   var err error
   channel, err := s.rabbitConnection.Channel()
   if err != nil {
      panic(err)
   }

   err = channel.Qos(1, 0, false)
   if err != nil {
      panic(err)
   }
   s.rabbitChannel = channel
   s.consumeQueue = createConsumeQueue(s.rabbitConnection, queueNameRequests, s.consumerWrapper)
}

func (s *Server) consumerWrapper(data Bytes) {
   result := s.consumer(data)

   publishing := amqp.Publishing{
      ContentType: "text/plain",
      Body:        result,
   }

   err := s.rabbitChannel.Publish("", queueNameResponses, false, false, publishing)
   if err != nil {
      panic(err)
   }
}


The Client


Lastly, the client sends the produced tasks to the requests queue, and waits until all responses are received on the responses queue.


package rpc
import (
   "github.com/streadway/amqp"
   "math/rand"
   "strconv"
)

type Client struct {
   connection    *amqp.Connection
   rabbitChannel *amqp.Channel
   consumeQueue  *ConsumeQueue
   requests      int
   channel       chan Bytes
}

func CreateClient(connection *amqp.Connection) *Client {

   return &Client{
      connection: connection,
      channel:    make(chan Bytes),
   }
}

func (c *Client) Start() {
   var err error
   channel, err := c.connection.Channel()
   if err != nil {
      panic(err)
   }

   err = channel.Qos(1, 0, false)
   if err != nil {
      panic(err)
   }
   c.rabbitChannel = channel
   c.consumeQueue = createConsumeQueue(c.connection, queueNameResponses, c.consumerWrapper)
}

func (c *Client) Produce(request Bytes) {
   publishing := amqp.Publishing{
      ContentType: "text/plain",
      Body:        request,
   }

   c.requests++
   err := c.rabbitChannel.Publish("", queueNameRequests, false, false, publishing)
   if err != nil {
      panic(err)
   }
}

func (c *Client) Wait() []Bytes {
   var bytesResponses []Bytes
   for i := 0; i < c.requests; i++ {
      response := <-c.channel
      bytesResponses = append(bytesResponses, response)
   }

   c.rabbitChannel.Close()

   err := c.consumeQueue.stop()
   if err != nil {
      panic(err)
   }

   return bytesResponses
}

func (c *Client) consumerWrapper(data Bytes) {
   c.channel <- data
}


Final Notes


In this post we have presented a method to distribute work among multiple machines using GO and RabbitMQ. This is only a basic implementation example. If needed, it should be expanded to handle errors on both the client and the server side.
In addition, each request could include a unique ID to ensure that we're only handling the current iteration request/response.
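
For example, here is a minimal sketch of that idea (this helper is not part of the code above; the CorrelationId field belongs to the amqp library's Publishing struct):


package rpc

import "github.com/streadway/amqp"

// produceWithID attaches a unique iteration ID to a request, using the AMQP
// CorrelationId field. The server would copy the CorrelationId onto its
// response, and the client would drop responses whose CorrelationId does not
// match the current iteration.
func (c *Client) produceWithID(request Bytes, iterationID string) error {
   c.requests++
   publishing := amqp.Publishing{
      ContentType:   "text/plain",
      CorrelationId: iterationID,
      Body:          request,
   }
   return c.rabbitChannel.Publish("", queueNameRequests, false, false, publishing)
}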

Liked this post? Leave a comment.


Thursday, March 19, 2020

Deploy RabbitMQ Cluster on Kubernetes




In this post we will deploy a minimal RabbitMQ cluster on Kubernetes.

RabbitMQ is a lightweight message broker widely used in the industry.
I have used it as an RPC server in our product, but you can use it for many other patterns, such as a simple queue, work queues, and publish/subscribe.

In this post, we will not use persistence for the RabbitMQ, but if needed, you can add it using Kubernetes Persistent Volumes.

To deploy RabbitMQ there are 2 steps:

  1. Prepare a RabbitMQ image
  2. Create kubernetes resources

Prepare a RabbitMQ Image


The image is based on the following Dockerfile:

FROM rabbitmq:3.8.3
COPY files /
ENTRYPOINT /entrypoint.sh


As you can see, we're only enriching the base RabbitMQ image with several additional files from the files subfolder.

The added files include entrypoint.sh, which is, as its name implies, the Docker image entry point.

entrypoint.sh:

#!/usr/bin/env bash
echo myClusterPassword > /var/lib/rabbitmq/.erlang.cookie
chmod 700 /var/lib/rabbitmq/.erlang.cookie

/init.sh &

exec rabbitmq-server


The entry point creates the erlang cookie, which is used for cluster intercommunication.
Then, it runs the init script in the background, and then runs the RabbitMQ server.

The init.sh script purpose is to configure the RabbitMQ once it is started.

init.sh:

#!/usr/bin/env bash

until rabbitmqctl --erlang-cookie myClusterPassword node_health_check > /tmp/rabbit_health_check 2>&1
do
    sleep 1
done

rabbitmqctl --erlang-cookie myClusterPassword set_policy ha-all "" '{"ha-mode":"all", "ha-sync-mode": "automatic"}'

rabbitmqctl add_user myUser myPassword
rabbitmqctl set_user_tags myUser administrator
rabbitmqctl set_permissions -p / myUser ".*" ".*" ".*"

The init script has the following logic:

  • Wait for the RabbitMQ to start
  • Configure auto sync of all queues in the cluster
  • Add myUser as an administrator

An additional file is added: probe.sh. This will later be used for the Kubernetes probes.

probe.sh

#!/usr/bin/env bash
rabbitmqctl status


While we could call the status command directly from the Kubernetes StatefulSet, in most cases you would find yourself adding additional logic to the probe; hence, we are using a dedicated script.

Create Kubernetes Resources


To deploy RabbitMQ on Kubernetes, we will use the RabbitMQ Kubernetes peer discovery plugin. This plugin requires permissions to access the endpoints of the RabbitMQ service.

We will start by configuring a service account with the required permissions.


---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rabbit-role
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rabbit-role-binding
subjects:
  - kind: ServiceAccount
    name: rabbit-service-account
    namespace: default
roleRef:
  kind: ClusterRole
  name: rabbit-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbit-service-account
  namespace: default


Next we'll create a ConfigMap with the RabbitMQ configuration files: the rabbitmq.conf, and the enabled_plugins.

Notice the RabbitMQ configuration file includes the name of the discovery service, which we will create next.

apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbit-config
data:
  rabbitmq.conf: |-
    cluster_formation.peer_discovery_backend  = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.address_type = hostname
    cluster_formation.k8s.hostname_suffix = .rabbit-discovery-service.default.svc.cluster.local
    cluster_formation.k8s.service_name = rabbit-discovery-service
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    queue_master_locator=min-masters
  enabled_plugins: |-
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].


The RabbitMQ deployment requires two services.
The standard service is used to access the RabbitMQ instance.
In addition, we're adding a headless service - the discovery service. This service is used by the RabbitMQ Kubernetes plugin to find the instances.


---
apiVersion: v1
kind: Service
metadata:
  name: rabbit-service
spec:
  selector:
    configid: rabbit-container
  type: NodePort
  ports:
    - port: 80
      targetPort: 5672
      name: amqp
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: rabbit-discovery-service
spec:
  selector:
    configid: rabbit-container
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
      - port: 5672
        name: amqp
      - port: 15672
        name: http


The last resource is the StatefulSet. We will create 3 pods for the service, to ensure a minimal valid cluster quorum.
Notice that the StatefulSet includes liveness and readiness probes pointing to the probe.sh script that we've previously created.


apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbit-statefulset
spec:
  serviceName: rabbit-discovery-service
  replicas: 3
  selector:
    matchLabels:
      configid: rabbit-container
  template:
    metadata:
      labels:
        configid: rabbit-container        
    spec:
      serviceAccountName: rabbit-service-account
      terminationGracePeriodSeconds: 10
      containers:
        - name: rabbit
          image: replace-with-our-rabbit-image-name:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: RABBITMQ_USE_LONGNAME
              value: "true"
            - name: RABBITMQ_NODENAME
              value: "rabbit@$(HOSTNAME).rabbit-discovery-service.default.svc.cluster.local"
          volumeMounts:
            - name: rabbit-config
              mountPath: /etc/rabbitmq/enabled_plugins
              subPath: enabled_plugins
            - name: rabbit-config
              mountPath: /etc/rabbitmq/rabbitmq.conf
              subPath: rabbitmq.conf
          livenessProbe:
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
            initialDelaySeconds: 15
            periodSeconds: 10
            exec:
              command:
                - /bin/sh
                - -c
                - /probe.sh
          readinessProbe:
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 1
            initialDelaySeconds: 15
            periodSeconds: 10
            exec:
              command:
                - /bin/sh
                - -c
                - /probe.sh
      volumes:
        - name: rabbit-config
          configMap:
            name: rabbit-config



Final Notes


In this post we've deployed a RabbitMQ cluster on Kubernetes. You can connect to the running pods and execute management commands, such as:

kubectl exec -it rabbit-statefulset-0 rabbitmqctl list_queues
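
An application running inside the cluster (for example, the RPC client/server from the previous post) can connect through the rabbit-service service, using the myUser user created by init.sh. A minimal Go sketch, assuming the default namespace:


package main

import "github.com/streadway/amqp"

func main() {
   // rabbit-service exposes port 80, which is forwarded to the AMQP port 5672.
   connection, err := amqp.Dial("amqp://myUser:myPassword@rabbit-service.default.svc.cluster.local:80/")
   if err != nil {
      panic(err)
   }
   defer connection.Close()

   // The connection can now be passed, for example, to rpc.CreateClient or rpc.CreateServer.
}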

That's all for this post.
Liked it? Leave a comment.


Wednesday, March 11, 2020

Go Scheduler





What if you want to have a scheduler in a GO application?
At first glance it seems that you have a builtin solution that is provided by the GO tickers, as shown in the GO by example page.

But this solution is naive.
It does not prevent parallel runs of the task, so if the scheduled task takes longer than the schedule interval, another invocation of the task will run in parallel.

For example:
Task X is configured to run every 10 seconds.
But due to a system stress, it runs for 15 seconds.
The result would be as following:

00:00:00 - The application starts
00:00:10 - Task X starts in thread #1. We have 1 running task
00:00:20 - Task X starts in thread #2. We have 2 running tasks
00:00:25 - Task X completes in thread #1. We have 1 running task
00:00:30 - Task X starts in thread #3. We have 2 running tasks

In real life, parallel invocation of tasks would cause slower performance, and hence might cause accumulation of more than just 2 tasks.

Example for Scheduler Usage


The solution is to use a scheduler that starts a new task only after the previous one has completed.
A simple usage of such a scheduler is:


package main
import (
   "fmt"
   "math/rand"
   "scheduler-demo/scheduler"
   "time"
)

func main() {
   s := scheduler.Create(myScheduledItem, time.Second)
   s.Start()

   time.Sleep(10 * time.Second)
   s.Stop()
}

func myScheduledItem() {
   format := "2006-01-02 15:04:05.000"
   sleepTime := time.Duration(rand.Intn(2000)) * time.Millisecond
   fmt.Printf("start running at %v, sleeping %v\n", time.Now().UTC().Format(format), sleepTime)
   time.Sleep(sleepTime)
   fmt.Printf("end running at %v\n===\n", time.Now().UTC().Format(format))
}


This usage schedules a task to run every second, but the task lasts up to 2 seconds.


The Scheduler Library


The scheduler should avoid parallel invocations of this task.
This is handled in the scheduler library:


package scheduler
import (
   "sync"
   "time"
)

type Worker func()


type Scheduler struct {
   interval    time.Duration
   waitGroup   sync.WaitGroup
   worker      Worker
   stopChannel chan int
}

func Create(worker Worker, interval time.Duration) *Scheduler {
   return &Scheduler{
      worker:      worker,
      stopChannel: make(chan int, 1),
      interval:    interval,
   }
}

func (s *Scheduler) Start() {
   s.waitGroup.Add(1)
   go s.schedule()
}

func (s *Scheduler) Stop() {
   s.stopChannel <- 0
   s.waitGroup.Wait()
}

func (s *Scheduler) schedule() {
   timer := time.NewTimer(time.Nanosecond)
   for {
      select {
      case <-timer.C:
         s.runWorker(timer)

      case <-s.stopChannel:
         s.waitGroup.Done()
         return
      }
   }
}

func (s *Scheduler) runWorker(timer *time.Timer) {
   startTime := time.Now()
   s.worker()
   passedTime := time.Now().Sub(startTime)
   waitTime := s.interval - passedTime
   if waitTime < 0 {
      waitTime = time.Nanosecond
   }
   timer.Reset(waitTime)
}


We can see that the scheduler starts the next task only after the current task is complete.
It will try to align the next run to 1 second after the last start time, but in case of delays, it will wait longer. An example of a run output is below.


start running at 2020-03-12 05:17:17.684, sleeping 81ms
end running at 2020-03-12 05:17:17.765
===
start running at 2020-03-12 05:17:18.684, sleeping 1.887s
end running at 2020-03-12 05:17:20.571
===
start running at 2020-03-12 05:17:20.571, sleeping 1.847s
end running at 2020-03-12 05:17:22.418
===
start running at 2020-03-12 05:17:22.418, sleeping 59ms
end running at 2020-03-12 05:17:22.478
===
start running at 2020-03-12 05:17:23.419, sleeping 81ms
end running at 2020-03-12 05:17:23.500
===
start running at 2020-03-12 05:17:24.419, sleeping 1.318s
end running at 2020-03-12 05:17:25.737
===
start running at 2020-03-12 05:17:25.737, sleeping 425ms
end running at 2020-03-12 05:17:26.162
===
start running at 2020-03-12 05:17:26.737, sleeping 540ms
end running at 2020-03-12 05:17:27.277
===


Final Notes


We have shown a simple scheduler, which avoids parallel invocations of tasks.
The scheduler is only a simple example, and can be further improved by allowing parallel tasks, while limiting the number of parallel tasks, as sketched below.
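
Here is a minimal sketch of such an improvement (illustrative only, not part of the library above): it starts the worker on every tick, but never runs more than maxParallel invocations at the same time, and simply skips a tick when all slots are busy.


package scheduler

import (
   "sync"
   "time"
)

type BoundedScheduler struct {
   interval  time.Duration
   worker    Worker
   slots     chan struct{}
   stop      chan struct{}
   waitGroup sync.WaitGroup
}

func CreateBounded(worker Worker, interval time.Duration, maxParallel int) *BoundedScheduler {
   return &BoundedScheduler{
      interval: interval,
      worker:   worker,
      slots:    make(chan struct{}, maxParallel),
      stop:     make(chan struct{}),
   }
}

func (s *BoundedScheduler) Start() {
   go func() {
      ticker := time.NewTicker(s.interval)
      defer ticker.Stop()
      for {
         select {
         case <-s.stop:
            return
         case <-ticker.C:
            select {
            case s.slots <- struct{}{}: // a free slot exists - run the worker
               s.waitGroup.Add(1)
               go func() {
                  defer s.waitGroup.Done()
                  defer func() { <-s.slots }() // release the slot when done
                  s.worker()
               }()
            default:
               // all slots are busy - skip this tick
            }
         }
      }
   }()
}

func (s *BoundedScheduler) Stop() {
   close(s.stop)
   s.waitGroup.Wait()
}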





Wednesday, March 4, 2020

Using CSS Transitions for Animation


In this post we will present a simple method to animate items on the GUI using CSS Transitions.
A CSS transition (see the specification on the w3schools site) allows smoothly changing CSS properties over a specified period. To show a demo, we will use the demo application that I've previously described in the React Bootstrap with NavBar React Router and Styled Components post.

We have the following page:





And we want that, upon a click, the width of the text will change from 100% to 50%, to this view:



But we want it to animate the changes, as shown in this video:





To enable this animation, we first update the page to send a property to the Text element, indicating whether it is using the full width, or only half of it.
Notice that we use a React hook to flip the state of the text.


import React, {useState} from 'react'
import {Text} from './style'
function Page1() {
  const [full, setFull] = useState(true)
  return (
    <Text full={full} onClick={() => setFull(!full)}>
      Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore
      magna aliqua. Mauris augue neque gravida in fermentum et sollicitudin. Viverra ipsum nunc aliquet bibendum enim
      facilisis gravida. Neque gravida in fermentum et sollicitudin ac orci phasellus egestas. Tortor consequat id porta
      nibh. Risus sed vulputate odio ut. Mollis nunc sed id semper risus in hendrerit gravida. Nulla facilisi cras
      fermentum odio. Feugiat pretium nibh ipsum consequat nisl vel pretium lectus quam. Mi ipsum faucibus vitae
      aliquet. A lacus vestibulum sed arcu non. Laoreet id donec ultrices tincidunt arcu non sodales neque. Condimentum
      mattis pellentesque id nibh tortor id aliquet. Mauris pellentesque pulvinar pellentesque habitant morbi. At
      elementum eu facilisis sed odio.
    </Text>
  )
}

export default Page1


Next, we update the style.js to get the full property, and set the width to 50% or 100% based on the value of the full property.
This is performed in the styled components file: style.js


import styled from 'styled-components'
export const Text = styled.div`
  height: 100%;  
  color: aqua;  
  background-color: darkkhaki;
  width: ${props => props.full ? '100%' : '50%'};
  transition: width 1s;
`


The real magic is in the CSS transition:

transition: width 1s;

where we specify that the width should be updated smoothly during a one second period.

The transition can act on many other CSS properties.
For example: it can act on transform property.
The following would cause an element to turn sideways in a smooth animated method:

transform: rotate(${props => props.expanded ? '90deg' : '0deg'});
transition: transform 400ms;

Final Words


CSS transition is a great method to animate items in the GUI, and it is a good substitute for CSS animation. Combined with React and styled-components, it makes animation child's play.

Liked this post? Leave a comment...