
Sunday, August 30, 2020

Using ILM for Filebeat Indices Retention

 


In this post we will review how to configure Filebeat to use ILM for log retention management.

Filebeat collects log file records and sends them directly to ElasticSearch. ElasticSearch saves the log records in an index. As time passes, the index size grows, and unless handled, we will eventually run out of disk space.

To solve this issue, we use the ElasticSearch Index Lifecycle Management, the ILM.


Using ILM for Dummies


First, Filebeat indexes the log messages into an alias, and not directly into an ElasticSearch index. ElasticSearch routes the alias to a newly created index.




Once a specific threshold, such as time or size, is breached, ElasticSearch triggers a redirection of the alias to a new index.



Eventually, after another time threshold is breached, ElasticSearch deletes the old index.


Configuring Filebeat


To configure this behavior, we update the filebeat.yml file with the ILM configuration:


setup.ilm:
  enabled: true
  policy_name: "filebeat"
  rollover_alias: "filebeat"
  pattern: "{now/d}-000001"
  policy_file: /etc/ilm.json
  overwrite: true


And we add the ilm.json file to configure the ILM policy:


{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1h"
          }
        }
      },
      "delete": {
        "min_age": "4h",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}


In this example, we roll over (change the alias target index) after one hour, and delete the index after four hours.


Final Notes

In this example we have shown triggering the ILM rollover using a time threshold, but ILM can also be configured with a size trigger. In addition, ILM can have more phases beyond hot and delete. See the ElasticSearch documentation for more details.
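For example, a hot phase rollover triggered by either age or size could look like this (the 50gb value here is just an illustration):

"rollover": {
  "max_age": "1h",
  "max_size": "50gb"
}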


Useful info: to check the ILM status of each index, use the ElasticSearch ILM explain API.
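For example, assuming ElasticSearch listens on localhost:9200:

curl "localhost:9200/filebeat-*/_ilm/explain?pretty"

The response details, per index, the current phase, the action in progress, and the age of the index.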

Wednesday, August 26, 2020

Authentication Service

 

In this post we will review a general authentication service: why is it required, and how is it used.

In the old times, when an organization had only one or very few internal systems, each system had its own users repository. The organization's IT team configured the users on each system, using that system's management tools.





As technology penetrated into wider areas, the number of internal systems in the organization increased, and so upon each change in the organization's employees, the IT team had to manually update the users repository in each system.

Not only this, but the complexity of user authentication increased, as a means to protect from malicious account takeover. New factors were added to the authentication requirements, such as multi-factor authentication using an SMS or an authenticator application on the employee's mobile phone. In addition, password strength restrictions, such as minimum length, password history, and password complexity, had to be enforced.

Due to these changes and requirements, the organization's IT could no longer manage the organization's internal systems in a timely fashion. The solution to this issue was the authentication service.

The authentication service is a central system where the users repository is held. It manages the security requirements for the users: password strength, multi-factor authentication, and more.




The internal systems no longer hold a users repository; instead, they access the authentication service to verify the user login.


The following diagram explains the steps of a user login when an authentication service is used.





Step 1: Login without a Token. The end user accesses system A.

Step 2: Redirect. System A identifies that there is no token in the request, and hence redirects the end user to the authentication service.

Step 3: Login. The end user logs into the authentication service. The authentication service enforces all required security measures, such as MFA and password strength.

Step 4: Token. Once the login succeeds, the authentication service generates a token for the user. This token is kept in the authentication service for a preconfigured limited time. The token is returned to the end user.

Step 5: Login with a Token. The end user accesses system A, but this time, the token is sent in the request header.

Step 6: Validate Token. System A validates the token with the authentication service.

Step 7: User. The authentication service finds the token that was previously saved, and returns the actual user name to system A.
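To make the flow concrete, here is a minimal Go sketch of the token check in system A (steps 2, 5, 6 and 7). The authentication service address and its /login and /validate endpoints are hypothetical placeholders:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// Hypothetical authentication service address and endpoints.
const authService = "https://auth.example.com"

func requireToken(next func(w http.ResponseWriter, r *http.Request, user string)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		token := r.Header.Get("Authorization")
		if token == "" {
			// Step 2: no token in the request, redirect to the authentication service.
			http.Redirect(w, r, authService+"/login", http.StatusFound)
			return
		}
		// Step 6: validate the token with the authentication service.
		resp, err := http.Get(authService + "/validate?token=" + url.QueryEscape(token))
		if err != nil {
			http.Error(w, "authentication service unavailable", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			http.Error(w, "invalid token", http.StatusUnauthorized)
			return
		}
		// Step 7: the authentication service returns the actual user name.
		user, _ := io.ReadAll(resp.Body)
		next(w, r, string(user))
	}
}

func main() {
	http.HandleFunc("/", requireToken(func(w http.ResponseWriter, r *http.Request, user string) {
		fmt.Fprintf(w, "hello, %s", user) // system A now knows the actual user
	}))
	http.ListenAndServe(":8000", nil)
}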



Final Notes

In this post we have reviewed the operation of a general authentication service. 

Once an authentication service is integrated into the organization, the next step is to provide a single sign-on capability. This is possible since a token that was created as part of a login to one organizational system can be used by another system, without a need to log in again.



Wednesday, August 12, 2020

Monitoring Redis Commands

 


In this post we will review a method of monitoring Redis command usage, using a small Go based utility.

When you have a complex micro-services system that heavily uses a Redis cluster, you will need, sooner or later, to monitor the Redis usage. Each micro-service uses a subset of the many Redis APIs: GET, SET, HINCRBY, SMEMBERS, and many more. You might find yourself lost.

Which keys are the cause of most of the CPU stress on Redis?

Which commands are used to access each key?


Redis does not leave you empty handed, as it provides a great tool: The Redis MONITOR command.

Running the Redis MONITOR command prints out every command that is processed by the Redis server, for example:


1597239055.206779 [0 192.168.100.9:58908] "hexists" "book-0001-2020-08-12T13:30:54" "AddAccount"
1597239055.213690 [0 192.168.100.9:58908] "set" "author-0001" "1597239054"
1597239056.202888 [0 192.168.100.9:58908] "hexists" "book-0001-2020-08-12T13:30:55" "AddAccount"
1597239056.206297 [0 192.168.100.9:58908] "set" "customer-0001" "1597239055"
...


That's a good start, but the real value is only when you aggregate this output to create a summary report of the APIs and keys.

A common Redis usage pattern is to have a prefix for all keys that serve the same purpose. For example, if we keep book information in Redis, we will probably use keys in the format:

BOOK-<id>

So, let's group all Redis API calls based on the key prefix. First, let's configure the key prefixes:


var prefixes = []string{
	"store-",
	"book-",
	"customer-",
	"author-",
}
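The fragments in this post belong to a single-file main package. For completeness, the file starts with the following header and imports, placed above the prefixes variable:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"regexp"
	"sort"
	"strings"
	"time"
)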


Next, we read the output from the Redis MONITOR command, and analyze it:


// Stats accumulates the accesses to a single key prefix.
type Stats struct {
	access   int            // total number of commands that touched this prefix
	commands map[string]int // per-command counters, e.g. "hgetall" -> 84258
}

type Scanner struct {
	keys      map[string]Stats
	lines     int
	lastPrint time.Time
}


func main() {
	s := Scanner{
		keys:      make(map[string]Stats),
		lastPrint: time.Now(),
	}
	s.Scan()
}

// Scan reads the MONITOR output from STDIN, line by line, until EOF.
func (s *Scanner) Scan() {
	reader := bufio.NewReader(os.Stdin)
	for {
		line, err := reader.ReadString('\n')
		if err != nil {
			if err == io.EOF {
				break
			}
			panic(err)
		}
		err = s.parseLine(line)
		if err != nil {
			panic(err)
		}
	}

	s.summary()
}

// The MONITOR line format: timestamp [db client-address] "command" "key" ...
// Compiled once at startup; compiling per line would dominate the CPU.
var lineRegex = regexp.MustCompile("\\S+ \\[.*] \"([a-z]*)\" ?\"?(.*)?\"?")

func (s *Scanner) parseLine(line string) error {
	if s.lines%10000 == 0 {
		fmt.Printf("%v commands processed\r\n", s.lines)
	}
	if time.Since(s.lastPrint) > time.Minute {
		s.lastPrint = time.Now()
		s.summary()
	}
	s.lines++
	// Skip the "OK" acknowledgement printed when MONITOR starts, and empty lines.
	if strings.HasPrefix(line, "OK") || strings.TrimSpace(line) == "" {
		return nil
	}
	located := lineRegex.FindStringSubmatch(line)
	if located == nil {
		return fmt.Errorf("unable to parse line: %v", line)
	}
	command := located[1]
	key := located[2]

	prefix := getPrefix(key)

	// Update the per-prefix statistics.
	stats := s.keys[prefix]
	stats.access++
	if stats.commands == nil {
		stats.commands = make(map[string]int)
	}
	stats.commands[command]++
	s.keys[prefix] = stats
	return nil
}

func getPrefix(key string) string {
	for _, prefix := range prefixes {
		if strings.HasPrefix(key, prefix) {
			return prefix
		}
	}
	// Panic on purpose: an unknown prefix means the prefixes list is incomplete.
	panic(fmt.Errorf("missing prefix for key %v", key))
}

func (s *Scanner) summary() {
	output := "\n\nSummary\n\r\n"

	total := 0
	for _, stats := range s.keys {
		total += stats.access
	}

	// Sort the prefixes by access count, descending.
	sortedKeys := make([]string, 0)
	for key := range s.keys {
		sortedKeys = append(sortedKeys, key)
	}
	sort.Slice(sortedKeys, func(i, j int) bool {
		return s.keys[sortedKeys[i]].access > s.keys[sortedKeys[j]].access
	})

	for _, key := range sortedKeys {
		stats := s.keys[key]
		percent := 100 * stats.access / total
		output += fmt.Sprintf("%v %v%% %v\r\n", key, percent, stats.commands)
	}
	// Use Print, not Printf: the report contains % characters that Printf
	// would treat as formatting verbs.
	fmt.Print(output)
}


To run the utility, we redirect the output of the Redis MONITOR command to the utility's STDIN:


kubectl exec -it redis-statefulset-0 -- redis-cli monitor | ./redismonitor


And we get a report of the accesses to each key prefix, with the list of APIs used:


book- 31% map[exists:14548 hgetall:84258 hincrby:41242 hset:43647]
store- 10% map[hgetall:29097 hincrby:29096]
customer- 9% map[hdel:14548 hgetall:26694 hset:14548]
...



Final Notes


Using the report, we can see which keys are used the most, and which Redis APIs are used to access each key. Next, we can jump into the related code that accesses these keys, and review it.

A small change in the code that accesses the top keys can have a high impact on the product footprint, both on the Redis performance and on the application itself.

Monday, August 10, 2020

Monitoring NGINX on Kubernetes using Prometheus

 


In this post we will review how to monitor an NGINX server on Kubernetes using Prometheus and Grafana.


In general, Prometheus monitors any Kubernetes deployment that specifies the set of relevant Prometheus annotations (see below). Once Prometheus marks a pod for scraping, it sends requests to the pod, and updates the results in its time series database.


The problem is that NGINX and Prometheus speak different languages. 


NGINX uses the stub status page, which upon access returns the following output:


root@nginx-deployment-64bf95d447-gs65z:/# curl 127.0.0.1:8080/stub_status
Active connections: 2 
server accepts handled requests
 2132 2132 100501 
Reading: 0 Writing: 1 Waiting: 1 


while Prometheus expects its own format:


root@nginx-deployment-64bf95d447-gs65z:/# curl 127.0.0.1:9113/metrics
# HELP nginx_connections_accepted Accepted client connections
# TYPE nginx_connections_accepted counter
nginx_connections_accepted 2174
# HELP nginx_connections_active Active client connections
# TYPE nginx_connections_active gauge
nginx_connections_active 1


To enable Prometheus scraping for NGINX, we use an exporter that provides the NGINX statistics in the Prometheus format. We will create a deployment that includes 2 containers: the NGINX container, and the exporter container. The exporter samples NGINX to get the updated statistics, and returns them to Prometheus in its expected format.


Let's start with the NGINX configuration:


apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |-
    user nginx;
    worker_processes 10;

    error_log /dev/stdout warn;
    pid /var/run/nginx.pid;

    events {
      worker_connections 10240;
    }

    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      server {
        listen 8080;
        server_name localhost;

        location /stub_status {
          allow 127.0.0.1;
          deny all;
          stub_status on;
        }

        location / {
          return 200 'NGINX is alive';
        }
      }
    }



We have added the /stub_status location to the NGINX configuration file, which exposes the NGINX statistics. In addition, we have blocked access to this location from all IPs, except for the localhost IP.


Next, we create the deployment:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      configid: nginx
  template:
    metadata:
      labels:
        configid: nginx
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "9113"
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
        - name: exporter
          image: nginx/nginx-prometheus-exporter:0.8.0
          args:
            - -nginx.scrape-uri=http://127.0.0.1:8080/stub_status
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config


Notice some details of the deployment configuration:

  • Using the annotations, we mark the deployment to be scraped by Prometheus, but we specify port 9113, which is not the NGINX port, but the exporter port.

  • We include 2 containers: NGINX and the exporter.

  • The exporter is configured to access the stub_status page that NGINX exposes.


Now we can visualize this in Grafana.
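For example, a Grafana panel can chart the active connections gauge directly, or derive a request rate from the exporter's counters with a PromQL query (metric names as exposed by the exporter; verify them in your Prometheus GUI):

nginx_connections_active
rate(nginx_http_requests_total[5m])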



Final Notes


We have reviewed NGINX monitoring using Prometheus scraping and an exporter. Additional metrics are available from the exporter. Use the Prometheus GUI to display all of them, including metrics such as the number of active connections, NGINX liveness, and more.


Wednesday, August 5, 2020

Guidelines for a Redis Auto Scale Operator on Kubernetes




In this post we will review the design of a Redis auto scale operator on Kubernetes.


At this time, I will not include code in this post, only the guidelines.
I hope that in the future I will have time to create an open source project based on the operator I've implemented for a specific product. But still, if you need to create a Redis operator, this is a great starting point.

Before digging into this article, I recommend reading the other posts I've written about Redis.

The Redis auto scale operator monitors the Redis cluster and the application.
It keeps track of the actual number of Redis nodes and the required number of Redis nodes, and drives the first toward the second.

A Redis cluster is based on a group of master nodes that use sharding to handle the data.
Each master usually has one or more slave nodes to share the read load, and to take control in case the master is down. A common practice is to use master-slave pairs. Note that the actual master-slave recovery is handled entirely by Redis itself, while the Redis auto scale operator handles the addition and removal of master-slave pairs.

The Redis auto scale operator is scheduled to run at a constant interval, for example, every minute.
To avoid acting too frequently, the Redis auto scale operator does not perform any action if it starts soon after the last action; we use a "grace period" of 3 minutes after a change of the Redis cluster, before making an additional change.

Then the Redis auto scale operator performs two steps: Cluster Repair, and Cluster Scaling.


Step 1: Cluster Repair


The Redis cluster must be in a stable state before any scale operation is done. The Redis cluster might become unstable, for example, in case the Redis auto scale operator had problems during the last scale operation, or just due to Redis cluster internal problems.

Listed below are the items handled as part of the cluster repair.


Repair A: Delete redundant replicas


The Redis auto scale operator fetches the number of known Redis cluster nodes using the CLUSTER INFO CLI. Then it checks the number of replicas of the Redis Kubernetes StatefulSet.
In case the number of replicas is higher than the number of nodes, the redundant replicas are deleted by updating the StatefulSet replicas specification. Then the Redis auto scale operator waits for the pods to terminate.
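A minimal shell sketch of this repair, assuming a StatefulSet named redis-statefulset as in the earlier posts:

# number of nodes known to the Redis cluster
known=$(kubectl exec redis-statefulset-0 -- redis-cli cluster info | grep cluster_known_nodes | cut -d: -f2 | tr -d '[:space:]')
# number of replicas in the StatefulSet specification
replicas=$(kubectl get statefulset redis-statefulset -o jsonpath='{.spec.replicas}')
if [ "$replicas" -gt "$known" ]; then
  kubectl scale statefulset redis-statefulset --replicas="$known"
fi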


Repair B: Force stable for a migrating slot


In some cases, a slot is "stuck" in a migrating state. This is a Redis internal issue that occurs once in a while during a Redis cluster rebalance (see SETSLOT for details). The Redis cluster sets a slot as migrating from a source node to a destination node, but in case of problems, it might remain configured in the source node as migrating, while not configured as importing in the destination node.

The Redis auto scale operator runs the CLUSTER NODES CLI on each node, and looks for inconsistency in a slot state. In case it finds such a slot, it uses the CLUSTER SETSLOT slotId STABLE CLI to fix it.
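For example, to force a slot back to a stable state on the node that still reports it as migrating (the slot number and pod name are illustrative):

kubectl exec redis-statefulset-2 -- redis-cli cluster setslot 1234 stable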


Repair C: Delete odd node


The Redis cluster is composed of master-slave pairs. In case the number of known Redis nodes is odd, the Redis auto scale operator removes the last node.

In case the last node is a master, it first drains it using the REBALANCE --CLUSTER-WEIGHT nodeId=0 CLI.

Then it deletes the node using the DEL-NODE nodeId CLI.

Lastly, it updates the replicas specification in the StatefulSet, and waits for the pod to terminate.


Repair D: Balance Empty Master


The Redis auto scale operator uses the CLUSTER NODES CLI to find any Redis master node that has no slots assigned to it. In case it finds one, it runs the REBALANCE --CLUSTER-USE-EMPTY-MASTERS CLI.


Repair E: Delete Redundant PVCs


The Redis auto scale operator lists the Kubernetes PVCs (Persistent Volume Claims), and compares this list with the StatefulSet replicas count. In case a PVC is not used by any replica, it is deleted. The goal is to ensure that a node deleted due to scale down will start with an empty configuration upon a later scale up. Otherwise, the new node might reuse an old configuration, and will not be able to join the cluster.


Repair F: Convert double master pair


The Redis cluster is composed of master-slave pairs, which means that sequential pods in the StatefulSet should form a master-slave pair. In some cases, due to Redis internal problems, a pair might be converted into a master-master pair. In such a case, the Redis auto scale operator uses the REBALANCE --CLUSTER-WEIGHT nodeId=0 CLI to move the slots out of one of the masters, and then uses the CLUSTER REPLICATE CLI to convert it to a slave node.


Step 2: Cluster Scaling


Once the Redis cluster is stable, we can easily scale it.

First, we need to decide how to set the required number of Redis nodes.
We can use the average CPU usage among the Redis nodes; for example, in case the average is over 60% CPU, scale the cluster up. Another useful metric is the application load, for example, requests per second. We can decide that each master-slave pair can handle up to 1000 requests per second, and set the required number of nodes accordingly.
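A small Go sketch of the load-based calculation; the 1000 requests per second capacity per pair is an assumption to be tuned per product:

package main

import (
	"fmt"
	"math"
)

// pairCapacity is the assumed load a single master-slave pair can handle.
const pairCapacity = 1000.0 // requests per second

func requiredPairs(requestsPerSecond float64) int {
	pairs := int(math.Ceil(requestsPerSecond / pairCapacity))
	if pairs < 1 {
		pairs = 1 // never scale below a single pair
	}
	return pairs
}

func main() {
	fmt.Println(requiredPairs(3500)) // prints 4, meaning 8 Redis nodes
}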

Then we check the actual number of nodes using the CLUSTER INFO CLI, and scale up or down based on the required and actual number of nodes.


Scale up


To scale up, the Redis auto scale operator sets the replicas specification of the StatefulSet, and waits for the pods to start.

Then the Redis auto scale operator uses the ADD-NODE CLI and the ADD-NODE --CLUSTER-SLAVE CLI alternately, to add a master and a slave node.

Lastly, the Redis auto scale operator uses the REBALANCE --CLUSTER-USE-EMPTY-MASTERS CLI to move slots to the new nodes.
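A sketch of the scale up CLI sequence, with illustrative pod addresses:

# add a new master node, using an existing node as the entry point
redis-cli --cluster add-node 10.1.0.7:6379 10.1.0.1:6379
# add a slave for the new master
redis-cli --cluster add-node 10.1.0.8:6379 10.1.0.1:6379 --cluster-slave
# move slots to the new, still empty, master
redis-cli --cluster rebalance 10.1.0.1:6379 --cluster-use-empty-masters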


Scale Down


To scale down, the Redis auto scale operator uses the REBALANCE --CLUSTER-WEIGHT nodeId=0 CLI to drain the master node.

Then it deletes the master and slave nodes using the DEL-NODE nodeId CLI.

Lastly, it updates the replicas specification in the StatefulSet, waits for the pods to terminate, and deletes the PVCs used by these nodes.
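A sketch of the scale down CLI sequence; the node IDs, addresses, and PVC names are illustrative placeholders:

# drain the master, then remove the pair from the cluster
redis-cli --cluster rebalance 10.1.0.1:6379 --cluster-weight <masterNodeId>=0
redis-cli --cluster del-node 10.1.0.1:6379 <masterNodeId>
redis-cli --cluster del-node 10.1.0.1:6379 <slaveNodeId>
# shrink the StatefulSet, then remove the leftover PVCs
kubectl scale statefulset redis-statefulset --replicas=6
kubectl delete pvc data-redis-statefulset-6 data-redis-statefulset-7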


Final Notes


The major part of the Redis auto scale operator code is the cluster repair, and that is not surprising. Things always go wrong, and a good operator should handle anything it can without human intervention.