Wednesday, June 30, 2021

Using WebPack to Create Separate Distribution for Internet Explorer




In this post we will use webpack to create two distributions: the first for Internet Explorer, and the second for all other browsers. This enables us to reduce the distribution size for most of the end users, while still supplying a compatible distribution for IE users.


First we need to update the build to run webpack twice: once for the default users, and once for the IE users.


package.json

...
"scripts": {
"build": "webpack --config webpack.config.default.js && webpack --config webpack.config.ie.js",
...



The IE webpack configuration includes the following:


webpack.config.ie.js

const path = require('path')

const config = {

  ... // skipping non relevant configuration

  entry: {
    index: ['core-js/stable', path.resolve(__dirname, './index.js')],
  },
  output: {
    path: path.resolve(__dirname, 'build/ie/'),
  },
}

config.module.rules[1].use.options.presets = [
  [
    '@babel/preset-env',
    {
      'debug': false,
      'targets': {
        'ie': '11',
      },
      'useBuiltIns': 'usage',
      'corejs': {
        'version': 3,
      },
    },
  ],
]

module.exports = config


For more details on IE transpilation, see this post.



The default webpack configuration includes:


webpack.config.default.js

const path = require('path')

const config = {

  ... // skipping non relevant configuration

  entry: {
    index: [path.resolve(__dirname, './index.js')],
  },
  output: {
    path: path.resolve(__dirname, 'build/default'),
  },
}

config.module.rules[1].use.options.presets = ['@babel/preset-env']

module.exports = config
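Since the two configurations differ only in the entry, the output, and the Babel presets, most of the setup can be shared. A minimal sketch, assuming a hypothetical webpack.config.base.js that both files require and then specialize (the loader layout here is illustrative, not taken from the actual project):

// webpack.config.base.js - hypothetical shared base
module.exports = () => ({
  mode: 'production',
  module: {
    rules: [
      // rule 0: e.g. styles (not relevant here)
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
      // rule 1: the babel-loader rule whose presets each target overrides
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: { presets: [] },
        },
      },
    ],
  },
})

Each target config would then start with const config = require('./webpack.config.base')(), and set its own entry, output, and presets as shown above. Using a factory function avoids sharing one mutated config object between the two builds.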


Now, running the build process creates two folders: build/ie for the IE distribution, and build/default for all other browsers.


If NGINX is used as the web server for the distribution, it can be configured to serve the relevant distribution folder; see this post for details.



Configure NGINX to use Different HTML for Internet Explorer Browsers




In this post we will configure NGINX to use a different set of HTML files for different browsers. This is required when serving different JavaScript files per browser, especially if we want to support Internet Explorer browsers, which according to some global statistics are still used by ~2% of end users.


To use a different set of files, we configure the root folder according to the user agent string.


nginx.conf

...

map $http_user_agent $root {
  default "/app/public/default";
  "~rv:11\." "/app/public/ie";
}

server {
  listen 8080;

...



The regular expression ~rv:11\. will match IE11 browsers; see the list of IE user agents here.

Then we can use the $root variable to configure the root folder of a location:


nginx.conf

...

location / {
  root $root;

  ...
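To verify the mapping, we can send requests with different User-Agent headers (assuming NGINX listens locally on port 8080, as configured above):

# served from /app/public/default
curl -s -A "Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0" http://localhost:8080/

# served from /app/public/ie (matches ~rv:11\.)
curl -s -A "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" http://localhost:8080/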



This technique can be further used for other user agents, not only for IE.



Final Note


If you're using webpack to create the distribution, see this post for a method of creating a separate distribution for IE.


Wednesday, June 23, 2021

Java Hibernate Second Level Cache


Using a Hibernate second-level cache is useful for monolith applications that have a single process accessing an RDBMS such as PostgreSQL.

In case of multiple JVMs, the second-level cache might be an issue: each JVM has its own second-level cache, and this can cause inconsistencies.

First, let's add the dependencies to the pom.xml:



<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>5.5.0.Final</version>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-ehcache</artifactId>
  <version>5.5.0.Final</version>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-jcache</artifactId>
  <version>5.5.0.Final</version>
</dependency>



Next, configure the cache behavior in the application.properties:



spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.use_query_cache=true
spring.jpa.properties.hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
spring.jpa.properties.javax.persistence.sharedCache.mode=ENABLE_SELECTIVE



For each entity that should be cached, add the following annotations:



@Entity
@Table
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class MyEntity {


Queries built manually, e.g. with the Criteria API, should add a hint to enable cache usage:




CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
CriteriaQuery<MyEntity> criteriaQuery = criteriaBuilder.createQuery(MyEntity.class);
criteriaQuery.from(MyEntity.class);
TypedQuery<MyEntity> query = entityManager.createQuery(criteriaQuery);

// mark the query as cacheable in the second level query cache
query.setHint("org.hibernate.cacheable", Boolean.TRUE);
List<MyEntity> result = query.getResultList();
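If the project uses Spring Data JPA repositories (as the spring.jpa.* properties above suggest), the same hint can be applied declaratively. A minimal sketch, with a hypothetical repository for MyEntity:

import java.util.List;

import javax.persistence.QueryHint;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.QueryHints;

public interface MyEntityRepository extends JpaRepository<MyEntity, Long> {

    // mark the generated query as cacheable in the second level query cache
    @Override
    @QueryHints({@QueryHint(name = "org.hibernate.cacheable", value = "true")})
    List<MyEntity> findAll();
}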



See more details in the following blog: https://www.baeldung.com/hibernate-second-level-cache



Wednesday, June 16, 2021

Google SRE - Applied Changes


Lately I have been reading the book Building Secure and Reliable Systems:



This book is very relevant to my current project, which started about a year ago and is now starting to acquire customers, hence we are looking into the principles of stability and security. These terms are not new to me, but this book has an interesting point of view, combining security and stability in the same methodology.


Our project uses a Kubernetes cloud-based platform, and based on the methods presented in the book, I've made the following changes to the project.



Kibana View Only User


We are using Kibana and ElasticSearch to view the application status. We had previously used ElasticSearch in a non-secured mode, counting on our authentication service to block unauthorized users, and on the Kubernetes ingress to encrypt the traffic. But we have found that some of our users should only use the dashboards, and we do not want them to be able to update the Kibana dashboards. Hence we started with TLS configuration for ElasticSearch, which later allowed us to add a view-only user in Kibana. This can be automated using the following script:



#!/usr/bin/env bash

AUTH_ARG="-u elastic:mypassword"

function createRole(){
  cat << EOF > ./input.json
{
  "elasticsearch": {
    "cluster": [],
    "indices": [
      {
        "names": ["*"],
        "privileges": ["read"],
        "allow_restricted_indices": false
      }
    ]
  },
  "kibana": [
    {
      "base": ["read"],
      "spaces": ["default"]
    }
  ]
}
EOF
  curl ${AUTH_ARG} -s -X PUT -H 'kbn-xsrf: true' -H 'Content-Type: application/json' 'http://localhost:5601/api/security/role/my_viewer_role' --data-binary "@input.json"
}

function createUser(){
  cat << EOF > ./input.json
{
  "password": "myviewpassword",
  "roles": ["my_viewer_role"]
}
EOF
  curl ${AUTH_ARG} -s -k -X POST -H 'Content-Type: application/json' ${ELASTICSEARCH_HOSTS}/_security/user/myview --data-binary "@input.json"
}

createRole
createUser



Pods Anti Affinity


We want to ensure that if a Kubernetes node crashes, we will not have any of our microservices fully down. This is done by starting at least 2 replicas of each critical microservice, and using an anti-affinity rule to ask Kubernetes not to schedule 2 pods of the same microservice on the same node. Notice that in this case, since the number of Kubernetes nodes is not very high, we set the anti-affinity as a preference, not as an enforcement.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      configid: my-container
  template:
    metadata:
      labels:
        configid: my-container
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchExpressions:
                    - key: configid
                      operator: In
                      values:
                        - my-container
      containers:
        - name: ...



Jenkins Merge Job


For some time, to update the production environment, we had manually merged the Git dev branch into the master branch. As we wanted to reduce manual action items and the risk of mistakes, we added a Jenkins job to automate the merge.



#!/usr/bin/env bash

# refresh the local dev branch
git checkout -B dev
git pull

# refresh the local master branch
git checkout -B master
git pull

# merge dev into master, and push the result
git merge dev -m "merge by jenkins"
git push --set-upstream ....



Auditing


Auditing enables us to find problems and malicious actions. We have added the following audit records (a minimal sketch of the login audit appears after the list):

  • Audit log of successful and failed logins to the authentication service
  • Audit of the user name and the change performed, in the management service
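A minimal sketch of the login audit, assuming SLF4J and hypothetical names:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoginAuditor {

    // dedicated audit logger, which can be routed to its own appender
    private static final Logger AUDIT = LoggerFactory.getLogger("audit");

    public void onLoginAttempt(String userName, boolean success) {
        // record both successful and failed logins with the acting user name
        AUDIT.info("login user={} success={}", userName, success);
    }
}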



Fail-Safe


We have had some cases where our Redis DB was down. While we strive to prevent such cases, we cannot always fix all corner cases. We have decided that in case our DB is down, we will change the mode of the services to "pass-through", meaning that the services will enable any possible operation without accessing the DB. This leaves the system only partly functioning, but that is better than a fully non-operational system. A sketch of the idea appears below.
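A minimal sketch of the pass-through idea, assuming a Jedis-style Redis client and hypothetical names; on any Redis failure we fail open instead of failing the request:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import redis.clients.jedis.Jedis;

public class OperationGuard {

    private static final Logger LOGGER = LoggerFactory.getLogger(OperationGuard.class);

    private final Jedis redis;

    public OperationGuard(Jedis redis) {
        this.redis = redis;
    }

    public boolean isOperationAllowed(String key) {
        try {
            String value = redis.get(key);
            return value == null || Boolean.parseBoolean(value);
        } catch (RuntimeException e) {
            // Redis is down - pass-through mode: allow the operation,
            // keeping the system partly functional
            LOGGER.warn("Redis unavailable, switching to pass-through mode", e);
            return true;
        }
    }
}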


Final Note


These are only the first steps toward a more secure and stable product. I will keep updating in the future as I progress through the book and apply more changes.


Wednesday, June 9, 2021

Sending Parameters to Helm Named Template




Somehow, sending parameters to a Helm named template is poorly documented, so I give some clear examples below. These are based on the Helm template functions documented on this page.


Assuming we have the following sample Helm named template:


{{- define "myNamedTemplate" }}
- name: {{ .Values.myParam1 }}
  value: {{ .Values.myValue1 | quote }}
- name: {{ .Values.myParam2 }}
  value: {{ .Values.myValue2 | quote }}
{{- end }}
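For reference, when the template is included with the top-level context, .Values inside it simply resolves to the chart's own values:

{{- include "myNamedTemplate" . }}

The examples below build a replacement for that context, so the template can be fed arbitrary parameters.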


To send flat parameters to a Helm named template, you should use dict, for example:


{{- $myValues := dict "myParam1" "myValue1" "myParam2" "myValue2" -}}
{{- $myParameters := dict "Values" $myValues -}}
{{- include "myNamedTemplate" $myParameters }}


If hierarchical parameters are required, you can use the following:


{{- $myValues := dict -}}
{{- $myInnerValues := dict "innerVariable" "innerValue" -}}
{{- $_ := set $myValues "inner" $myInnerValues -}}
{{- $myParameters := dict "Values" $myValues -}}
{{- include "myNamedTemplate" $myParameters }}



In case the named template uses .Values.global.VARIABLE, add the global dictionary as well:


{{- $myValues := dict "myParam1" "myValue1" "myParam2" "myValue2" -}}
{{- $_ := set $myValues "global" .Values.global -}}
{{- $myParameters := dict "Values" $myValues -}}
{{- include "myNamedTemplate" $myParameters }}


To override existing parameters only for a specific invocation, use the following:


{{- $myValues := .Values | deepCopy -}}
{{- $_ := set $myValues "myOverrideParam" "overridenValue" -}}
{{- $myParameters := dict "Values" $myValues -}}
{{- include "myNamedTemplate" $myParameters }}


To override just a single global variable, use the following (deepCopy also copies the global dictionary, so the override does not leak back into the chart's shared values; note that set takes the dictionary, the key, and the value):


{{- $myValues := .Values | deepCopy -}}
{{- $_ := set $myValues.global "OVERRIDE_GLOBAL_PARAMETER_NAME" "OVERRIDE_GLOBAL_PARAMETER_VALUE" -}}
{{- $myParameters := dict "Values" $myValues -}}
{{- include "myNamedTemplate" $myParameters }}




Monday, June 7, 2021

Configure TLS for ElasticSearch in a Kubernetes Deployment



In this post I will review the steps to configure TLS for existing ElasticSearch, Kibana, and FileBeat in a Kubernetes deployment. All of these steps are performed automatically as part of a helm chart deployment. The only input, supplied to a pre-install helm hook, is the ElasticSearch credentials.


Some of the items in this post are derived from the official ElasticSearch helm chart.


The following updates were done:

  • Pre-install hook - create the TLS certificates, and create the credentials secret
  • ElasticSearch statefulset - add environment variables to enable TLS
  • Kibana deployment - update kibana.yml to use TLS
  • FileBeat daemonset - update filebeat.yaml to use TLS

These steps are described below.


Pre-Install Hook


The first change is to create a job that runs as a pre-install helm hook. The job runs a script that receives as input the required ElasticSearch credentials, and creates a Kubernetes secret. The script also generates a Certificate Authority (CA), and uses it to sign a key for the ElasticSearch server. The CA is used by the ElasticSearch server, as well as by the others: Kibana and FileBeat.



#!/usr/bin/env bash

credentialsSecretName=elastic-credentials
mkdir /certificates
# This is the name of the kubernetes service for elastic search
master=elasticsearch-rest-service

echo "===> Create CA"
elasticsearch-certutil ca \
  --out /certificates/elastic-stack-ca.p12 \
  --pass ''

echo "===> Create certificate"
elasticsearch-certutil cert \
  --name ${master} \
  --dns ${master} \
  --ca /certificates/elastic-stack-ca.p12 \
  --pass '' \
  --ca-pass '' \
  --out /certificates/elastic-certificates.p12

echo "===> Convert certificate"
openssl pkcs12 -nodes -passin pass:'' -in /certificates/elastic-certificates.p12 -out /certificates/elastic-certificate.pem
openssl x509 -outform der -in /certificates/elastic-certificate.pem -out /certificates/elastic-certificate.crt

echo "===> Extract CA chain"
openssl pkcs12 -passin pass:'' -in /certificates/elastic-certificates.p12 -cacerts -nokeys -out /certificates/elastic-ca-chain.pem

echo "===> Create CA secret"
kubectl create secret generic elastic-certificates --from-file=/certificates/elastic-certificates.p12

echo "===> Create CA chain secret"
kubectl create secret generic elastic-ca-chain --from-file=/certificates/elastic-ca-chain.pem

echo "===> Create certificate pem secret"
kubectl create secret generic elastic-certificate-pem --from-file=/certificates/elastic-certificate.pem

echo "===> Create certificate crt secret"
kubectl create secret generic elastic-certificate-crt --from-file=/certificates/elastic-certificate.crt

echo "===> Create credentials secret"
kubectl create secret generic ${credentialsSecretName} --from-literal=password=${ELASTICSEARCH_PASSWORD} --from-literal=username=${ELASTICSEARCH_USER}


ElasticSearch


The ElasticSearch statefulset should be configured to use TLS, so we add the certificates secret as a volume:


volumes:
  - name: elastic-certificates
    secret:
      secretName: elastic-certificates


and map the volume to the ElasticSearch container:


volumeMounts:
  - name: elastic-certificates
    mountPath: /usr/share/elasticsearch/config/certs


Next, we add the TLS enabling environment variables:


- name: xpack.security.enabled
  value: "true"
- name: xpack.security.transport.ssl.enabled
  value: "true"
- name: xpack.security.transport.ssl.verification_mode
  value: "certificate"
- name: xpack.security.transport.ssl.keystore.path
  value: "/usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- name: xpack.security.transport.ssl.truststore.path
  value: "/usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- name: xpack.security.http.ssl.enabled
  value: "true"
- name: xpack.security.http.ssl.truststore.path
  value: "/usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- name: xpack.security.http.ssl.keystore.path
  value: "/usr/share/elasticsearch/config/certs/elastic-certificates.p12"


And the credentials environment variables:


- name: ELASTIC_PASSWORD
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: password
- name: ELASTIC_USERNAME
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: username


Kibana


For the Kibana deployment, we should add the CA secret volume


volumes:
  - name: elastic-ca-chain
    secret:
      secretName: elastic-ca-chain


and mount it to the Kibana container


volumeMounts:
  - name: elastic-ca-chain
    mountPath: /ssl-certificates/elastic-ca-chain.pem
    subPath: elastic-ca-chain.pem


Then add environment variables for the ElasticSearch credentials


- name: ELASTICSEARCH_USERNAME
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: username
- name: ELASTICSEARCH_PASSWORD
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: password


and update kibana.yml with the TLS configuration


elasticsearch.hosts: ["https://elasticsearch-rest-service"]
elasticsearch.username: "${ELASTICSEARCH_USERNAME}"
elasticsearch.password: "${ELASTICSEARCH_PASSWORD}"
elasticsearch.ssl.certificateAuthorities: [ "/ssl-certificates/elastic-ca-chain.pem" ]
xpack.monitoring.elasticsearch.ssl.verificationMode: "certificate"
server.host: "0.0.0.0"


FileBeat


In the FileBeat daemonset we should add the CA secret volume


volumes:
  - name: elastic-ca-chain
    secret:
      secretName: elastic-ca-chain


and mount it to the FileBeat container


volumeMounts:
  - name: elastic-ca-chain
    mountPath: /ssl-certificates/elastic-ca-chain.pem
    subPath: elastic-ca-chain.pem


Then add environment variables for the ElasticSearch credentials


- name: ELASTICSEARCH_USERNAME
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: username
- name: ELASTICSEARCH_PASSWORD
  valueFrom:
    secretKeyRef:
      name: elastic-credentials
      key: password



And update filebeat.yaml with the TLS configuration


output.elasticsearch:
  hosts: ['https://elasticsearch-rest-service']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  ssl:
    certificate_authorities: ["/ssl-certificates/elastic-ca-chain.pem"]
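Once deployed, the TLS setup can be verified from any pod that mounts the CA chain, using the standard ElasticSearch REST port (9200) and the service name from the snippets above:

curl --cacert /ssl-certificates/elastic-ca-chain.pem \
  -u "${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD}" \
  "https://elasticsearch-rest-service:9200/_cluster/health?pretty"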








Wednesday, June 2, 2021

Using NGINX LUA for Customized Auth Requests




In this post we will use NGINX LUA to create a customized auth request. In our case we will send a POST request to the auth server, and check the response header to decide whether to allow access.


To create an NGINX with LUA support, see my previous post.

Once we have a LUA-enabled NGINX, we can configure a protected location:


nginx.conf

server {
  listen 8080;
  server_name "~.*";

  location / {
    access_by_lua_block {
      if require("myscript").validate("http://my-auth.com") then
        return
      end

      ngx.exit(ngx.HTTP_FORBIDDEN)
    }

    proxy_pass "http://my-server.com";
  }
}



The LUA script in access_by_lua_block runs our validation; if the result is false, it blocks access to the proxied server.


The myscript.lua file should reside in /my-lua/myscript.lua (as this is the folder we have specified in the nginx.conf). Since we want to send a request to my-auth.com, we should also include the resty.http library files under /my-lua/resty/. The 3 files can be downloaded from here.


myscript.lua

local myscript = {}

local function validate(authUrl)
  local httpc = require('resty.http').new()

  -- forward the original request headers, plus extras for the auth server
  local headers = ngx.req.get_headers()
  headers['Content-Type'] = 'application/json'
  headers['originalHost'] = ngx.var.host

  local res, err = httpc:request_uri(authUrl, {
    method = 'POST',
    body = '{}',
    headers = headers,
  })

  if not res then
    ngx.log(ngx.STDERR, 'auth request failed: ', err)
    return false
  end

  local status = res.status
  local body = res.body
  if status ~= 200 then
    ngx.log(ngx.ERR, 'auth request returned error: ', status, ' ', body)
    return false
  end

  -- the auth server signals its decision in the "allow" response header
  local allow = res.headers['allow']
  if allow == 'false' then
    return false
  end

  return true
end

myscript.validate = validate

return myscript



The validate function sends a POST request to the auth server: it adds some headers to the request, sends it, and checks the response header to decide whether the auth server allowed or blocked the access.
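To exercise the flow end to end (the hostnames here are the placeholders from the configuration above), send a request through NGINX:

# expect the proxied response when the auth server allows the request,
# and HTTP 403 when it responds with the header allow: false
curl -i http://localhost:8080/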



Create a LUA enabled NGINX docker image




In this post we will review the steps to create a LUA-enabled NGINX docker image.


NGINX LUA is JIT-based scripting that enables highly customizable behavior.


The docker image is simply based on the standard OpenResty image:


Dockerfile:

FROM openresty/openresty:1.19.3.1-8-centos7

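The image can then be built and run, mounting our nginx.conf over the default one (the configuration path below is the standard OpenResty location):

docker build -t nginx-lua .
docker run --rm -p 8080:8080 \
  -v $(pwd)/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf \
  -v $(pwd)/my-lua:/my-lua \
  nginx-lua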


In the nginx.conf, as part of the main context, we will need to enable the LUA JIT:



pcre_jit on;



and as part of the http context of the nginx.conf, we will need to add the folder where our LUA scripts reside:



http {
  lua_package_path '/usr/local/lib/lua/?.lua;/my-lua/?.lua;;';



Now, the NGINX is ready for LUA scripting.
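A quick way to verify this is to add a test location with an inline LUA block; a request to /lua-test should then return the message:

location /lua-test {
  content_by_lua_block {
    ngx.say("LUA is working")
  }
}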


To see an example of LUA scripts, check the next post: using NGINX LUA for customized auth requests.