Tuesday, May 25, 2021

Intercept HTML Forms Submit



In this post we will find a robust method of intercepting every HTML form submit.

One method to achieve the interception is presented in this post, using addEventListener and preventDefault. The problem is that if other JavaScript code has also used addEventListener, there is no guarantee which event listener will run first. Also, there are cases where preventDefault does not stop the event from triggering other JavaScript code.
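
For reference, a minimal sketch of that listener-based approach could look like the following (onFormSubmit is a hypothetical handler standing in for our custom logic; submit events bubble, so a single listener on the document is enough):

document.addEventListener('submit', function (event) {
  // block the default submit action
  event.preventDefault()
  // onFormSubmit is a hypothetical handler holding our custom logic
  onFormSubmit(event.target)
})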


The first thing to do is to run a scheduled piece of code that scans the HTML elements and looks for forms. This is required in case an HTML form is dynamically added to the page.



function captureForms() {
  setInterval(scheduledCaptureForms, 1000)
}



Upon each scheduled activation, we find all the forms, and update their submit to use alternative custom code - our submit handler - instead of the original submit. We also make sure to do this replacement only once per form.



function scheduledCaptureForms() {
  const forms = document.getElementsByTagName('form')
  for (const form of forms) {
    const key = 'formWasIntercepted'
    if (!form[key]) {
      // keep a reference to the original submit
      form.originSubmit = form.submit
      // replace the submit with our own handler
      form.submit = function () {
        submitHandlerAsync(form)
        return false
      }
      // mark the form, so it is intercepted only once
      form[key] = true
    }
  }
}



Finally, in our asynchronous submit handler, we can do whatever we want, and even activate the original submit once we've made our modifications.



async function submitHandlerAsync(form) {
  let fetchUrl = form.action
  if (!fetchUrl) {
    fetchUrl = document.location.href
  }

  /*
  do something before calling the original submit action
  */

  form.originSubmit()
}
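
To wire everything together, we only need to start the scheduled scan once, for example when the page loads - a minimal sketch:

window.addEventListener('load', captureForms)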








Monday, May 17, 2021

Solving Nodes Communication Problem with Calico


TL;DR 

Problem: communication issues between pods on different nodes

Identification: running kubectl describe on the non-ready calico pod in the kube-system namespace shows the error "calico/node is not ready: BIRD is not ready: BGP not established"

Solution:

kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=can-reach=www.google.com



The Full Story


A few days ago, our test Kubernetes cluster started having communication issues:

Some of the pods were unable to communicate with other pods. Digging deeper into this, I found that the problem was communication between pods that reside on different nodes.






The Kubernetes diagram displayed here demonstrates the problem. Green arrows represent successful communication, and red arrows represent failed communication.

Communication between all the nodes is working fine, as is communication between the pods within each node. However, communication between pods on node-c and pods on other nodes fails. This is a weird state, as the node-level communication between node-b and node-c is fine.

As this is our test environment, which had been working fine for several months, I assumed someone had messed it up, so I tried rebooting the nodes, reinstalling Kubernetes, and even manually overriding the DNS resolution, but the problem remained.

Finally, I realized this is a Kubernetes CNI issue.

In this bare-metal Kubernetes cluster, we are using Calico as the CNI.

I checked the calico pods' status:



kubectl get pods -n kube-system -o wide



And I found that the calico pod on node-c is not in a Ready state:



NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE
calico-node-f9dgm                          1/1     Running   0          45h   10.195.5.136      node-a   
calico-node-g69z9                          1/1     Running   0          45h   10.195.5.135      node-b   
calico-node-xb92h                          0/1     Running   0          45h   10.195.5.133      node-c   



I ran kubectl describe on the non-ready pod, and found the readiness probe errors:



Warning  Unhealthy       32m  kubelet  Readiness probe failed: 2021-05-16 08:40:25.359 [INFO][199] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.195.5.135,10.195.5.136


And then I found the following bug:
calico/node is not ready: BIRD is not ready: BGP not established


This means that calico had selected the wrong IP address for the node. I used the recommended solution to force calico to select an IP address with external network connectivity:


kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=can-reach=www.google.com


Other IP address selection methods are available here.
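
After applying the change, the calico-node daemonset performs a rolling restart. A quick way to verify that all the calico pods return to a Ready state is, for example:

kubectl rollout status daemonset/calico-node -n kube-system
kubectl get pods -n kube-system -o wide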



Wednesday, May 12, 2021

Using selenium on NodeJS




In this post we will review how to use selenium on NodeJS to automate browser usage. This is usually used for automatic site testing. We will make some changes which are specifically useful for such a scenario.

First, let's install the chromedriver from this link. Make sure to download the zip file that matches your Chrome browser version, unzip it, and move the chromedriver to a folder that is in the PATH, for example:



sudo mv chromedriver /usr/local/bin/



Now for the project: create the package.json file:



{
  "name": "demo",
  "version": "1.0.0",
  "description": "",
  "author": "",
  "license": "ISC",
  "scripts": {
    "demo": "node main.js"
  },
  "dependencies": {
    "chrome-modheader": "^1.0.6",
    "selenium-webdriver": "^4.0.0-beta.3"
  }
}



We use the selenium driver, and add the chrome-modheader, which allows us to set headers on the requests. Setting headers can be used for A/B testing, and for setting the XFF (X-Forwarded-For) header to simulate a source IP.

The general code structure is as follows:



const {Builder, until, By, logging} = require('selenium-webdriver')
const chrome = require('selenium-webdriver/chrome')
const {getExtension, getAddHeaderUrl} = require('chrome-modheader')

main()

async function main() {
  // our code here
}



We start our code by launching a browser and setting a random IP in the XFF header.



const preferences = new logging.Preferences()
preferences.setLevel(logging.Type.BROWSER, logging.Level.ALL)

const options = new chrome.Options()
options.setLoggingPrefs(preferences)
options.addArguments('--ignore-certificate-errors')
options.addArguments('--no-sandbox')
options.addExtensions(getExtension())

const driver = await new Builder()
  .forBrowser('chrome')
  .setChromeOptions(options)
  .build()

// a random octet in the range 0-255
function getRandomIpSection() {
  return Math.floor(Math.random() * 256)
}

const ip = `${getRandomIpSection()}.${getRandomIpSection()}.${getRandomIpSection()}.${getRandomIpSection()}`
console.log(`random IP is ${ip}`)
await driver.get(getAddHeaderUrl('X-Forwarded-For', ip))



Now we open our site, and clean up cookies for a fresh session:



await driver.get('http://my.site.com/')
await driver.manage().deleteAllCookies()



We can locate elements by CSS and by XPath:



const BUTTON_SELECTOR = '#features a.read-more:first-of-type'
const button = await driver.wait(until.elementLocated(By.css(BUTTON_SELECTOR)), 10000)

const TEXT_SELECTOR = '//div/span[contains(@style,\'color: green\')]'
const text = await driver.wait(until.elementLocated(By.xpath(TEXT_SELECTOR)), 10000)
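
Once located, we can also read an element's content, for example with getText - a small sketch reusing the text element we have just located:

const greenText = await text.getText()
console.log(`found text: ${greenText}`)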



We can click on items. In this case we click on the element that we have just located.



await button.click()



We can run our own JavaScript code in the page:



await driver.executeScript(`
window.enableMyDebugLog=true
`)



We can sleep, waiting for something:



await driver.sleep(2000)



And we can scan the console logs:



const logs = await driver.manage().logs().get(logging.Type.BROWSER)
for (const log of logs) {
  const message = log.message
  if (message.includes("my-log-data")) {
    console.log(message)
  }
}



Finally, to close the browser, use quit:



await driver.quit()
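
In a real test it is a good idea to close the browser even when one of the steps fails, for example by wrapping the test body with try/finally - a sketch:

try {
  // test steps here
} finally {
  await driver.quit()
}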




Wednesday, May 5, 2021

go-redis using SCAN command in a Redis Cluster




In this post we will present how to use the redis SCAN command in a cluster environment with the go-redis library.

The go-redis library automatically handles single-key commands such as GET and SET. It recognizes the location of each slot on the relevant master, and addresses the master (or slave) that holds the specific key.

However, the SCAN command is multi-key related, and hence go-redis does not handle it.

The way to handle it is to use the ForEachMaster command, run the SCAN command on each master, and finally aggregate the results.

An additional item to handle is maintaining a cursor per master, and detecting end of data across all of the masters.

The cursor per master structure is listed below:



type cursorData struct {
	locations map[string]uint64
	endOfData map[string]bool
}

func (d *cursorData) EndOfData() bool {
	for _, end := range d.endOfData {
		if !end {
			return false
		}
	}
	return true
}



Next we can use ForEachMaster to run SCAN:


func Scan(
	client *redis.ClusterClient,
	cursor redisclients.CursorInterface,
	match string,
	count int64,
) ([]string, redisclients.CursorInterface) {
	var cursorPerMaster *cursorData
	if cursor == nil {
		// first call: start a new cursor for each master
		cursorPerMaster = &cursorData{
			locations: make(map[string]uint64),
			endOfData: make(map[string]bool),
		}
	} else {
		var ok bool
		cursorPerMaster, ok = cursor.(*cursorData)
		if !ok {
			panic("conversion failed")
		}
	}

	allKeys := make([]string, 0)
	mutex := sync.Mutex{}

	err := client.ForEachMaster(context.Background(), func(ctx context.Context, master *redis.Client) error {
		// the master address is used as the key for its cursor
		key := master.String()

		mutex.Lock()
		alreadyDone := cursorPerMaster.endOfData[key]
		mutex.Unlock()

		if alreadyDone {
			return nil
		}

		mutex.Lock()
		masterCursor := cursorPerMaster.locations[key]
		mutex.Unlock()

		cmd := master.Scan(ctx, masterCursor, match, count)
		err := cmd.Err()
		if err != nil {
			return err
		}

		keys, nextCursor, err := cmd.Result()
		if err != nil {
			return err
		}

		mutex.Lock()
		allKeys = append(allKeys, keys...)
		cursorPerMaster.locations[key] = nextCursor
		// a zero cursor marks end of data on this master
		cursorPerMaster.endOfData[key] = nextCursor == 0
		mutex.Unlock()

		return nil
	})

	if err != nil {
		panic(err)
	}

	return allKeys, cursorPerMaster
}



We should hide our implementation behind an interface:


type CursorInterface interface {
	EndOfData() bool
}



An example of using this API:


firstTime := true
var cursor redisclients.CursorInterface
for {
	var keys []string
	count := int64(10000)
	match := "*"
	if firstTime {
		firstTime = false
		keys, cursor = Scan(client, nil, match, count)
	} else {
		keys, cursor = Scan(client, cursor, match, count)
	}
	fmt.Print(keys)
	if cursor.EndOfData() {
		return
	}
}


In case you want simpler usage, for debug and test environments only, check out the KEYS implementation in this post.






go-redis using KEYS command in a Redis Cluster


In this post we will present how to use the redis KEYS command in a cluster environment with the go-redis library.

The go-redis library automatically handles single-key commands such as GET and SET. It recognizes the location of each slot on the relevant master, and addresses the master (or slave) that holds the specific key.

However, the KEYS command is multi-key related, and hence go-redis does not handle it.

The way to handle it is to use the ForEachMaster command, run the KEYS command on each master, and finally aggregate the results. An example of this is listed below.


func ClusterKeys(client *redis.ClusterClient, pattern string) []string {
	allKeys := make([]string, 0)
	mutex := sync.Mutex{}
	err := client.ForEachMaster(context.Background(), func(ctx context.Context, master *redis.Client) error {
		cmd := master.Keys(ctx, pattern)
		err := cmd.Err()
		if err != nil {
			return err
		}

		value, err := cmd.Result()
		if err != nil {
			return err
		}
		mutex.Lock()
		allKeys = append(allKeys, value...)
		mutex.Unlock()
		return nil
	})

	if err != nil {
		panic(err)
	}

	return allKeys
}
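
Usage is straightforward - for example, fetching all the keys (a sketch, assuming client is an existing *redis.ClusterClient and fmt is imported):

keys := ClusterKeys(client, "*")
fmt.Println(keys)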


Notice that using the redis KEYS command in a production environment is not recommended; you might want to use the SCAN command instead. For more details see this post.