
Sunday, June 26, 2022

Using AWS Batch with boto3


In this post we will review how to use AWS Batch with boto3.

AWS Batch enables us to submit one-time jobs to AWS. It is a simple compute service, available to us at great scale. In my case I had to initiate some data ingestion jobs, each of them running for about an hour. Using AWS Batch I was able to parallelize these jobs at great scale, while all the infrastructure is managed by AWS. The actual jobs are simply docker containers that are run once per request.


To use AWS Batch, we first need to configure some entities (a boto3 sketch for creating them is shown after the list):

  • Compute environment - this sets up how and where to run the containers, including whether to use Fargate, Spot, or On-demand capacity. In addition, a VPC should be selected. Notice that if we need to access the internet (or even S3) we have two alternatives: using a private VPC with a NAT gateway, or using a public VPC with an IPv4 allocation for each container. The first method is safer. If we choose the second method, notice that the IPv4 allocation should be specified in the Fargate configuration.
  • Job queue - where the jobs wait for execution
  • Job definition - configures the docker image name, the execution role, and the CPU & memory requirements for each container
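
The entities can also be created programmatically. Below is a minimal boto3 sketch; the names, subnet, security group, role ARNs and image URL are all placeholders, and the exact settings depend on your account:


import boto3

client = boto3.client('batch')

# Compute environment: a managed Fargate environment.
client.create_compute_environment(
    computeEnvironmentName='my-compute-env',
    type='MANAGED',
    computeResources={
        'type': 'FARGATE',
        'maxvCpus': 16,
        'subnets': ['subnet-00000000'],
        'securityGroupIds': ['sg-00000000'],
    },
    serviceRole='arn:aws:iam::123456789012:role/aws-batch-service-role',
)

# Job queue: attached to the compute environment above.
client.create_job_queue(
    jobQueueName='my-queue',
    priority=1,
    computeEnvironmentOrder=[
        {'order': 1, 'computeEnvironment': 'my-compute-env'},
    ],
)

# Job definition: the docker image, the execution role, and the CPU & memory
# requirements for each container.
client.register_job_definition(
    jobDefinitionName='my-job-definition',
    type='container',
    platformCapabilities=['FARGATE'],
    containerProperties={
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest',
        'executionRoleArn': 'arn:aws:iam::123456789012:role/my-execution-role',
        'resourceRequirements': [
            {'type': 'VCPU', 'value': '1'},
            {'type': 'MEMORY', 'value': '2048'},
        ],
        # for a public VPC, this is the IPv4 allocation mentioned above
        'networkConfiguration': {'assignPublicIp': 'ENABLED'},
    },
)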

Once these entities are ready, we can start running jobs.

To send the jobs from boto3, we create two classes. The first one represents a single job.



import time

import boto3

client = boto3.client('batch')


class AwsBatchJob:
    def __init__(self, name, job_definition, arguments):
        self.name = name
        self.arguments = arguments
        self.job_id = None
        self.job_definition = job_definition

    def start_job(self):
        # AWS Batch job names allow only letters, numbers, hyphens and underscores
        job_name = self.name
        job_name = job_name.replace(' ', '_')
        job_name = job_name.replace(':', '_')
        job_name = job_name.replace('.', '_')
        response = client.submit_job(
            jobName=job_name,
            jobQueue='my-queue',
            jobDefinition=self.job_definition,
            containerOverrides={
                'command': self.arguments,
            },
        )
        self.job_id = response['jobId']

    def get_job_status(self):
        response = client.describe_jobs(jobs=[self.job_id])
        jobs = response['jobs']
        job = jobs[0]
        status = job['status']
        return status

    def wait_for_job(self):
        # poll the job status once a minute until the job completes
        while True:
            status = self.get_job_status()
            if status == 'FAILED':
                raise Exception(f'job {self.name} id {self.job_id} failed')
            if status == 'SUCCEEDED':
                return
            time.sleep(60)



The second class manages multiple jobs that are run in parallel.


from aws_batch_job import AwsBatchJob


class AwsBatches:
    def __init__(self, main_python_file, job_definition):
        self.main_python_file = main_python_file
        self.job_definition = job_definition
        self.batches = {}

    def add_batch(self, name, batch_arguments):
        self.batches[name] = batch_arguments

    def run_and_wait(self):
        # start all the jobs first, and only then wait for each of them,
        # so the jobs run in parallel
        jobs = {}
        for batch_name, batch_arguments in self.batches.items():
            arguments = ['python3', self.main_python_file]
            arguments.extend(batch_arguments)
            job = AwsBatchJob(batch_name, self.job_definition, arguments)
            job.start_job()
            jobs[batch_name] = job

        for job in jobs.values():
            job.wait_for_job()



A simple usage example is below.


batches = AwsBatches('my-folder/my_python_main.py', 'my-job-definition')

batches.add_batch('my first job', ['arg1', 'arg2'])
batches.add_batch('my second job', ['arg3', 'arg4'])

batches.run_and_wait()



That's all.

This simple solution is great for small projects that don't require an online response. Notice that a job starts within several minutes after it is submitted, so as the name AWS Batch implies, we can use this solution only for batch processing.
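
While a job is pending, we can also inspect the queue directly. A short sketch, assuming the queue name used above:


import boto3

client = boto3.client('batch')

# a job passes through the SUBMITTED, PENDING, RUNNABLE, STARTING and RUNNING
# statuses before reaching SUCCEEDED or FAILED
response = client.list_jobs(jobQueue='my-queue', jobStatus='RUNNABLE')
for job in response['jobSummaryList']:
    print(job['jobId'], job['jobName'])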


Monday, June 20, 2022

Using JA3 to find TLS fingerprint

In this post we will review how to check TLS fingerprints using the JA3 library by Salesforce.


Transport Layer Security (TLS) fingerprinting is a technique that associates an application and/or TLS library with parameters extracted from a TLS ClientHello by using a database of curated fingerprints, and it can be used to identify malware and vulnerable applications and for general network visibility. 

          from: https://blogs.cisco.com/security/tls-fingerprinting-in-the-real-world


To check the fingerprint of a connecting client we will do the following:

  • Create a TLS-based HTTP server using python
  • Create a self-signed TLS certificate for the server
  • Run the server on a GCP VM
  • Capture the incoming traffic to a pcap file
  • Connect to the server using various clients
  • Use JA3 to print the various clients' fingerprints


The code to run a TLS-based HTTP server is very simple:


import http.server
import ssl

server_address = ('0.0.0.0', 443)
httpd = http.server.HTTPServer(server_address, http.server.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket,
                               server_side=True,
                               certfile='s.pem',
                               ssl_version=ssl.PROTOCOL_TLS)
httpd.serve_forever()
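

Note that ssl.wrap_socket is deprecated on newer python versions (and removed in python 3.12); an equivalent server using an SSLContext is:


import http.server
import ssl

server_address = ('0.0.0.0', 443)
httpd = http.server.HTTPServer(server_address, http.server.SimpleHTTPRequestHandler)

# a server side TLS context, loading the same self-signed certificate
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('s.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()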


Now we create a self-signed TLS key and certificate:


openssl req -new -x509 -keyout s.pem -out s.pem -days 365 -nodes


Next, we use GCP to create a new VM with a public IPv4 address. This allows us to connect to the server from various clients around the world. Note that we need to edit the VM properties and enable HTTPS connections to the VM. Once the VM is up and running, we run the python code above, and the server is ready.


To use JA3, we need to create a pcap file. Hence we start by listing the interfaces on the VM using the command:


ip link


And then we capture the traffic on the relevant interface:


tcpdump -i ens4 -s 65535 -w a.pcap


We can now connect from various clients, for example: browsers, curl, python clients on various operating systems. Since the server certificate is self-signed, the clients must skip certificate verification.
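
For example, a minimal python client that skips the verification (the server address is a placeholder):


import ssl
import urllib.request

# the server uses a self-signed certificate, so disable verification
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

# replace the placeholder with the VM public IP
with urllib.request.urlopen('https://<vm-public-ip>/', context=context) as response:
    print(response.status)


After the connections are made, we stop the capture, and use the following commands to collect the TLS fingerprints: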


pip install pyja3
ja3 a.pcap



The output of the ja3 command prints the TLS fingerprints. Notice that it includes both the fingerprints of the connecting clients, and of outbound connections made by the VM itself (the record whose source_ip is the VM address).



    {
        "destination_ip": "10.128.0.21",
        "destination_port": 443,
        "ja3": "771,4866-4867-4865-4868-49196-49200-159-52393-52392-52394-49327-49325-49315-49311-49245-49249-49235-49195-49199-158-49326-49324-49314-49310-49244-49248-49234-49188-49192-107-49267-49271-196-49187-49191-103-49266-49270-190-49162-49172-57-136-49161-49171-51-154-69-49159-49169-49160-49170-22-157-49313-49309-49233-156-49312-49308-49232-61-192-60-186-53-132-47-150-65-5-10-255,11-10-22-23-13-43-45-51-21,29-23-30-25-24,0-1-2",
        "ja3_digest": "fd20b51c9b799da35cbf66c7b81f7a56",
        "source_ip": "72.195.34.41",
        "source_port": 40983,
        "timestamp": 1654167857.244541
    },
    {
        "destination_ip": "10.128.0.21",
        "destination_port": 443,
        "ja3": "771,4866-4867-4865-4868-49196-49200-163-159-52393-52392-52394-49327-49325-49315-49311-49245-49249-49239-49235-49195-49199-162-158-49326-49324-49314-49310-49244-49248-49238-49234-49188-49192-107-106-49267-49271-196-195-49187-49191-103-64-49266-49270-190-189-49162-49172-57-56-136-135-49161-49171-51-50-154-153-69-68-49159-49169-49160-49170-22-19-157-49313-49309-49233-156-49312-49308-49232-61-192-60-186-53-132-47-150-65-5-10-255,11-10-22-23-13-43-45-51-21,29-23-30-25-24,0-1-2",
        "ja3_digest": "c69dad62b497533e2e02a19470912253",
        "source_ip": "72.221.172.203",
        "source_port": 45675,
        "timestamp": 1654167857.333206
    },
    {
        "destination_ip": "74.125.202.95",
        "destination_port": 443,
        "ja3": "771,49199-49200-49195-49196-52392-52393-49171-49161-49172-49162-156-157-47-53-49170-10-4865-4867-4866,13172-0-5-10-11-13-65281-16-18-43-51,29-23-24-25,0",
        "ja3_digest": "706ea0b1920182287146b195ad4279a6",
        "source_ip": "10.128.0.21",
        "source_port": 56900,
        "timestamp": 1654167858.61766
    },
    {
        "destination_ip": "10.128.0.21",
        "destination_port": 443,
        "ja3": "771,4866-4867-4865-49196-49200-159-52393-52392-52394-49195-49199-158-49188-49192-107-49187-49191-103-49162-49172-57-49161-49171-51-157-156-61-60-53-47-255,11-10-35-22-23-13-43-45-51-21,29-23-30-25-24,0-1-2",
        "ja3_digest": "004556e859f3c26c5d19746b3a957c74",
        "source_ip": "104.168.87.16",
        "source_port": 34730,
        "timestamp": 1654167859.417917
    },

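
As a side note, the ja3_digest field is simply the MD5 hash of the ja3 string, so it can be recomputed directly:


import hashlib

# the "ja3" field from one of the records above (truncated here)
ja3_string = '771,4866-4867-4865-...'
print(hashlib.md5(ja3_string.encode()).hexdigest())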

Monday, June 13, 2022

Using ECDSA in Android to Sign and Verify Messages


In this post we will review how to sign and verify messages in Android using ECDSA. This is an asymmetric digital signature scheme that is based on elliptic curve cryptography.


First, let's generate a key pair. We will use a class with an empty constructor for the generation.


import org.bouncycastle.crypto.signers.StandardDSAEncoding;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Security;
import java.security.Signature;
import java.security.interfaces.ECPublicKey;
import java.security.spec.ECGenParameterSpec;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;


public class EncryptionKey {

    static {
        // first remove the default OS provider
        Security.removeProvider("BC");
        // this works even though Android Studio flags it as a compile error
        BouncyCastleProvider bouncyCastleProvider = new BouncyCastleProvider();
        // register bouncy castle as the preferred provider
        Security.insertProviderAt(bouncyCastleProvider, 1);
    }

    private final KeyPair keyPair;
    private final String address;

    public EncryptionKey() {
        try {
            KeyPairGenerator keyGen = KeyPairGenerator.getInstance("EC");
            keyGen.initialize(new ECGenParameterSpec("secp256k1"), new SecureRandom());
            this.keyPair = keyGen.generateKeyPair();
            this.address = this.calculateAddress();
        } catch (Exception e) {
            throw new BouncerException(e);
        }
    }



The key pair can be exported as a hex string:


public String exportKeyPair() {
    byte[] bytesPrivateKey = this.keyPair.getPrivate().getEncoded();
    byte[] bytesPublicKey = this.keyPair.getPublic().getEncoded();
    String hexPrivateKey = BytesUtils.bytesToHexString(bytesPrivateKey);
    String hexPublicKey = BytesUtils.bytesToHexString(bytesPublicKey);
    return hexPrivateKey + " " + hexPublicKey;
}



And the public key can be exported as its X and Y coordinates:



private String calculateAddress() {
    ECPublicKey ecPublicKey = (ECPublicKey) this.keyPair.getPublic();
    return BytesUtils.bigIntsToBase64(
            "X", "Y",
            ecPublicKey.getW().getAffineX(),
            ecPublicKey.getW().getAffineY()
    );
}



We can import from the exported hex string using another constructor:



public EncryptionKey(String exportedKeyPair) {
    try {
        String[] hexKeys = exportedKeyPair.split(" ");

        String hexPrivateKey = hexKeys[0];
        String hexPublicKey = hexKeys[1];

        byte[] bytesPrivateKey = BytesUtils.hexStringToByteArray(hexPrivateKey);
        byte[] bytesPublicKey = BytesUtils.hexStringToByteArray(hexPublicKey);

        PKCS8EncodedKeySpec specPrivateKey = new PKCS8EncodedKeySpec(bytesPrivateKey);
        X509EncodedKeySpec specPublicKey = new X509EncodedKeySpec(bytesPublicKey);

        KeyFactory factory = KeyFactory.getInstance("ECDSA");

        PrivateKey privateKey = factory.generatePrivate(specPrivateKey);
        PublicKey publicKey = factory.generatePublic(specPublicKey);

        this.keyPair = new KeyPair(publicKey, privateKey);
        this.address = this.calculateAddress();
    } catch (Exception e) {
        throw new BouncerException(e);
    }
}



Now the last thing to handle is the sign and verify methods:



public String sign(String text) {
    try {
        Signature ecdsa = Signature.getInstance("SHA256withECDSA");

        ecdsa.initSign(this.keyPair.getPrivate());

        byte[] textBytes = text.getBytes(StandardCharsets.UTF_8);
        ecdsa.update(textBytes);

        byte[] signatureBytes = ecdsa.sign();
        // decode the DER encoded signature into its r and s components
        ECPublicKey publicKey = (ECPublicKey) this.keyPair.getPublic();
        BigInteger order = publicKey.getParams().getOrder();
        BigInteger[] bigInts = StandardDSAEncoding.INSTANCE.decode(order, signatureBytes);
        BigInteger r = bigInts[0];
        BigInteger s = bigInts[1];
        return BytesUtils.bigIntsToBase64("r", "s", r, s);
    } catch (Exception e) {
        throw new BouncerException(e);
    }
}


public void validate(String text, String signature) {
    try {
        Signature ecdsa = Signature.getInstance("SHA256withECDSA");

        ecdsa.initVerify(this.keyPair.getPublic());

        byte[] textBytes = text.getBytes(StandardCharsets.UTF_8);
        ecdsa.update(textBytes);

        // re-encode the r and s components back to a DER encoded signature
        BigInteger[] bigInts = BytesUtils.base64ToBigInts("r", "s", signature);
        BigInteger r = bigInts[0];
        BigInteger s = bigInts[1];
        ECPublicKey publicKey = (ECPublicKey) this.keyPair.getPublic();
        BigInteger order = publicKey.getParams().getOrder();
        byte[] bytes = StandardDSAEncoding.INSTANCE.encode(order, r, s);

        if (!ecdsa.verify(bytes)) {
            throw new BouncerException("verify failed");
        }
    } catch (Exception e) {
        throw new BouncerException(e);
    }
}



To make this work, we need to add the dependencies in the app/build.gradle:


dependencies {
    implementation 'org.bouncycastle:bcpkix-jdk15on:1.64'
    implementation 'org.bouncycastle:bcprov-jdk15on:1.64'
}



Final Note


This post is part of a series of posts about encryption.


Monday, June 6, 2022

Installing Kubernetes version 1.24 on your Development Machine


In a previous post, we've reviewed the process of installing kubernetes locally on an Ubuntu development machine. However, since kubernetes deprecated the built-in docker support, things have changed. In addition, due to the goal of dynamically supporting various CRIs, the process is poorly documented.


To automate the installation, we will provide three different scripts:

  • Installation of the kube* binaries
  • Installation of a Container Runtime Interface (CRI)
  • Installation of the kubernetes cluster

To install the kube* binaries we use the following:


echo "Update the apt"
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

echo "Download the GCP signing key"
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "Add the Kubernetes apt repo"
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

echo "Update kube* binaries"
sudo apt-mark unhold kubelet kubeadm kubectl
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

echo "Update kubectl manually"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
rm kubectl



To install the CRI we use the following:


# Install docker
sudo apt-get remove -y docker docker-engine docker.io containerd runc
sudo apt-get autoremove -y

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --batch --yes --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update -y
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin



# Install docker CRI adapter
rm -rf ./cri
mkdir ./cri
CRI_VERSION=$(curl -s https://api.github.com/repos/Mirantis/cri-dockerd/releases/latest|grep tag_name | cut -d '"' -f 4)

wget https://github.com/Mirantis/cri-dockerd/releases/download/${CRI_VERSION}/cri-dockerd-${CRI_VERSION}-linux-amd64.tar.gz -P ./cri
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service -P ./cri
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket -P ./cri

tar xvf ./cri/cri-dockerd-${CRI_VERSION}-linux-amd64.tar.gz -C ./cri

sudo mv ./cri/cri-dockerd /usr/local/bin/

sudo mv ./cri/cri-docker.socket ./cri/cri-docker.service /etc/systemd/system/
sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service

sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket

rm -rf ./cri



Notice that the CRI installation includes both the runtime itself, that is, the docker installation, and the cri-dockerd adapter, which glues docker to the CRI API that kubernetes expects.


Finally to install the kubernetes cluster, we use the following:



# Remove cluster
sudo rm -rf $HOME/.kube
sudo rm -rf /var/lib/etcd
sudo rm -rf /etc/cni/net.d
sudo kubeadm reset -f --cri-socket /run/cri-dockerd.sock

# Install cluster
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket /run/cri-dockerd.sock

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-

# Install calico
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
rm -f calico.yaml

# Install Helm
rm -rf ~/.helm
sudo rm -rf /usr/local/bin/helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
rm ./get_helm.sh



This also includes the Container Network Interface (CNI) installation, in this case calico, and an optional, but mostly required, installation of helm.
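
Once everything is installed, the cluster can also be verified from python, using the official kubernetes client library. A minimal sketch, assuming the library was installed with pip install kubernetes:


from kubernetes import client, config

# load credentials from $HOME/.kube/config, which kubeadm created above
config.load_kube_config()

v1 = client.CoreV1Api()

# the single local node should become Ready, and the calico pods should be running
for node in v1.list_node().items:
    print('node:', node.metadata.name)
for pod in v1.list_pod_for_all_namespaces().items:
    print('pod:', pod.metadata.namespace, pod.metadata.name)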