Saturday, December 28, 2019

Helm 2 to Helm 3 Upgrade




I've recently updated a project to use Helm 3 instead of Helm 2.
The official article on migrating from Helm v2 to Helm v3 mostly covers the steps for migrating installed releases from Helm 2 to Helm 3, but it does not cover the development steps.
So, I've listed the changes I made to the project setup and build as part of the migration.


Helm setup on the Kubernetes Cluster 


Tiller is out

Tiller is no longer used, so the setup of Helm is different.
You no longer need to run helm init.


Add the stable repo

Helm no longer has the helm init step, but this has a downside: helm init used to add the stable repo to the Helm repos.
If you want to use charts from the stable repo, you need to add it manually:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
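
Once the repo is added, installing a chart from it works as usual, for example (the chart and release names here are only illustrative):

helm repo update
helm install my-nginx stable/nginx-ingress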


The Release Lifecycle


Helm install

Helm v2 auto-generated the release name in case it was not specified.
Helm v3 requires the release name (by default), hence the helm install syntax has changed.

Instead of:

helm install --name my-release-name ...

Use:

helm install my-release-name ...
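
If you still want an auto-generated release name, Helm v3 provides the --generate-name flag, for example:

helm install ./my-chart --generate-name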


Create Namespace

Helm v2 created the namespace in case it did not exist.
In Helm v3 you need to create it manually, for example:

kubectl create namespace my-namespace
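
If the command may run more than once (e.g. in a CI script), the creation can be made idempotent; one common trick (assuming a kubectl of that era, where --dry-run is a boolean flag) is:

kubectl create namespace my-namespace --dry-run -o yaml | kubectl apply -f -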


Helm Delete

Helm v2 kept the history of uninstalled releases, and most Helm users used the --purge flag to avoid this behavior, since the kept history prevents reinstalling a release with the same name (see this issue).
Helm v3 purges by default. Also, the delete command is now an alias of uninstall.

So, instead of:

helm delete my-release-name --purge

Use:
helm uninstall my-release-name
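
If you do want to keep the release history, as Helm v2 did, Helm v3 provides the --keep-history flag:

helm uninstall my-release-name --keep-history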


Summary


In this post we have reviewed the development steps required to upgrade a project from Helm version 2 to Helm version 3. In general the upgrade was smooth, and no changes were required in the charts themselves.

Wednesday, December 18, 2019

Helm init failure bypass

Notice:
This post is relevant to Helm version 2
This issue is NOT relevant to Helm version 3

Trying to install Helm on a new Kubernetes version fails.
Running helm init fails with the error message:

the server could not find the requested resource

Searching the web for this error, I've found the bypass in one of the Helm issues.

Since I need to install Helm many times (I love to uninstall and reinstall my kubernetes cluster), I've automated this process:

# render the tiller manifests instead of installing them
helm init --output yaml > tiller.yaml
# the Deployment API group moved from extensions/v1beta1 to apps/v1
sed -i 's/extensions\/v1beta1/apps\/v1/g' tiller.yaml
# apps/v1 Deployments require an explicit selector
sed -i '/strategy: {}/a \  selector:\n    matchLabels:\n      app: helm\n      name: tiller' tiller.yaml
kubectl apply -f tiller.yaml
rm -f tiller.yaml

And if you also want to include the permissions binding, run this as well:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Simply copy and paste these commands to save manual editing and updating of the Kubernetes resources.

Access kubernetes API from GoLang


When implementing a GoLang based Kubernetes service, you will need to access the Kubernetes API.
The problem I've encountered is the following:

  • When running the service as part of a kubernetes pod, you need to initialize the kubernetes client using one API: InClusterConfig()
  • When debugging the service (for example using GoLand), you need to initialize the kubernetes client using a second API: BuildConfigFromFlags()
The solution I've used is an application flag indicating which API to use.
The default suits running as part of a kubernetes pod, and a debug session can override it.

The service configuration implementation is:

import (
  "github.com/namsral/flag"
  "os"
  "path/filepath"
)

type Configuration struct {
  InCluster  bool
  KubeConfig string
}

var Config Configuration

func InitConfig() {
  defaultKubeConfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")

  flag.BoolVar(&Config.InCluster, "in-cluster", true,
   "run within a kubernetes cluster pod")
  flag.StringVar(&Config.KubeConfig, "kubeconfig", defaultKubeConfig, 
   "absolute path to the kubeconfig file")

  flag.Parse()
}


And the kubernetes client configuration is:

import (
  "log"

  "k8s.io/client-go/kubernetes"
  "k8s.io/client-go/rest"
  "k8s.io/client-go/tools/clientcmd"
)

type K8sClient struct {
  client *kubernetes.Clientset
}

func (client *K8sClient) Init() {
  var config *rest.Config
  var err error
  if Config.InCluster {
    Verbose("getting k8s config in cluster")
    config, err = rest.InClusterConfig()
  } else {
    Verbose("getting k8s config out of cluster, path %v", Config.KubeConfig)
    config, err = clientcmd.BuildConfigFromFlags("", Config.KubeConfig)
  }
  if err != nil {
    log.Fatalf("kubernetes client config failed: %v", err)
  }
  clientSet, err := kubernetes.NewForConfig(config)
  if err != nil {
    log.Fatalf("kubernetes client set failed: %v", err)
  }
  client.client = clientSet
}

That's all!

When running the service in debug mode, send the following argument:
--in-cluster=false

And you're good to go. The service will automatically use the kubernetes configuration from your home folder.
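
For completeness, here is a minimal sketch of wiring it all together; listing the pods is only an illustration, and note that newer client-go versions also take a context.Context as the first argument of List:

import (
  "log"

  metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  InitConfig()

  k8s := K8sClient{}
  k8s.Init()

  // example usage: list the pods in the default namespace
  pods, err := k8s.client.CoreV1().Pods("default").List(metaV1.ListOptions{})
  if err != nil {
    log.Fatalf("list pods failed: %v", err)
  }
  for _, pod := range pods.Items {
    log.Printf("pod: %v", pod.Name)
  }
}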

Thursday, December 12, 2019

Deploy Smart Contract on Ethereum using GoLang


This article presents all the steps required to deploy and use a contract using Go.

A few other posts about this exist, but none of them covers all the steps, so I've decided to create this article, hoping you will find it useful.
The article is set up for a private Ethereum network, but it can also be used for the public Ethereum network.


We will review the following steps:
  1. Create a Smart Contract in Solidity
  2. Manually Compile the Contract to Go
  3. Create a Go Application 
  4. Create a 3-stage Dockerfile to compile the contract and the application

1. Create Smart Contract in Solidity



Contracts are created using Solidity. In this article, we use a simple contract with a call method and a transaction method. See this for an explanation of the difference between a call and a transaction.

pragma solidity ^0.5.8;

contract Price {
  uint price = 100;

  function setPrice(uint newPrice) external payable  {
    price = newPrice;
  }

  function getPrice() external view returns (uint) {
    return price;
  }
}

The contract contains the following methods:

  • setPrice: a transaction method
  • getPrice: a call method

2. Manually Compile the Contract to Go



To use a contract in Go, we will need to generate a Go file for it.

First, install solc, the Solidity compiler, as explained here.

Next, install abigen, which is part of the geth tools. The geth tools can be downloaded from the geth download site. For example, the current version is:
https://gethstore.blob.core.windows.net/builds/geth-alltools-linux-amd64-1.9.9-01744997.tar.gz

Now we can generate the go contract using the following commands:

solc --abi --output-dir . Price.sol
solc --bin --output-dir . Price.sol
abigen --bin Price.bin --abi Price.abi --pkg=price --out=Price.go

Copy Price.go to a folder named "generated" in the Go application source folder.
In a later stage we will automate this step as part of a Docker build.


3. Create a Go Application



The go application should handle the following:

  • Connect to the Ethereum network
  • Deploy the contract
  • Load the contract
  • Use the contract
To connect to the Ethereum network, we use the following:

url := "ws://THE_ETHEREUM_NETWORK_IP_ADDRESS"
timedContext, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
client, err := ethclient.DialContext(timedContext, url)
if err != nil {
  log.Fatalf("connection failed %v", err)
}

Notice that we use a 30-second connection timeout. Choose whatever timeout fits your needs. Also, replace the URL with your Ethereum network address.
Notice that once all Ethereum actions are done, the ethclient should be closed, so eventually call:

client.Close()
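
A common pattern is to defer the close right after a successful dial, for example:

client, err := ethclient.DialContext(timedContext, url)
if err != nil {
  log.Fatalf("connection failed %v", err)
}
defer client.Close()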

To deploy the contract, we use the Price.go that we've generate before.

import (
  "context"
  "crypto/ecdsa"
  "log"
  "math/big"

  "github.com/ethereum/go-ethereum/accounts/abi/bind"
  "github.com/ethereum/go-ethereum/common"
  "github.com/ethereum/go-ethereum/core/types"
  "github.com/ethereum/go-ethereum/crypto"
  "github.com/ethereum/go-ethereum/ethclient"
  contract "sample.com/contract/generated"
)

func getAccountPrivateKey() *ecdsa.PrivateKey {
  // replace the private key here
  accountPrivateKey := "211dbaa6ca5e3fe1141eef3b00a0dd6d630a8d8e5bfbb7a7516865f1c746a3a0"
  privateKey, err := crypto.HexToECDSA(accountPrivateKey)
  if err != nil {
    log.Fatalf("private key to ECDSA failed: %v", err)
  }
  return privateKey
}

func getAccountPublicKey() common.Address {
  publicKey := getAccountPrivateKey().Public()
  publicKeyECDSA, ok := publicKey.(*ecdsa.PublicKey)
  if !ok {
    log.Fatalf("cannot assert type: publicKey is not of type *ecdsa.PublicKey")
  }

  address := crypto.PubkeyToAddress(*publicKeyECDSA)
  return address
}

func getTransactionOptions(client *ethclient.Client) *bind.TransactOpts {
  nonce, err := client.PendingNonceAt(context.Background(), getAccountPublicKey())
  if err != nil {
    log.Fatalf("get pending failed: %v", err)
  }

  gasPrice, err := client.SuggestGasPrice(context.Background())
  if err != nil {
    log.Fatalf("suggest gas price failed: %v", err)
  }

  transactOpts := bind.NewKeyedTransactor(getAccountPrivateKey())
  transactOpts.Nonce = big.NewInt(int64(nonce))
  transactOpts.Value = big.NewInt(0)
  // you might need to increase the gas limit for an extremely large contract
  transactOpts.GasLimit = uint64(3000000)
  transactOpts.GasPrice = gasPrice
  return transactOpts
}

func Deploy(client *ethclient.Client) string {
  transactOptions := getTransactionOptions(client)

  address, transaction, _, err := contract.DeployPriceContract(transactOptions, client)
  if err != nil {
    log.Fatalf("deploy contract failed: %v", err)
  }
  
  track(transaction)

  return address.Hex()
}

To use this, update the private key to your own Ethereum account's private key.
This account must have enough ETH balance to pay for the contract deployment.

This code also includes a call to a track(transaction) function.
We will review transaction tracking later.

Once the contract is deployed we get the contract's address. This address should be saved outside of Ethereum, so that it can be reused by other remote/distributed clients.
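
For example, a minimal sketch that persists the address to a local file (the file name is arbitrary; a real system might use a database or a ConfigMap instead, and io/ioutil is assumed to be imported):

contractAddress := Deploy(client)
if err := ioutil.WriteFile("contract-address.txt", []byte(contractAddress), 0644); err != nil {
  log.Fatalf("saving contract address failed: %v", err)
}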

Notice that the DeployPriceContract function also returns an instance of the contract, which could be used to run the contract methods. However, I prefer not to use it, and instead show how to load the contract using the contract address, since you will probably need this.

func LoadContract(client *ethclient.Client, contractAddress string) *contract.PriceContract {
  address := common.HexToAddress(contractAddress)
  instance, err := contract.NewPriceContract(address, client)
  if err != nil {
    log.Fatalf("could not load contract: %v", err)
  }
  return instance
}


Using a contract "call" method is very simple (here contract is the instance returned by LoadContract):

callOptions := bind.CallOpts{From: getAccountPublicKey()}
price, err := contract.GetPrice(&callOptions)
if err != nil {
  log.Fatalf("get price failed: %v", err)
}

Using a "transaction" method is similar:

transactionOptions := getTransactionOptions(client)
transaction, err := contract.SetPrice(transactionOptions, big.NewInt(777))

if err != nil {
  log.Fatalf("set price failed: %v", err)
}
track(transaction)

This code also includes a call to a track(transaction) function.

So, why do we need to track the transaction?
The transaction might fail. It also might not be mined at all due to a low gas price.
If we want to know whether the transaction succeeded, we need to check its status whenever a new block is mined.

This code section tracks the transaction:

func track(transaction *types.Transaction) {
  // client is the *ethclient.Client we created when connecting to the network
  headers := make(chan *types.Header)
  subscription, err := client.SubscribeNewHead(context.Background(), headers)
  if err != nil {
    log.Fatalf("subscribe to read blocks failed: %v", err)
  }
  defer subscription.Unsubscribe()

  for {
    select {
    case err := <-subscription.Err():
      log.Fatalf("subscription failed: %v", err)
    case <-headers:
      // got a new block, check the transaction
      transactionLocated := checkTransactionOnce(transaction)
      if transactionLocated {
        return
      }
    case <-time.After(60 * time.Second):
      log.Fatalf("timeout waiting for transaction")
    }
  }
}

func checkTransactionOnce(transaction *types.Transaction) bool {
  _, pending, err := client.TransactionByHash(context.Background(), transaction.Hash())
  if err != nil {
    log.Fatalf("get transaction failed: %v", err)
  }
  if pending {
    return false
  }

  receipt, err := client.TransactionReceipt(context.Background(), transaction.Hash())
  if err != nil {
    log.Fatalf("transaction receipt failed: %v", err)
  }
  if receipt.Status == 1 {
    return true
  }
  log.Fatalf("transaction failed with logs %v", receipt.Logs)
  // dead code
  return true
}
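
As a side note, go-ethereum also ships a helper that blocks until a transaction is mined, bind.WaitMined; a minimal sketch using it instead of the manual loop above:

receipt, err := bind.WaitMined(context.Background(), client, transaction)
if err != nil {
  log.Fatalf("wait mined failed: %v", err)
}
if receipt.Status != 1 {
  log.Fatalf("transaction failed with logs %v", receipt.Logs)
}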

4. Create a Dockerfile



The Dockerfile is a multi-stage Dockerfile.
You might want to review: Use cache in a docker multi stage build for faster builds.

The Dockerfile includes 3 stages:

  1. compile the contracts
  2. compile the go application
  3. package the compiled go application

#==================
# Stage1: contracts
#==================

FROM ubuntu:18.04 as contracts-compiler

# install wget
RUN apt-get update && \
    apt-get install -y software-properties-common wget && \
    rm -rf /var/lib/apt/lists/*

# install solc
RUN add-apt-repository ppa:ethereum/ethereum && \
    apt-get update && \
    apt-get install -y solc

# install geth tools
RUN mkdir /geth_extract
ARG SOLIDITY_ALL_TOOLS
RUN wget --progress=dot:giga https://gethstore.blob.core.windows.net/builds/${SOLIDITY_ALL_TOOLS} -O /geth_extract/tools.tar.gz
RUN tar xvzf /geth_extract/tools.tar.gz -C /geth_extract
RUN rm /geth_extract/tools.tar.gz
RUN mv /geth_extract/* /geth_extract/extracted
RUN mv -v /geth_extract/extracted/* /usr/local/bin
RUN rm -rf /geth_extract

ADD ./src/PriceContract.sol /contracts/PriceContract.sol
WORKDIR /contracts
RUN solc --abi --output-dir /compiled PriceContract.sol
RUN solc --bin --output-dir /compiled PriceContract.sol

WORKDIR /compiled
RUN abigen --bin PriceContract.bin --abi PriceContract.abi --pkg=price --out=PriceContract.go

#================
# Stage2: compile
#================

FROM golang:1.12 AS go-compiler

RUN apt-get update && \
    apt-get install -y git

WORKDIR /src
ENV GOPATH=/go
ENV GOBIN=/go/bin

# get dependencies
COPY ["./src/go.mod", "./src/go.sum", "/src/"]
RUN GOARCH=amd64 GOOS=linux CGO_ENABLED=0 go mod download

# compile source
ADD ./src /src

# copy generated contract
RUN mkdir -p /src/generated
COPY --from=contracts-compiler /compiled/PriceContract.go /src/generated/PriceContract.go

RUN GOARCH=amd64 GOOS=linux CGO_ENABLED=0 go build -a -installsuffix cgo -o my-go-application

#================
# Stage3: package
#================

FROM ubuntu:18.04
COPY --from=go-compiler /src/my-go-application /my-go-application
WORKDIR /
ENTRYPOINT ["/my-go-application"]
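
To build the image, pass the tools archive name as a build argument, for example (the image tag is just an example):

docker build \
  --build-arg SOLIDITY_ALL_TOOLS=geth-alltools-linux-amd64-1.9.9-01744997.tar.gz \
  -t my-go-application:latest .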


Summary

In this article we have reviewed how to use an Ethereum contract in a Go based application, including deploying the contract, using the contract methods, and building the application.
We've also included a method for tracking the contract transactions.

Wednesday, December 4, 2019

Creating a kubernetes controller


This post describes a kubernetes controller that I've created.
I've reviewed several guides for kubernetes controller creation, such as Extending Kubernetes, Kubernetes Custom Controller, and this. I've found myself overwhelmed by the complexity of such a simple requirement. This post presents the KISS gist of this blog: when you need to perform a task, ask yourself, does it really need to be so complex? Do I need all this monster for a simple task?

Many controllers are based on kubernetes Custom Resource Definition (aka CRD).
CRDs are indeed nice, but they make the implementation much more complex, as you will have to use code generator tools.
If this is an internal solution, do you really need a CRD? Why not use a simple ConfigMap?

In the solution presented below, I am using a ConfigMap that configures the number of StatefulSets to create. The ConfigMap also contains the template that will be used for the StatefulSet creation.
An example of the ConfigMap is:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    resource-type: "primary"
  name: primary-config
  namespace: default
data:
  general.yaml: |-
    count: 2
  statefulSet.yaml: "apiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n  name: my-statefulset-___sequence___\n
    \ labels:\n    resource-type: \"secondary\"    \n    app.kubernetes.io/instance
    : bouncer\n    app.kubernetes.io/name : bouncer\nspec:\n  
 
 ...

The controller's main code is the merge logic, which fetches the primary ConfigMap and creates/deletes StatefulSets accordingly. This is done by the following logic:

  • Get the primary ConfigMap
  • Load the existing StatefulSets
  • Loop until the required count in the primary ConfigMap
    • Create the StatefulSet if it does not exist
  • Delete all StatefulSets that were not encountered in the loop above

package mypackage

import (
  "fmt"
  "gopkg.in/yaml.v2"
  apps "k8s.io/api/apps/v1"
  core "k8s.io/api/core/v1"
  "k8s.io/client-go/kubernetes/scheme"
  "strconv"
  "strings"
  "sync"
)

type Merger struct {
  updates          chan int
  api              *K8sApi
  primaryConfig   *core.ConfigMap
  currentEntities  []apps.StatefulSet
  requiredEntities []apps.StatefulSet
}

func (merger *Merger) Init() {
  merger.updates = make(chan int, 100)
  go func() {
    for {
      <-merger.updates
      // drain any pending notifications; a single merge handles them all
      for len(merger.updates) > 0 {
        <-merger.updates
      }
      merger.merge()
    }
  }()
}

const MessageUpdate = 1

func (merger *Merger) NotifyUpdate() {
  merger.updates <- MessageUpdate
}

func (merger *Merger) merge() {
  configMaps := merger.api.configMap.FetchByLabels("resource-type=primary")
  if len(configMaps) == 0 {
    Fatal("primary ConfigMap not found")
  }
  merger.primaryConfig = &configMaps[0]

  merger.currentEntities = merger.api.statefulSet.FetchByLabels("resource-type=secondary")
  merger.requiredEntities = []apps.StatefulSet{}
  count := merger.getRequiredCount()
  for i := 0; i < count; i++ {
    merger.mergeSequence(i)
  }
  merger.deleteNonUsedStatefulSets()
}

func (merger *Merger) deleteNonUsedStatefulSets() {
  usedNames := map[string]bool{}
  for _, s := range merger.requiredEntities {
    usedNames[s.Name] = true
  }
  for _, s := range merger.currentEntities {
    if !usedNames[s.Name] {
      merger.api.statefulSet.Delete(s.Name)
    }
  }
}

func (merger *Merger) mergeSequence(sequence int) {
  statefulSetText := merger.getStatefulSetUpdatedText(sequence)
  decode := scheme.Codecs.UniversalDeserializer().Decode
  obj, _, err := decode([]byte(statefulSetText), nil, nil)
  if err != nil {
    Fatal("decode statefulSetText failed, %v", err)
  }
  statefulSet, ok := obj.(*apps.StatefulSet)
  if !ok {
    Fatal("cast statefulSet failed")
  }

  merger.requiredEntities = append(merger.requiredEntities, *statefulSet)
  if merger.statefulSetExists(statefulSet.Name) {
    return
  }

  merger.api.statefulSet.Create(statefulSet)
}

func (merger *Merger) getStatefulSetUpdatedText(sequence int) string {
  statefulSet := merger.primaryConfig.Data["statefulSet.yaml"]
  return merger.templateReplace(statefulSet, "sequence", strconv.Itoa(sequence))
}

func (merger *Merger) templateReplace(text string, from string, to string) string {
  return strings.Replace(text, fmt.Sprintf("___%v___", from), to, -1)
}

func (merger *Merger) getRequiredCount() int {
  generalConfig := merger.getGeneralConfig()
  countStr := generalConfig["count"]
  count, err := strconv.Atoi(fmt.Sprintf("%v", countStr))
  if err != nil {
    Fatal("parse of count %v failed %v", countStr, err)
  }
  return count
}

func (merger *Merger) getGeneralConfig() map[interface{}]interface{} {
  general := merger.primaryConfig.Data["general.yaml"]
  generalConfig := make(map[interface{}]interface{})
  err := yaml.Unmarshal([]byte(general), &generalConfig)
  if err != nil {
    Fatal("error parsing configMap yaml %v", err)
  }
  return generalConfig
}

func (merger *Merger) statefulSetExists(name string) bool {
  for _, statefulSet := range merger.currentEntities {
    if statefulSet.Name == name {
      return true
    }
  }
  return false
}

All we need now is to activate NotifyUpdate upon a change.
For this purpose we use the kubernetes watch API. We should watch for both ConfigMap and StatefulSet changes. Watching the StatefulSets, for example, is done by the following:

func (api *K8sStatefulSetApi) WatchByLabels(labels string, notifier *Notifier) {
  listOptions := meta.ListOptions{
    LabelSelector: labels,
  }
  // client is the *kubernetes.Clientset initialized earlier
  watcher, err := client.AppsV1().StatefulSets("default").Watch(listOptions)
  if err != nil {
    Fatal("watch kubernetes statefulSet failed: %v", err)
  }
  ch := watcher.ResultChan()
  go func() {
    Verbose("watcher loop for statefulSet established")
    for event := range ch {
      statefulSet, ok := event.Object.(*apps.StatefulSet)
      if !ok {
        Fatal("unexpected event in watcher channel")
      }

      Verbose("got event for statefulSet %v", statefulSet.Name)
      (*notifier)()
    }
  }()
}
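
A minimal wiring sketch of the pieces above might look like the following; the K8sApi initialization is not shown in this post, and the Notifier type is assumed to be a plain func():

func main() {
  api := &K8sApi{} // placeholder: the kubernetes clients are initialized inside
  merger := &Merger{api: api}
  merger.Init()

  // any change to the primary ConfigMap or to the StatefulSets triggers a merge
  notifier := Notifier(merger.NotifyUpdate)
  api.configMap.WatchByLabels("resource-type=primary", &notifier)
  api.statefulSet.WatchByLabels("resource-type=secondary", &notifier)

  merger.NotifyUpdate() // trigger an initial merge
  select {}             // block forever
}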


Summary

In this post a simple kubernetes controller based on a ConfigMap was presented.
The primary resource is the configMap, marked by the label: "resource-type=primary"
The secondary resources are the StatefulSets, marked by the label: "resource-type=secondary"
This controller does not use a CRD, hence it does not require any code generation tools, and allows a KISS implementation.


Use cache in a docker multi stage build


Docker multi-stage build (see the docker official documentation) provides the ability to both compile and create the final image in a single Dockerfile.
The logic is as follows:

Stage 1:
  • Use the related compiler docker image
  • Compile the sources
  • Create the final executable binary file

Stage 2:
  • Use the production related docker image (e.g. ubuntu/alpine)
  • Copy the executable binary file from stage 1
  • Run any additional required final image preparation steps


For example, a two-stage build for a Go image is:
# Stage1: compile
FROM golang:1.12 AS build-env
RUN apt-get update && apt-get install -y git

WORKDIR /src
ENV GOPATH=/go
ENV GOBIN=/go/bin
COPY ["./src/go.mod", "./src/go.sum", "/src/"]
RUN GOARCH=amd64 GOOS=linux CGO_ENABLED=0 go mod download
ADD ./src /src
RUN GOARCH=amd64 GOOS=linux CGO_ENABLED=0 go build -a -installsuffix cgo -o final-binary

# Stage2: package
FROM ubuntu:18.04
COPY --from=build-env /src/final-binary /final-binary
WORKDIR /
ENTRYPOINT ["/final-binary"]

However, when running the docker build, I've noticed that the first stage keeps running every time for a long time, even though I did not change anything in the source.
This is because docker multi stage build does not keep cache for the intermediate stages, as described here.

The solution I've used is to split the docker build into two builds.
The first build docker image is not used, but exists only for caching purpose.
So the actual final build contains the following files:

  • Dockerfile_stage1
  • Dockerfile_stage2
  • build.sh
Let's review the files. First, the Dockerfile_stage1, which includes the first section of the original Dockerfile.

# Stage1: compile
FROM golang:1.12 AS build-env
RUN apt-get update && apt-get install -y git

WORKDIR /src
ENV GOPATH=/go
ENV GOBIN=/go/bin
COPY ["./src/go.mod", "./src/go.sum", "/src/"]
RUN GOARCH=amd64 GOOS=linux CGO_ENABLED=0 go mod download
ADD ./src /src
RUN GOARCH=amd64 GOOS=linux CGO_ENABLED=0 go build -a -installsuffix cgo -o final-binary

Next, the Dockerfile_stage2, which includes the second section of the original Dockerfile.

# Stage2: package
FROM ubuntu:18.04
COPY --from=build-env /src/final-binary /final-binary
WORKDIR /
ENTRYPOINT ["/final-binary"]

And last, the build.sh that does all the magic.

#!/usr/bin/env bash
docker build -f Dockerfile_stage1 -t "local/myimage-stage1:latest" .

cat Dockerfile_stage1 > Dockerfile_full
cat Dockerfile_stage2 >> Dockerfile_full

docker build -f Dockerfile_full -t "local/myimage:latest" .

rm -f Dockerfile_full


Summary

The docker multi-stage build caching issue can be easily bypassed using the method described above. This method also avoids the complicated steps described in other solutions, and it prevents duplicating code among the files.