Monday, October 25, 2021

Trigger an AWS CodeBuild Project by Push to CodeCommit



In this post we will review how to trigger an AWS CodeBuild project as an automatic response to an AWS CodeCommit push. See also my previous posts about setting up a CodeBuild project below.

Personally, I've found it weird that this is not built-in functionality in AWS. I also found the documentation lacking a simple, straightforward description of how to do it. I did find one post about it, but it was complicated and a bit out of scope.

The general idea is to run a lambda that is triggered by a push to CodeCommit. The lambda runs NodeJS code that launches our CodeBuild project.

We will use a CloudFormation stack to accomplish this. First, we create a role. In the AssumeRolePolicyDocument element we allow the AWS Lambda service to use this role, and in the Policies element we grant the role permission to start any CodeBuild project.



triggerLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    Tags:
      - Key: Name
        Value: codecommit-trigger
    Path: "/"
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: AllowLambdaServiceToAssumeRole
          Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - lambda.amazonaws.com
    Policies:
      - PolicyName: build-starter
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - codebuild:StartBuild
              Resource:
                - "*"



Next we create a lambda function that uses the role mentioned above. The lambda function in this example starts two CodeBuild projects.



triggerBuildLambda:
  Type: AWS::Lambda::Function
  Properties:
    Description: Start build upon CodeCommit trigger
    Runtime: nodejs12.x
    Role: !GetAtt triggerLambdaRole.Arn
    Tags:
      - Key: Name
        Value: codecommit-trigger
    Handler: index.handler
    Code:
      ZipFile: |
        const AWS = require('aws-sdk')
        const codeBuild = new AWS.CodeBuild()
        exports.handler = async function(event, context) {
          console.log("event received:\n" + JSON.stringify(event))
          var promises = []
          promises.push(codeBuild.startBuild({projectName: "my-project-1"}).promise());
          promises.push(codeBuild.startBuild({projectName: "my-project-2"}).promise());
          const results = await Promise.all(promises);
          console.log("response is " + JSON.stringify(results));
        }



The last section of the CloudFormation stack grants CodeCommit permission to invoke the lambda function.



Parameters:

  ParameterAccountId:
    Type: String
    Default: "123456789012"

Resources:

  permitCodeCommitToRunLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt triggerBuildLambda.Arn
      Action: lambda:InvokeFunction
      Principal: codecommit.amazonaws.com
      SourceAccount: !Ref ParameterAccountId
      SourceArn: !Join
        - ":"
        - - arn:aws:codecommit:us-east-1
          - !Ref ParameterAccountId
          - my-code-commit
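
With the role, the lambda, and the permission in place, we can deploy the stack and invoke the lambda manually to verify it starts the builds. A minimal sketch, assuming the template is saved as trigger.yaml and the stack is named codecommit-trigger (both names are my own, not from the template):

# Deploy the stack; CAPABILITY_IAM is required because it creates an IAM role.
aws cloudformation deploy \
  --template-file trigger.yaml \
  --stack-name codecommit-trigger \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides ParameterAccountId=123456789012

# CloudFormation generates the lambda's name, so resolve it from the stack.
functionName=$(aws cloudformation describe-stack-resource \
  --stack-name codecommit-trigger \
  --logical-resource-id triggerBuildLambda \
  --query StackResourceDetail.PhysicalResourceId --output text)

# Invoke the lambda with an empty event (CLI v2 needs the binary-format flag).
aws lambda invoke \
  --function-name "${functionName}" \
  --cli-binary-format raw-in-base64-out \
  --payload '{}' out.json
cat out.json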



Once this is done, we need to add a trigger in the CodeCommit repository to run the lambda function. I prefer not to update the CodeCommit repository with a CloudFormation stack, because I want to avoid removal of the stack by mistake. Unlike other stacks that can easily be recovered by rerunning them, a CodeCommit repository's sources are lost forever.

To update the CodeCommit repository, click on the repository in the AWS console, click Settings in the left menu, and then click the Create Trigger button.




Next, select the events that should activate the lambda. I have chosen only the Push existing branch event. In the service section, select AWS Lambda, and select the lambda that we have created. I strongly recommend using the Test Trigger button to check that all these components play well together.
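
The same trigger can also be created and tested from the CLI instead of the console. A sketch, assuming the repository my-code-commit and a placeholder lambda ARN (substitute the ARN of the function created by the stack):

lambdaArn=arn:aws:lambda:us-east-1:123456789012:function:<generated-function-name>

# Attach a trigger that fires the lambda on pushes to existing branches;
# an empty branches list applies the trigger to all branches.
aws codecommit put-repository-triggers \
  --repository-name my-code-commit \
  --triggers "[{\"name\":\"trigger-build\",\"destinationArn\":\"${lambdaArn}\",\"branches\":[],\"events\":[\"updateReference\"]}]"

# Send a sample event, equivalent to the console's Test Trigger button.
aws codecommit test-repository-triggers \
  --repository-name my-code-commit \
  --triggers "[{\"name\":\"trigger-build\",\"destinationArn\":\"${lambdaArn}\",\"branches\":[],\"events\":[\"updateReference\"]}]"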





Final Note

As a side comment, Google Cloud does provide this functionality built in.

In general, Google Cloud supplies an easy way to do simple things, while AWS supplies a complicated way to do anything. I believe this is the key difference between these two cloud platforms.


Monday, October 18, 2021

Create AWS DynamoDB using CloudFormation and a Sample Golang Application





In this post we will use CloudFormation to set up a DynamoDB table, and then access it using a sample Go application.


To set up the DynamoDB table, we will use the following CloudFormation stack.


dynamoDBTable:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    TableName: "my-table"
    AttributeDefinitions:
      - AttributeName: "mykey"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "mykey"
        KeyType: "HASH"


Notice that we specify the key attribute twice. The first time we define its type, which in this case is a string ("S"). The second time we specify that this is a key attribute. Other attributes will be automatically added by DynamoDB once items with new attributes are created. An exception to this is a "RANGE" attribute, which, if required, should also be specified here.

The billing mode is PAY_PER_REQUEST, which is great if you have no idea about the expected read/write load on the table.
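
Once the stack is deployed, we can confirm the resulting key schema and billing mode from the CLI:

# Inspect the table definition that the stack created.
aws dynamodb describe-table \
  --table-name my-table \
  --query 'Table.[KeySchema,AttributeDefinitions,BillingModeSummary]'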


Any other service that accesses the DynamoDB table must be granted permission to access it. For example, to grant an ECS task role access to the DynamoDB table, use the following rather too permissive policy. In case of need, limit the actions to a smaller set, e.g.:

  • dynamodb:GetItem
  • dynamodb:PutItem
  • dynamodb:Query


taskIamRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: my-role
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: 'sts:AssumeRole'

taskIamPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: my-policy
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - dynamodb:*
          Resource: arn:aws:dynamodb:*:*:table/my-table
    Roles:
      - !Ref taskIamRole



To access DynamoDB from a Go application, first get a DynamoDB API interface:



import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
    "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)


config := aws.Config{
    Region: aws.String("us-east-1"),
}

awsSession, err := session.NewSession(&config)
if err != nil {
    panic(err)
}

dynamoDbApi := dynamodb.New(awsSession)



Now we can add items to the table using the PutItem API. Notice that we define a structure that maps its fields to the attribute names that we want to have in the DynamoDB table.



// Item maps Go struct fields to DynamoDB attributes; the dynamodbav tag
// binds the Key field to the table's "mykey" key attribute.
type Item struct {
    Key  string `dynamodbav:"mykey"`
    Data int    `dynamodbav:"data"`
}

item := Item{
    Key:  "key1",
    Data: 123,
}
mappedItem, err := dynamodbattribute.MarshalMap(item)
if err != nil {
    panic(err)
}

query := dynamodb.PutItemInput{
    Item:      mappedItem,
    TableName: aws.String("my-table"),
}

_, err = dynamoDbApi.PutItem(&query)
if err != nil {
    panic(err)
}
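
To verify that the write landed, we can read the item back, here using the CLI rather than Go. This assumes the table and key used above:

# Fetch the item by its "mykey" hash attribute.
aws dynamodb get-item \
  --table-name my-table \
  --key '{"mykey": {"S": "key1"}}'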



Final Note


While it is not cheap, AWS DynamoDB supplies an easy API and great performance for an application. I recommend using it in case your DB API rate is moderate.

Wednesday, October 13, 2021

Build S3 Website





In this post we will review the steps to set up an AWS S3-based website.


First, let's use a CloudFormation stack to set up the S3 bucket and configure it with public read access.



Resources:
  s3AgentSite:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      BucketName: my-bucket-site-example
      WebsiteConfiguration:
        IndexDocument: index.html



Next, we can create a CodeBuild project to build our code. See my previous post about setting up a CodeBuild project using CloudFormation. Notice that the CodeBuild project also requires permissions to update the S3 bucket, hence the CodeBuild stack should also include the following permissions:



- Effect: Allow
  Action:
    - s3:*
  Resource:
    - arn:aws:s3:::my-bucket-site-example/*



Now, let's handle the build itself. The buildspec.yaml requires nodejs to be included as part of the build container.



version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: latest
  build:
    commands:
      - ./build.sh



The build script runs the npm build, and then copies the results to the S3 bucket using the aws s3 sync CLI.



npm i
npm run build
aws s3 sync --acl public-read ./public s3://my-bucket-site-example
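
After the sync completes, the site should be served from the bucket's website endpoint. A quick smoke test, assuming the bucket resides in us-east-1:

# S3 website endpoints follow the <bucket>.s3-website-<region>.amazonaws.com pattern.
curl -i http://my-bucket-site-example.s3-website-us-east-1.amazonaws.com/index.html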




Monday, October 11, 2021

Creating ECS service using CloudFormation




In this post we will review a CloudFormation stack to create an ECS service. 

Using ECS, we can quickly deploy our docker-based service to the cloud. In this example we will use the ECS Fargate mode, which is a serverless deployment.


We start with a VPC deployment, including two subnets and an internet gateway allowing access to the internet.



vpc:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 10.0.0.0/16
    Tags:
      - Key: Name
        Value: backend-vpc

subnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref vpc
    CidrBlock: 10.0.1.0/24
    AvailabilityZone: us-east-1a
    Tags:
      - Key: Name
        Value: backend-subnet1

subnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref vpc
    CidrBlock: 10.0.2.0/24
    AvailabilityZone: us-east-1b
    Tags:
      - Key: Name
        Value: backend-subnet2

internetGateway:
  Type: AWS::EC2::InternetGateway
  DependsOn: vpc
  Properties:
    Tags:
      - Key: Name
        Value: backend-igw

attachGateway:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    VpcId: !Ref vpc
    InternetGatewayId: !Ref internetGateway

routeTable1:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref vpc
    Tags:
      - Key: Name
        Value: backend-route-table1

routeTable2:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref vpc
    Tags:
      - Key: Name
        Value: backend-route-table2

routeTableAssociate1:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    SubnetId: !Ref subnet1
    RouteTableId: !Ref routeTable1

routeTableAssociate2:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    SubnetId: !Ref subnet2
    RouteTableId: !Ref routeTable2

publicRoute1:
  Type: AWS::EC2::Route
  DependsOn: attachGateway
  Properties:
    RouteTableId: !Ref routeTable1
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref internetGateway

publicRoute2:
  Type: AWS::EC2::Route
  DependsOn: attachGateway
  Properties:
    RouteTableId: !Ref routeTable2
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref internetGateway

vpcSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    VpcId: !Ref vpc
    GroupDescription: vpcSecurityGroup
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
    Tags:
      - Key: Name
        Value: backend-vpc-security-group
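
Before adding anything on top, it is worth sanity-checking the network pieces. For example, we can list what the stack created by its Name tags:

# List the VPC and subnets created by the stack, filtered by their Name tags.
aws ec2 describe-vpcs --filters Name=tag:Name,Values=backend-vpc
aws ec2 describe-subnets --filters "Name=tag:Name,Values=backend-subnet1,backend-subnet2"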



An optional but recommended step is to add a bastion server, that is, a server we can SSH into to check connections within the VPC.



Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2

Resources:

  bastionSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref vpc
      GroupDescription: bastionSecurityGroup
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
      Tags:
        - Key: Name
          Value: bastion-vpc-security-group

  bastionServer1:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: !Ref LatestAmiId
      SubnetId: !Ref subnet1
      KeyName: ec2
      SecurityGroupIds:
        - !Ref bastionSecurityGroup
      Tags:
        - Key: Name
          Value: bastion-server1

  elasticIP1:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
      InstanceId: !Ref bastionServer1
      Tags:
        - Key: Name
          Value: bastion-elastic-ip1
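
Once the bastion is up, we can SSH into it with the ec2 key pair referenced above and probe connectivity from inside the VPC. A sketch, assuming the private key is saved as ec2.pem, with the elastic IP taken from the stack output:

# Connect to the bastion using the "ec2" key pair from the stack.
ssh -i ec2.pem ec2-user@<bastion-elastic-ip>

# From inside the bastion, verify a service container answers on its port
# (replace with a task's private IP from the ECS console):
curl http://<task-private-ip>:8080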


The last thing we need to do is to create a load balancer allowing access to the service, and an ECS service that maintains our desired number of containers. Note that the task definition includes the tag of the image that we want to run; in this case we use an image from AWS ECR. To view the logs of the containers, we configure a CloudWatch log group.

Notice that the security group enables access to the containers both from the load balancer and from the bastion server.




loadBalancerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: loadBalancerSecurityGroup
    VpcId: !Ref vpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0

loadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Name: backend-alb
    Scheme: internet-facing
    SecurityGroups:
      - !Ref loadBalancerSecurityGroup
    Subnets:
      - !Ref subnet1
      - !Ref subnet2

targetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Name: backend-vpc-target-group
    Port: 8080
    Protocol: HTTP
    TargetType: ip
    VpcId: !Ref vpc

loadBalancerListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref loadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - TargetGroupArn: !Ref targetGroup
        Type: forward

taskExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: backend-task-role
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: 'sts:AssumeRole'
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

cloudwatchLogsGroup:
  Type: 'AWS::Logs::LogGroup'
  Properties:
    LogGroupName: backend-log-group
    RetentionInDays: 3

taskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    NetworkMode: awsvpc
    ExecutionRoleArn: !Ref taskExecutionRole
    RequiresCompatibilities:
      - FARGATE
    Cpu: 256
    Memory: 512
    ContainerDefinitions:
      - Name: origin
        Image: MY-ACCOUNT-NUMBER.dkr.ecr.us-east-1.amazonaws.com/MY-IMAGE:latest
        PortMappings:
          - ContainerPort: 8080
            Protocol: tcp
        Environment:
          - Name: PORT
            Value: 8080
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Ref cloudwatchLogsGroup
            awslogs-region: us-east-1
            awslogs-stream-prefix: origin

ecsCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: backend-cluster
    CapacityProviders:
      - FARGATE
      - FARGATE_SPOT
    DefaultCapacityProviderStrategy:
      - CapacityProvider: FARGATE
        Weight: 1
      - CapacityProvider: FARGATE_SPOT
        Weight: 1

containerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: containerSecurityGroup
    VpcId: !Ref vpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 8080
        ToPort: 8080
        SourceSecurityGroupId: !Ref loadBalancerSecurityGroup
      - IpProtocol: tcp
        FromPort: 8080
        ToPort: 8080
        SourceSecurityGroupId: !Ref bastionSecurityGroup

ecsService:
  Type: AWS::ECS::Service
  DependsOn: loadBalancerListener
  Properties:
    Cluster: !Ref ecsCluster
    DesiredCount: 2
    TaskDefinition: !Ref taskDefinition
    LaunchType: FARGATE
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED
        Subnets:
          - !Ref subnet1
          - !Ref subnet2
        SecurityGroups:
          - !Ref containerSecurityGroup
    LoadBalancers:
      - ContainerName: origin
        ContainerPort: 8080
        TargetGroupArn: !Ref targetGroup
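
Once the stack is deployed, the service can be verified end to end through the load balancer. A quick check, assuming the resource names above:

# Resolve the load balancer DNS name and call the service through it.
albDns=$(aws elbv2 describe-load-balancers \
  --names backend-alb \
  --query 'LoadBalancers[0].DNSName' --output text)
curl -i "http://${albDns}/"

# The service name is generated by CloudFormation, so list it first,
# then check its desired vs. running task counts.
serviceArn=$(aws ecs list-services --cluster backend-cluster \
  --query 'serviceArns[0]' --output text)
aws ecs describe-services --cluster backend-cluster --services "${serviceArn}" \
  --query 'services[0].[desiredCount,runningCount]'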



Final Notes


In ECS Fargate mode, we do not need to manage the EC2 instances, but we still need to manage the VPC. Somehow I expected the VPC to be auto-managed by AWS in Fargate mode.

Also notice that the CloudWatch logs "suffer" from a delay of a few minutes.

Thursday, October 7, 2021

Building Docker Images Using CodeBuild




In the previous post, we used CloudFormation to create all the AWS entities required for our CodeBuild project.

Now, we can finally write the actual docker build code.

We'll start with the simplest form of buildspec.yaml, which runs a shell file from the CodeCommit Git repository. This is great, since we can update the build without changing anything in AWS itself, simply by updating the source in Git.


buildspec.yaml

version: 0.2

phases:
  build:
    commands:
      - ./build_image.sh



The build image shell file does the following:

  • Authenticates to the ECR
  • Pulls the previously built docker image, to enable using the docker cache for the current build
  • Builds the current image
  • Pushes the image to the ECR


build_image.sh

#!/bin/bash

set -e

ecr=${awsAccount}.dkr.ecr.${awsRegion}.amazonaws.com
localTag=${imageName}:${imageVersion}
remoteTag=${ecr}/${imageName}:${imageVersion}

log(){
  message=$@
  NOW=$(date +"%m-%d-%Y %T")
  echo "${NOW} ${message}"
}

build(){
  log "Pull cached image"
  set +e
  docker pull ${remoteTag}
  set -e

  log "Build docker image"
  docker build --cache-from ${remoteTag} --tag ${localTag} images/${imageName}
  docker tag ${localTag} ${remoteTag}

  log "Push image"
  docker push ${remoteTag}
}

authenticate(){
  log "Authenticate to docker"
  aws ecr get-login-password --region ${awsRegion} | docker login --username AWS --password-stdin ${ecr}
}

log "Starting build of image ${remoteTag}"
authenticate
build
log "Done"
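
Notice that the script depends only on the four environment variables that the CodeBuild project injects (awsAccount, awsRegion, imageName, imageVersion), so it can also be dry-run from a workstation that has docker and the aws CLI configured; for example:

# Provide the same variables CodeBuild passes in via EnvironmentVariables.
export awsAccount=123456789012
export awsRegion=us-east-1
export imageName=my-image
export imageVersion=latest

./build_image.sh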



Final Note


There are other items to cover as part of AWS CodeBuild, such as build triggers, wrappers to run builds and more. I will cover these in future posts.



Wednesday, October 6, 2021

Setup CodeBuild Project using CloudFormation




In this post we will review a CloudFormation stack to create all the required AWS resources for a new CodeBuild project.


We assume that we already have a Git repository stored in AWS CodeCommit, including a Dockerfile that builds the related image.


First we specify the stack parameters:

  • The AWS account ID
  • The region
  • The CodeCommit repository name
  • The name of the image to be built by this project



Parameters:

  ParameterAccountId:
    Type: String
    Default: "123456789012"
  ParameterRegion:
    Type: String
    Default: us-east-1
  ParameterCodeCommitName:
    Type: String
    Default: my
  ParameterImageName:
    Type: String
    Default: my-image



The stack now specifies the resources. We will review each resource.



Resources:



The first resource is the ECR repository. We create a dedicated repository for this project.



ecrRepository:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryName: !Ref ParameterImageName



Another requirement is the S3 bucket used for storing the CodeBuild logs. We use a lifecycle policy to remove logs older than a week.



s3LogsBucket:
  Type: AWS::S3::Bucket
  Properties:
    AccessControl: Private
    BucketName: !Join
      - "-"
      - - my-codebuild-logs
        - !Ref ParameterImageName
    LifecycleConfiguration:
      Rules:
        - Id: DeleteOldFiles
          Status: Enabled
          ExpirationInDays: 7



Now we can create a role that the CodeBuild project uses. It includes a policy allowing the following:

  • Pull images from the ECR (for caching based on the previous build)
  • Push images to the ECR
  • Pull the code from CodeCommit
  • Save logs to the S3 bucket



codebuildProjectRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Join
      - "-"
      - - my-codebuild-role
        - !Ref ParameterImageName
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - codebuild.amazonaws.com
    Policies:
      - PolicyName: !Join
          - "-"
          - - my-codebuild-policy
            - !Ref ParameterImageName
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - ecr:BatchCheckLayerAvailability
                - ecr:BatchGetImage
                - ecr:CompleteLayerUpload
                - ecr:DescribeImages
                - ecr:DescribeRepositories
                - ecr:DescribeImageScanFindings
                - ecr:GetAuthorizationToken
                - ecr:GetDownloadUrlForLayer
                - ecr:GetRepositoryPolicy
                - ecr:InitiateLayerUpload
                - ecr:ListImages
                - ecr:ListTagsForResource
                - ecr:PutImage
                - ecr:UploadLayerPart
              Resource:
                - "*"
            - Effect: Allow
              Action:
                - codecommit:GitPull
              Resource:
                - !Join
                  - ":"
                  - - arn:aws:codecommit
                    - !Ref ParameterRegion
                    - !Ref ParameterAccountId
                    - !Ref ParameterCodeCommitName
            - Effect: Allow
              Action:
                - s3:*
              Resource:
                - !Join
                  - "/"
                  - - !GetAtt s3LogsBucket.Arn
                    - "*"



Finally, we can configure the actual CodeBuild project, which pulls the source from CodeCommit and runs the build according to the buildspec.yaml located in the root of the CodeCommit Git repository. We pass environment variables to the buildspec.yaml, enabling it to use them in its shell commands.



codeBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: !Join
      - "-"
      - - my-codebuild-project
        - !Ref ParameterImageName
    Source:
      Type: CODECOMMIT
      Location: !Join
        - "/"
        - - "https:/"
          - !Join
            - "."
            - - git-codecommit
              - !Ref ParameterRegion
              - amazonaws.com
          - v1/repos
          - !Ref ParameterCodeCommitName
      BuildSpec: buildspec.yaml
    Artifacts:
      Type: NO_ARTIFACTS
    Environment:
      Type: LINUX_CONTAINER
      Image: aws/codebuild/standard:4.0
      ComputeType: BUILD_GENERAL1_SMALL
      PrivilegedMode: true
      EnvironmentVariables:
        - Name: awsRegion
          Value: !Ref ParameterRegion
          Type: PLAINTEXT
        - Name: awsAccount
          Value: !Ref ParameterAccountId
          Type: PLAINTEXT
        - Name: imageName
          Value: !Ref ParameterImageName
          Type: PLAINTEXT
        - Name: imageVersion
          Value: latest
          Type: PLAINTEXT
    ServiceRole: !GetAtt codebuildProjectRole.Arn
    LogsConfig:
      CloudWatchLogs:
        Status: DISABLED
      S3Logs:
        Status: ENABLED
        Location: !GetAtt s3LogsBucket.Arn
        EncryptionDisabled: true
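
To try the stack out, deploy it and kick off a build manually. A minimal sketch, assuming the template is saved as codebuild.yaml (a file name of my own choosing) and the default parameter values above:

# Create the ECR repository, logs bucket, role, and CodeBuild project.
aws cloudformation deploy \
  --template-file codebuild.yaml \
  --stack-name my-codebuild \
  --capabilities CAPABILITY_NAMED_IAM

# The project name is my-codebuild-project-<image name>, per the Name join above.
aws codebuild start-build --project-name my-codebuild-project-my-image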


In the next post, I will provide an example of a buildspec.yaml, and an example of how to build the image and push it to the ECR.