
Monday, November 24, 2025

CI/CD in a Shell


 

Recently I had to create a CI/CD pipeline for a new project whose source repository was in Bitbucket. There are standard methods to handle this, using triggers from Bitbucket, AWS CodeBuild, and AWS CodePipeline. However, I had only read permissions on the Bitbucket repository, and hence was limited in my ability to use the standard tools. I decided to create the CI/CD in Bash, and surprisingly I found it extremely simple, as well as cheaper and faster than the standard tools. I am aware of the downsides of using scripts for such processes, such as lack of visibility, redundancy, and standards, but still the result was so good that I think startup projects should definitely consider it.

Listed below are the shell-based CI/CD components.


The Poll Script

The poll script runs on a t3a.nano EC2 instance, which costs ~$3/month.

It polls the Bitbucket repository every 5 minutes, and once a change on the deployment-related branch is detected, it starts the builder EC2 VM and runs the build and deploy script.

#!/bin/bash

set -eE

instanceId=""
publicIp=""
intervalSeconds=300

cleanup() {
    if [ -n "${instanceId}" ]; then
        echo "Stopping instance: ${instanceId}"
        if ! aws ec2 stop-instances --instance-ids "${instanceId}"; then
            echo "Warning: Failed to stop instance ${instanceId}. Will retry on next run."
        else
            echo "Instance stopped successfully."
        fi
        instanceId=""
    fi
}

restart_script() {
    echo "Command '$BASH_COMMAND' failed with exit code $?"
    cleanup
    echo "Restarting soon..."
    sleep ${intervalSeconds}
    exec "$0" "$@"
}

trap 'restart_script "$@"' ERR

runBuild() {
    trap cleanup RETURN

    instanceId=$(aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=my-builder-vm" \
        --query "Reservations[*].Instances[*].InstanceId" \
        --output text)

    echo "Starting instance: ${instanceId}"
    aws ec2 start-instances --instance-ids "${instanceId}"

    echo "Waiting for instance to be in 'running' state..."
    aws ec2 wait instance-running --instance-ids "${instanceId}"

    publicIp=$(aws ec2 describe-instances \
        --instance-ids "${instanceId}" \
        --query "Reservations[0].Instances[0].PublicIpAddress" \
        --output text)

    echo "Running build remote"
    ssh -o StrictHostKeyChecking=no ec2-user@"${publicIp}" /home/ec2-user/build/my-repo/deploy/aws/production/deploy.sh

    cleanup
    echo "Build done"
}

checkOnce() {
    echo "Check run time: $(date)"
    commitFilePath=/home/ec2-user/build/last_commit.txt
    latestCommit=$(git ls-remote git@bitbucket.org:my-project/my-repo.git my-deploy-branch | awk '{print $1}')
    echo "Latest commit: ${latestCommit}"

    lastCommit=$(cat "${commitFilePath}" 2>/dev/null || echo "")
    echo "Last deployed: ${lastCommit}"

    if [ "${latestCommit}" != "${lastCommit}" ]; then
        echo "New commit detected, starting build"
        runBuild
        echo "${latestCommit}" > "${commitFilePath}"
        echo "last commit updated"
    else
        echo "No new commits"
    fi
}

while true; do
    checkOnce
    sleep ${intervalSeconds}
done
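
Note that the poll script assumes the poller VM can read the repository over SSH. A minimal sketch of that one-time setup, where the key file path and name are placeholders of my own choosing (the public key must be registered as a read-only access key in the Bitbucket repository settings):

# Generate a dedicated key for polling (path and name are placeholders)
ssh-keygen -t ed25519 -f /home/ec2-user/.ssh/bitbucket_poll -N ""

# Point SSH at the key for bitbucket.org
cat <<EOF >> /home/ec2-user/.ssh/config
Host bitbucket.org
    IdentityFile /home/ec2-user/.ssh/bitbucket_poll
EOF

# After registering the public key in Bitbucket, verify read access:
git ls-remote git@bitbucket.org:my-project/my-repo.git my-deploy-branch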


To make this script part of the poller VM instance startup, use the following:


sudo tee /etc/systemd/system/poll.service > /dev/null <<EOF
[Unit]
Description=Poll Script Startup
After=network.target

[Service]
Type=simple
ExecStart=/home/ec2-user/build/poll.sh
Restart=on-failure
User=ec2-user
WorkingDirectory=/home/ec2-user/build
StandardOutput=append:/home/ec2-user/build/output.txt
StandardError=append:/home/ec2-user/build/output.txt

[Install]
WantedBy=multi-user.target
EOF


sudo systemctl daemon-reload
sudo systemctl enable poll.service # auto-start on boot
sudo systemctl start poll.service # start immediately
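
To check that the poller is alive, the standard systemd tooling works, and the log file configured above collects all the script output:

sudo systemctl status poll.service
tail -f /home/ec2-user/build/output.txt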


The Build Script - Step 1

The build script runs on a c6i.4xlarge EC2 instance, which would cost ~$500/month if it ran continuously, but I don't care, since this instance runs only during the deployment itself, so the actual cost is very low here as well.


The script runs in the repository clone itself, which I manually cloned once after the EC2 creation. It only pulls the latest version and runs another "step 2" script to handle the build. The goal is to be able to pick up changes to the "step 2" script as part of the git pull.


#!/bin/bash
set -e

cd /home/ec2-user/build/my-repo
git checkout my-deploy-branch
git pull

./deploy_step2.sh


The Build Script - Step 2

The "step 2" script does the actual work: 

  1. Increments the build number
  2. Builds the docker images
  3. Login to the ECR
  4. Push the images to ECR
  5. Push a new tag to the GIT
  6. uses `helm upgrade` to upgrade the production deployment.


Notice that the EC2 instance uses an IAM role that enables it to access ECR and EKS without a user and password, for example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}
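
If the builder instance does not have such a role attached yet, the following AWS CLI sketch shows one way to do it; the role, profile, instance ID, and file names here are all placeholders of mine:

# The trust policy file (ec2-trust-policy.json) must allow ec2.amazonaws.com to assume the role
aws iam create-role --role-name my-builder-role --assume-role-policy-document file://ec2-trust-policy.json
aws iam put-role-policy --role-name my-builder-role --policy-name my-builder-policy --policy-document file://builder-policy.json
aws iam create-instance-profile --instance-profile-name my-builder-profile
aws iam add-role-to-instance-profile --instance-profile-name my-builder-profile --role-name my-builder-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=my-builder-profile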


The script is:

#!/bin/bash
set -e

export AWS_ACCOUNT=123456789012
export AWS_REGION=us-east-1
export AWS_DEFAULT_REGION=${AWS_REGION}
export EKS_CLUSTER_NAME=my-eks

rootFolder=/home/ec2-user/build
buildVersionFile=${rootFolder}/build_number.txt

if [[ -f "${buildVersionFile}" ]]; then
    lastBuildNumber=$(cat "${buildVersionFile}")
else
    lastBuildNumber=1000
fi
newBuildNumber=$((lastBuildNumber + 1))
echo "${newBuildNumber}" > "${buildVersionFile}"
echo "Build number updated to: ${newBuildNumber}"

./build_my_images.sh

aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com
RemoteTag=deploy-${newBuildNumber} ./push_images_to_ec2.sh

newTag=deploy-${newBuildNumber}
git tag ${newTag}
git push origin ${newTag}

DEPLOY_VERSION=":${newTag}" ./helm_deploy.sh

echo "I did it again!"


Final Note

This build system is super fast. Why? Because it uses the local Docker cache for the images. This means we do not require a Docker registry proxy to cache the images, which also keeps it cheap.

To sum up: don't use this for a big project, but for a startup you can definitely use it.




Monday, November 10, 2025

Microsoft External Threat Detection


 


In this post we review the steps to create an external security provider to protect Microsoft Copilot Studio based agents.

Most of this post is based on this article.

Before starting, prepare yourself: following Microsoft best practice, they've made it a super complex process, but in the end it is working, so that's good.


Provide a Service

We start by implementing a service following this guide.

In general this service should provide 2 endpoints: /validate and /analyze-tool-execution.

The /validate endpoint is used only to check the service health and the integration with Microsoft Authentication. For this post we will not implement Microsoft Authentication validation, hence a simple implementation of the /validate endpoint is:



type ResponseSuccess struct {
    IsSuccessful bool   `json:"isSuccessful"`
    Status       string `json:"status"`
}

type Executor struct {
}

func (e *Executor) Execute(p web.Parser) interface{} {
    log.Info("validate starting")
    auth := p.GetHeader("Authorization")
    log.Info("auth: %v", auth)
    log.Info("validate done")
    return &ResponseSuccess{
        IsSuccessful: true,
        Status:       "OK",
    }
}



The /analyze-tool-execution endpoint is called at each step, before the Copilot agent invokes an action, and should approve or reject the action within 1 second (good luck with that). A simple implementation example is:



type ResponseAllow struct {
    BlockAction bool `json:"blockAction"`
}

type Executor struct {
}

func (e *Executor) Execute(p web.Parser) interface{} {
    log.Info("analyze tool execution starting")

    inputBytes, err := p.GetBodyAsBytes()
    kiterr.RaiseIfError(err)

    auth := p.GetHeader("Authorization")
    log.Info("auth: %v", auth)
    tenantId := kitjwt.GetJwtValue(auth, "tid")
    applicationRegistrationId := kitjwt.GetJwtValue(auth, "appid")

    log.Info("tenantId: %v", tenantId)
    log.Info("applicationRegistrationId: %v", applicationRegistrationId)
    log.Info("action description: %v", string(inputBytes))

    log.Info("analyze tool execution done")
    return &ResponseAllow{
        BlockAction: false,
    }
}

Once the service is implemented, deploy it and provide it with a valid TLS certificate. For the rest of this post we assume it is available at https://external.provider.com.
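
Before moving to the Microsoft side, it is worth smoke-testing the two endpoints manually, for example with curl; the request body below is just a placeholder, as the real payload schema is defined by Microsoft:

curl -s -H "Authorization: Bearer dummy" https://external.provider.com/validate

curl -s -X POST https://external.provider.com/analyze-tool-execution \
    -H "Authorization: Bearer dummy" \
    -H "Content-Type: application/json" \
    -d '{"action":"placeholder"}'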


Register the Domain

Once the service is ready, we need to register the domain in entra.microsoft.com.




Notice that as part of the process Microsoft requires you to prove you are the owner of the domain, so you need to add a TXT record to the DNS with a value specified by Microsoft.
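
You can check that the TXT record is actually visible before completing the verification, for example (the domain and record value here are placeholders):

dig +short TXT provider.com
# expect the value Microsoft provided, e.g. something like "MS=msXXXXXXXX"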


App Registration

Create a new App Registration in entra.microsoft.com.
Then edit the App Registration, and under "Expose an API" add the URL https://external.provider.com.

Next, edit the App Registration, and under Certificates & secrets, select the Federated credentials tab, and add a new credential:
Scenario: Other
Issuer: https://login.microsoftonline.com/55fb1683-57de-46d1-8896-f9f3b07b549f/v2.0
Type: Explicit

To get the "Value", you need to run the following PowerShell script:

# YOUR TENANT ID HERE
$guid = [Guid]::Parse("55fb1683-57de-46d1-8896-xxxxxxxx")
$base64Url = [Convert]::ToBase64String($guid.ToByteArray()).Replace('+','-').Replace('/','_').TrimEnd('=')
Write-Output $base64Url

# YOUR ENDPOINT URL HERE
$endpoint = "https://external.provider.com/analyze-tool-execution"
$base64Url = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($endpoint)).Replace('+','-').Replace('/','_').TrimEnd('=')
Write-Output $base64Url


This script outputs 2 values; use them to compose the following value:

/eid1/c/pub/t/FIRST_LINE_OUTPUT/a/m1WPnYRZpEaQKq1Cceg--g/SECOND_LINE_OUTPUT
 

Enable The Threat Detection

In https://admin.powerplatform.microsoft.com, enable the threat detection.




Final Note

As promised, the super complex process is now done, and agent-related events start streaming into the service, which can approve or block them.


Monday, November 3, 2025

Microsoft AI Agent

 

In this post we will create an agent using Microsoft Copilot Studio.


Disclaimer About Microsoft

I've known Microsoft for more than 30 years. It used to be a monopoly with good products, but over time they lost their path and vision. What's left is a monopoly with bad products. Still, as a monopoly, Microsoft can force the market to use its new products even if they're bad and expensive. A good example of this is the creation of an AI agent using Microsoft Copilot Studio.


License

Unlike other providers, to work with Microsoft products you need a license, regardless of the usage amount. This license is extremely expensive compared with other providers. In case you only want to check the abilities and are not willing to pay yet, check if you're eligible for the Microsoft E5 developer program.


Copilot Studio

Open the Microsoft Copilot Studio site, select Agents, and create a new agent.



Now we can use AI to create our agent by simply describing it, or we can configure it manually.






Once we click on Create agent, the agent is ready, and we can test it.



We can also configure an MCP server to be used under the tools section; for example, I've used the public Docusign MCP server.






Once we're done, we can publish the agent, and since Microsoft is a monopoly whose Office is the standard application for almost all companies, you can place the agent in an easily accessible location such as Microsoft Teams.


Purview Audit

A must-have practice for an agent service is to track the chats and tune its configuration to make it more useful. To track the chats we can use Microsoft Purview.

In the Microsoft Purview site, select Solutions, then Audit, and run a new search filtered by time, and optionally filtered by RecordType=CopilotInteraction.





After some time, between 5 minutes and 5 hours (it is Microsoft, so do not have high expectations), you will get a search results record.




Final Note

We've seen how to use Microsoft products to create an agent and to track the agent's actions. There are other related products that should be used to get a complete solution for agent creation, such as Microsoft PowerApps, Microsoft PowerApps Admin, and Microsoft Dataverse. And yes, you will need to pay for each one of these regardless of the usage amount. Have fun.