Monday, May 11, 2026

Run Lambda from AWS Bedrock Agent


 


In this post we will review the steps required to create an AWS Bedrock agent that runs a Lambda function. I am writing this post since the AWS documentation is not sufficient to achieve this goal. I guess the assumption today is that software engineers will repeatedly use an LLM such as ChatGPT to work around not-so-good product and documentation issues.


Create an Agent

Start with agent creation; you can follow the steps in this post.


Create Action Group

Notice:
Make sure to select a model that you have permissions for. For example, if you choose one of the Anthropic models, you will probably fail with a permissions error since you have not purchased access to this model. And yes, this will be just a general error, forcing you to look for the actual failure reason yourself.


Next we need to create an action group: 

  • Edit the agent, and under the action group section click add new action group
  • Select "Define with API schemas" as the action group type
  • Select "Quick create a new Lambda function" in the action group invocation
  • Select "Define via in-line schema editor" in the action group schema

An example schema is the following:

openapi: 3.0.0
info:
  title: Person Info API
  version: 1.0.0
  description: API to get the person info
paths:
  /get_person_info:
    get:
      summary: Gets the person info
      description: Gets the person info
      operationId: get_person_info
      responses:
        '200':
          description: Gets the person info
          content:
            'application/json':
              schema:
                type: object
                properties:
                  info:
                    type: string
                    description: The person info



Create the Lambda

After clicking save, we can click the "View" button next to the "Select Lambda function" combo box. This opens a new tab where we can edit the Lambda code. An example of working Lambda code is:


import logging
from typing import Dict, Any
from http import HTTPStatus
import json

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:

    try:
        action_group = event['actionGroup']
        api_path = event['apiPath']
        http_method = event['httpMethod']
        message_version = event.get('messageVersion', '1.0')

        # Your business logic output
        result = {
            "info": "this Person ate the entire cake"
        }

        response = {
            "messageVersion": message_version,
            "response": {
                "actionGroup": action_group,
                "apiPath": api_path,
                "httpMethod": http_method,
                "httpStatusCode": 200,
                "responseBody": {
                    "application/json": {
                        "body": json.dumps(result)
                    }
                }
            }
        }

        logger.info("Response: %s", response)
        return response

    except Exception as e:
        logger.error("Error: %s", str(e))

        return {
            "messageVersion": "1.0",
            "response": {
                "actionGroup": event.get("actionGroup", ""),
                "apiPath": event.get("apiPath", ""),
                "httpMethod": event.get("httpMethod", ""),
                "httpStatusCode": 500,
                "responseBody": {
                    "application/json": {
                        "body": json.dumps({"error": str(e)})
                    }
                }
            }
        }



Notice:
Do not use the default Lambda code example that you get; it contains bugs.
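Before deploying, the handler logic can be sanity-checked locally by feeding it a hand-built event shaped like the one Bedrock sends to an action group Lambda. A minimal sketch (the action group name and field values are illustrative; the handler here is a condensed stand-in for the one above):

```python
import json

# Condensed stand-in for the handler above: parses the Bedrock agent event
# and wraps a result in the expected response envelope.
def lambda_handler(event, context):
    result = {"info": "this Person ate the entire cake"}
    return {
        "messageVersion": event.get("messageVersion", "1.0"),
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps(result)}
            },
        },
    }

# Hand-built event shaped like the one Bedrock sends (values are illustrative)
event = {
    "messageVersion": "1.0",
    "actionGroup": "person_info_action_group",
    "apiPath": "/get_person_info",
    "httpMethod": "GET",
    "parameters": [],
}

response = lambda_handler(event, None)
print(response["response"]["httpStatusCode"])  # 200
print(response["response"]["responseBody"]["application/json"]["body"])
```

Note that the body must be a JSON-encoded string, not a nested object; returning a raw dict there is one of the easy mistakes to make.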

Using the Lambda


Using the Lambda is not straightforward. You need to perform 3 additional steps:

1. Deploy
After each update to the Lambda code, click on Deploy.
Then, at the top of the screen, click Actions, then Publish new version.

2. Permissions
You need to grant the Bedrock agent permission to invoke the Lambda. Notice the permission includes the deployed version (the qualifier), so you need to rerun this after each deployment. An example permissions update is:

aws lambda add-permission \
  --function-name get_person_info-3urcv \
  --statement-id allow-bedrock-invoke-version-1 \
  --action lambda:InvokeFunction \
  --principal bedrock.amazonaws.com \
  --source-arn arn:aws:bedrock:us-east-1:123460611234:agent/* \
  --qualifier 2 \
  --region us-east-1



3. Configure the agent
Edit the agent, then edit the action group, and update the Lambda version used under "Select Lambda function". Then save AND prepare the agent to use the updated configuration.
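Steps 1 and 3 can also be scripted with the AWS CLI; a sketch assuming the function name from this example (AGENT_ID is a placeholder for your agent's id):

```shell
# 1. Publish a new Lambda version after deploying code changes
aws lambda publish-version \
  --function-name get_person_info-3urcv \
  --region us-east-1

# 3. Prepare the agent so it picks up the updated configuration
aws bedrock-agent prepare-agent \
  --agent-id AGENT_ID \
  --region us-east-1
```

The version number printed by publish-version is the qualifier to use in the add-permission command above.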

Final Note

Fun it was not, but eventually it works. We can now chat with the agent from the AWS console, or from code like the one in the agent creation post.
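For reference, chatting with the agent from code looks roughly like the following boto3 sketch (the agent id, alias id, and region are placeholders; invoke_agent returns the completion as a stream of chunk events):

```python
import uuid

def chat_with_agent(prompt: str) -> str:
    # boto3 is imported inside the function so the sketch loads without it installed
    import boto3

    client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
    response = client.invoke_agent(
        agentId="AGENT_ID",          # placeholder
        agentAliasId="AGENT_ALIAS",  # placeholder
        sessionId=str(uuid.uuid4()),
        inputText=prompt,
    )

    # Collect the streamed completion chunks into a single string
    parts = []
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)
```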

Sunday, May 3, 2026

Deploy a PostgreSQL Cluster on Kubernetes


 


In this post we will review the steps to deploy a PostgreSQL cluster on a Kubernetes cluster.

PostgreSQL is a veteran DBMS, existing for ~30 years(!). As such, it is less of a Kubernetes-native player and does not automatically manage its cluster like other technologies such as NATS and ClickHouse. Hence, to maintain the cluster we first need to deploy an operator. There are several operators available; in this post we use CloudNativePG.


Cloud Native PG Operator

To deploy CloudNativePG we can use this simple command:

kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.29/releases/cnpg-1.29.0.yaml
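To verify the operator is up, we can check its workload (CloudNativePG installs into the cnpg-system namespace by default):

```shell
kubectl get deployment -n cnpg-system cnpg-controller-manager
kubectl get pods -n cnpg-system
```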

However, if we need to include this as part of an umbrella Helm chart, we should download the file and split it into multiple templates. The CRDs should be placed in a separate folder, for example:

my-umbrella-helm/charts/cnpg/crds

The crds folder is a special Helm folder enabling deployment of the CRDs before any other templates are rendered.

Other templates can be deployed in the standard templates folder, for example:

my-umbrella-helm/charts/cnpg/templates

We would usually update the templates in this folder to match our own Helm deployment standards, such as entity names, image locations, labels, and enable/disable flags. Notice that CloudNativePG requires the label:

app.kubernetes.io/name: cloudnative-pg

So we must include this label when customizing the labels.
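For example, a customized labels block in the operator templates might look like the following (the instance label is illustrative; only the name label is mandated by CloudNativePG):

```yaml
metadata:
  labels:
    # Required by CloudNativePG; keep as-is when customizing labels
    app.kubernetes.io/name: cloudnative-pg
    # Example of an additional custom label
    app.kubernetes.io/instance: {{ .Release.Name }}
```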

The PostgreSQL cluster

Once the operator is up and running, all we need to do is to render a cluster CRD, for example:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql
spec:
  instances: 3

  imageName: my-repo/my-postgresql/dev:18.3
  imagePullPolicy: Always
  storage:
    size: 10Gi

  postgresql:
    parameters:
      max_connections: "200"

  bootstrap:
    initdb:
      database: my-db
      owner: admin
      secret:
        name: postgresql-secret


We usually update the image name so the image is downloaded from our own repo. Notice that CloudNativePG blocks usage of the "latest" tag for the PostgreSQL cluster image, so even on a local kind deployment we must use a specific version.

In the cluster resource we specify several important settings: the default database name, the default owner role of the database, and a secret with the password for that user.
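The referenced secret is a basic-auth secret holding the owner's credentials; a minimal sketch (the password value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-secret
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: changeme  # placeholder, use a real secret value
```

Once the cluster is applied, the operator creates read-write and read-only services named after the cluster, here postgresql-rw and postgresql-ro, which applications can use as connection hosts.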

Final Note

We've reviewed the steps to set up a PostgreSQL cluster using the CloudNativePG operator. While not as easy to set up as some other DBMSs, PostgreSQL is considered one of the best relational DBMSs, with good support for updates.