Concurrent Python Example Script

Below is a very simple example of a script that I write and re-write more often than I would like to admit. It reads input data from a CSV and processes each row concurrently. A progress bar provides updates. Honestly, it’s pretty much just the concurrent.futures ThreadPoolExecutor example plus a progress bar.
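For reference, a minimal sketch of that pattern. The post's exact code isn't reproduced here: tqdm stands in for whichever progress bar you prefer, and process() is a placeholder for the real per-row work.

```python
import csv
from concurrent.futures import ThreadPoolExecutor, as_completed

from tqdm import tqdm  # assumed progress-bar library; any equivalent works


def process(row: dict) -> dict:
    # Placeholder for the real per-row work.
    return row


def main(path: str = "input.csv") -> None:
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Straight from the concurrent.futures docs: submit every row, then
    # consume the futures as they complete, ticking the progress bar.
    with ThreadPoolExecutor(max_workers=8) as executor:
        futures = [executor.submit(process, row) for row in rows]
        for future in tqdm(as_completed(futures), total=len(futures)):
            future.result()  # re-raises any exception from a worker


if __name__ == "__main__":
    main()
```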

February 19, 2021 · 1 min

An ECR Deployment Script

Below is a simple script to deploy a Docker image to ECR…

```bash
set -e

log () {
    local bold=$(tput bold)
    local normal=$(tput sgr0)
    echo "${bold}${1}${normal}" 1>&2;
}

if [ -z "${AWS_ACCOUNT}" ]; then
    log "Missing a valid AWS_ACCOUNT env variable";
    exit 1;
else
    log "Using AWS_ACCOUNT '${AWS_ACCOUNT}'";
fi

AWS_REGION=${AWS_REGION:-us-east-1}
REPO_NAME=${REPO_NAME:-my/repo}

log "🔑 Authenticating....
```

November 9, 2020 · 1 min

Using CloudFront as a Reverse Proxy

Alternate title: How to be master of your domain. The basic idea of this post is to demonstrate how CloudFront can be used as a serverless reverse proxy, allowing you to host all of your application’s content and services from a single domain. This minimizes a project’s TLD footprint while improving project organization and performance along the way. Why? Within large organizations, bureaucracy can make it a challenge to obtain a subdomain for a project....
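The post's actual configuration isn't shown in this excerpt, but the shape of the idea, sketched here with the CDK's CloudFront constructs (the stack, bucket, and backend hostname are all hypothetical), looks roughly like this:

```python
from aws_cdk import Stack
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins
from aws_cdk import aws_s3 as s3
from constructs import Construct


class SingleDomainStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        site_bucket = s3.Bucket(self, "Site")

        # One distribution, one domain: static assets are served from S3
        # by default, while /api/* requests are proxied through to a
        # backend service living elsewhere (hostname is made up).
        cloudfront.Distribution(
            self, "Proxy",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.S3Origin(site_bucket),
            ),
            additional_behaviors={
                "/api/*": cloudfront.BehaviorOptions(
                    origin=origins.HttpOrigin("api.internal.example.com"),
                    allowed_methods=cloudfront.AllowedMethods.ALLOW_ALL,
                    cache_policy=cloudfront.CachePolicy.CACHING_DISABLED,
                ),
            },
        )
```

Since the browser only ever talks to the one distribution's domain, API calls are same-origin, a nice side effect on top of the organizational win.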

October 2, 2020 · 10 min

How to generate a database URI from an AWS Secret

A quick note about how to generate a database URI (or any other derived string) from an AWS SecretsManager SecretTargetAttachment (such as what’s provided via an RDS DatabaseInstance’s secret property).

```python
db = rds.DatabaseInstance(
    # ...
)
db_val = lambda field: db.secret.secret_value_from_json(field).to_string()
task_definition.add_container(
    environment=dict(
        # ...
        PGRST_DB_URI=f"postgres://{db_val('username')}:{db_val('password')}@{db_val('host')}:{db_val('port')}/",
    ),
    # ...
)
```

June 15, 2020 · 1 min

Tips for working with a large number of files in S3

I would argue that S3 is basically AWS' best service. It’s super cheap, it’s effectively infinitely scalable, and it never goes down (except for when it does). Part of its beauty is its simplicity. Give it a file and a key to identify that file, and you can have faith that it will store it without issue. Give it a key, and you can have faith that it will return the file represented by that key, assuming there is one....
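That key/value contract is most of the API surface you ever need. A minimal sketch with boto3 (an assumption; the excerpt doesn't commit to a client library, and the bucket and key names below are made up):

```python
import boto3

s3 = boto3.client("s3")

# Give it a file and a key: it stores the file under that key.
s3.put_object(
    Bucket="my-bucket",
    Key="data/2020/05/30/report.csv",
    Body=b"a,b,c\n1,2,3\n",
)

# Give it the key back: it returns the file represented by that key.
obj = s3.get_object(Bucket="my-bucket", Key="data/2020/05/30/report.csv")
print(obj["Body"].read())
```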

May 30, 2020 · 7 min