Roll your own PR preview CI pipeline

Goal: We want a CI pipeline that will build and deploy an instance of our frontend application for every PR created in our frontend repo. Additionally, we want to be able to easily spin up applications with overridden configuration to allow developers to test the frontend against experimental backends. Finally, we want a reporting mechanism to inform developers when and where these deployed environments are available. Other options: Before you jump into this, consider that there are out-of-the-box solutions that solve this problem, mentioned in the follow-up at the bottom of this page....

November 21, 2021 · 13 min

Tips for working with a large number of files in S3

I would argue that S3 is basically AWS’ best service. It’s super cheap, it’s effectively infinitely scalable, and it never goes down (except for when it does). Part of its beauty is its simplicity. Give it a file and a key to identify that file, and you can have faith that it will store it without issue. Give it a key, and you can have faith that it will return the file represented by that key, assuming there is one....

May 30, 2020 · 7 min
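
The full post isn’t reproduced in this listing, but the store/retrieve contract described above looks roughly like this in boto3 (a minimal sketch; the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Store a file under a key (bucket and key names are illustrative).
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/2020/summary.txt",
    Body=b"hello",
)

# Retrieve the same object by its key.
obj = s3.get_object(Bucket="my-example-bucket", Key="reports/2020/summary.txt")
print(obj["Body"].read())
```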

Boilerplate for S3 Batch Operation Lambda

S3 Batch Operations provides a simple way to process a number of files stored in an S3 bucket with a Lambda function. However, the Lambda function must return particular response codes. Below is an example of a Lambda function written in Python that works with AWS S3 Batch Operations.

December 20, 2019 · 1 min
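
The post’s boilerplate itself isn’t shown in this listing; below is a minimal sketch of such a handler, assuming the documented S3 Batch Operations event and response schema (`process_object` is a hypothetical placeholder for the real per-object work):

```python
from urllib.parse import unquote_plus


def process_object(bucket_arn, key):
    """Hypothetical placeholder for the real per-object work."""
    pass


def lambda_handler(event, context):
    results = []
    for task in event["tasks"]:
        # Object keys arrive URL-encoded in the batch event.
        key = unquote_plus(task["s3Key"])
        try:
            process_object(task["s3BucketArn"], key)
            result_code, result_string = "Succeeded", key
        except Exception as exc:
            # Returning "TemporaryFailure" instead would ask S3 Batch Operations
            # to retry this task.
            result_code, result_string = "PermanentFailure", str(exc)
        results.append({
            "taskId": task["taskId"],
            "resultCode": result_code,
            "resultString": result_string,
        })

    # These top-level fields are the response shape S3 Batch Operations expects.
    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```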

Parsing S3 Inventory CSV output in Python

S3 Inventory is a great way to enumerate a large number of keys in an S3 bucket. Its output is easily parsed by AWS Athena, enabling queries across the key names (e.g. finding all keys ending with .png). However, sometimes you just need to list all of the keys mentioned in the S3 Inventory output (e.g. populating an SQS queue with every key name mentioned in an inventory output). The following code is an example of doing such a task in Python:

December 16, 2019 · 1 min
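
The post’s code isn’t reproduced in this listing; the sketch below shows one way to do it, assuming the CSV-format inventory output (gzip-compressed data files, object key in the second column) and a made-up helper name `list_inventory_keys`:

```python
import csv
import gzip
import io
import json
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")


def list_inventory_keys(inventory_bucket, manifest_key):
    """Yield every object key named in an S3 Inventory CSV manifest."""
    manifest = json.loads(
        s3.get_object(Bucket=inventory_bucket, Key=manifest_key)["Body"].read()
    )
    for data_file in manifest["files"]:
        # Each data file listed in the manifest is a gzip-compressed CSV;
        # the (URL-encoded) object key is the second column.
        body = s3.get_object(Bucket=inventory_bucket, Key=data_file["key"])["Body"].read()
        with gzip.open(io.BytesIO(body), mode="rt", newline="") as handle:
            for row in csv.reader(handle):
                yield unquote_plus(row[1])


# Example usage with a placeholder bucket and manifest path:
# for key in list_inventory_keys("my-inventory-bucket",
#                                "daily-inventory/2019-12-15T00-00Z/manifest.json"):
#     print(key)
```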

A PIL-friendly class for S3 objects

Here’s a quick example of creating a file-like object in Python that represents an object on S3 and plays nicely with PIL. This ended up being overkill for my needs, but I figured somebody might get some use out of it.

December 11, 2019 · 1 min
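
The class from the post isn’t shown in this listing. A minimal sketch of the idea, assuming ranged GETs via boto3 (the class name `S3ImageFile` and the bucket/key in the usage example are made up):

```python
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")


class S3ImageFile(io.RawIOBase):
    """A read-only, seekable file-like view of an S3 object.

    PIL only needs read/seek/tell, so each read is served with a ranged GET
    instead of downloading the whole object up front.
    """

    def __init__(self, bucket, key):
        self.bucket = bucket
        self.key = key
        self.position = 0
        self.length = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

    def readable(self):
        return True

    def seekable(self):
        return True

    def tell(self):
        return self.position

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.position = offset
        elif whence == io.SEEK_CUR:
            self.position += offset
        elif whence == io.SEEK_END:
            self.position = self.length + offset
        return self.position

    def read(self, size=-1):
        if self.position >= self.length:
            return b""
        if size == -1:
            byte_range = f"bytes={self.position}-"
        else:
            byte_range = f"bytes={self.position}-{self.position + size - 1}"
        data = s3.get_object(
            Bucket=self.bucket, Key=self.key, Range=byte_range
        )["Body"].read()
        self.position += len(data)
        return data


if __name__ == "__main__":
    # Placeholder bucket/key; any image object in S3 would do.
    image = Image.open(S3ImageFile("my-image-bucket", "photos/cat.png"))
    print(image.size)
```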