A PIL-friendly class for S3 objects

Here's a quick example of creating a file-like object in Python that represents an object on S3 and plays nicely with PIL. It ended up being overkill for my needs, but somebody might get some use out of it.
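The original snippet isn't reproduced here, but a minimal sketch of the idea looks like the following. The class name S3File is my own, and the example assumes a boto3-style client whose get_object supports HTTP Range requests; treat it as an illustration rather than the original code.

```python
import io


class S3File(io.RawIOBase):
    """Read-only, seekable file-like view of an S3 object.

    PIL only needs read(), seek(), and tell(), so lazily fetching
    byte ranges is enough -- no need to download the whole object.
    """

    def __init__(self, bucket, key, s3_client, size=None):
        self.bucket = bucket
        self.key = key
        self.s3 = s3_client
        # head_object gives us the total length, needed for SEEK_END
        if size is None:
            size = self.s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
        self.size = size
        self.position = 0

    def readable(self):
        return True

    def seekable(self):
        return True

    def tell(self):
        return self.position

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.position = offset
        elif whence == io.SEEK_CUR:
            self.position += offset
        elif whence == io.SEEK_END:
            self.position = self.size + offset
        return self.position

    def read(self, size=-1):
        if self.position >= self.size:
            return b""
        if size == -1:
            end = self.size - 1
        else:
            end = min(self.position + size, self.size) - 1
        # Fetch only the requested byte range from S3
        resp = self.s3.get_object(
            Bucket=self.bucket,
            Key=self.key,
            Range="bytes={}-{}".format(self.position, end),
        )
        data = resp["Body"].read()
        self.position += len(data)
        return data
```

With something like this in place, Image.open(S3File("my-bucket", "photo.jpg", boto3.client("s3"))) would let PIL read only the bytes it needs.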


Using CloudFormation's Fn::Sub with Bash parameter substitution

Let's say that you need to inject a large bash script into a CloudFormation AWS::EC2::Instance Resource's UserData property. CloudFormation makes this easy with the Fn::Base64 intrinsic function:

AWSTemplateFormatVersion: '2010-09-09'

Resources:
  VPNServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-efd0428f
      InstanceType: m3.medium
      UserData:
        Fn::Base64: |
          #!/bin/sh
          echo "Hello world"

In your bash script, you may even want to reference a parameter defined elsewhere in the CloudFormation template. This is no problem with CloudFormation's Fn::Sub intrinsic function:

AWSTemplateFormatVersion: '2010-09-09'

Parameters:
  Username:
    Description: Username
    Type: String
    MinLength: '1'
    MaxLength: '255'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    ConstraintDescription: must begin with a letter and contain only alphanumeric
      characters.

Resources:
  VPNServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-efd0428f
      InstanceType: m3.medium
      UserData:
        Fn::Base64: !Sub |
          #!/bin/sh
          echo "Hello ${Username}"

The downside of the Fn::Sub function is that it does not play nicely with Bash's parameter substitution expressions:

AWSTemplateFormatVersion: '2010-09-09'

Parameters:
  Username:
    Description: Username
    Type: String
    MinLength: '1'
    MaxLength: '255'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    ConstraintDescription: must begin with a letter and contain only alphanumeric
      characters.

Resources:
  VPNServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-efd0428f
      InstanceType: m3.medium
      UserData:
        Fn::Base64: !Sub |
          #!/bin/sh
          echo "Hello ${Username}"
          FOO=${FOO:-'bar'}

The above template fails validation:

$ aws cloudformation validate-template --template-body file://test.yaml

An error occurred (ValidationError) when calling the ValidateTemplate operation: Template error: variable names in Fn::Sub syntax must contain only alphanumeric characters, underscores, periods, and colons

The workaround is to rely on another intrinsic function, Fn::Join:

AWSTemplateFormatVersion: '2010-09-09'

Parameters:
  Username:
    Description: Username
    Type: String
    MinLength: '1'
    MaxLength: '255'
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    ConstraintDescription: must begin with a letter and contain only alphanumeric
      characters.

Resources:
  VPNServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-efd0428f
      InstanceType: m3.medium
      UserData:
        Fn::Base64: !Join
          - "\n"
          - - !Sub |
              #!/bin/sh
              echo "Hello ${Username}"
            - |
              FOO=${FOO:-'bar'}

This allows you to mix CloudFormation substitutions with Bash parameter substitutions. (Note the double quotes around "\n" in the Join delimiter: in YAML, a single-quoted '\n' would be a literal backslash and n, not a newline.)
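Another option worth knowing about: Fn::Sub treats ${!Literal} as an escape sequence, emitting a literal ${Literal} without attempting a substitution. Assuming it accepts the punctuation in your Bash expression (worth validating against your own template), it can sidestep the Fn::Join workaround entirely:

```yaml
      UserData:
        Fn::Base64: !Sub |
          #!/bin/sh
          echo "Hello ${Username}"
          FOO=${!FOO:-'bar'}
```

Here ${!FOO:-'bar'} should render as ${FOO:-'bar'} in the final script.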


Bonus

While we're talking about CloudFormation, another good trick comes from cloudonaut.io regarding using an optional parameter in CloudFormation.

Parameters:
  KeyName:
    Description: (Optional) Select an ssh key pair if you will need SSH access to the machine
    Type: String

Conditions:
  HasKeyName:
    Fn::Not:
    - Fn::Equals:
      - ''
      - Ref: KeyName

Resources:
  VPNServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-efd0428f
      InstanceType: m3.medium
      KeyName:
        Fn::If:
          - HasKeyName
          - !Ref KeyName
          - !Ref AWS::NoValue

Note that the KeyName parameter has Type: String. While Type: AWS::EC2::KeyPair::KeyName would likely be a better user experience, since it renders a dropdown of all existing key pairs, it does not allow empty values:

... if you use the AWS::EC2::KeyPair::KeyName parameter type, AWS CloudFormation validates the input value against users' existing key pair names before it creates any resources, such as Amazon EC2 instances.


Serve an Esri Web AppBuilder web app from HTTP


When an Esri Web AppBuilder web app is configured with a portalUrl served over HTTPS, the web app automatically redirects users to HTTPS when visited via HTTP. While this is a best practice in production, it can be a burden during development when you want to quickly run a local copy of the web app. Below is a quick script, written with only the Python standard library, that serves a web app over HTTP. It works by serving a config.json modified to use HTTP rather than HTTPS, letting you keep the HTTPS configuration in config.json for production while serving the web app via HTTP during development.
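The script itself wasn't captured here, so below is a minimal reconstruction using only the standard library. The port number and the assumption that the portal address lives under a portalUrl key in config.json are mine; adjust for your app's actual config layout.

```python
#!/usr/bin/env python3
"""runserver -- serve the web app over HTTP, rewriting config.json on the fly."""
import http.server
import json
import socketserver

PORT = 8000


def rewrite_to_http(config):
    """Downgrade the portalUrl scheme so the app does not redirect to HTTPS."""
    if "portalUrl" in config:
        config["portalUrl"] = config["portalUrl"].replace("https://", "http://", 1)
    return config


class ConfigRewriteHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        # Intercept only config.json; everything else is served from disk as-is
        if self.path.split("?")[0].endswith("/config.json"):
            with open("config.json") as f:
                body = json.dumps(rewrite_to_http(json.load(f))).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            super().do_GET()


if __name__ == "__main__":
    with socketserver.TCPServer(("", PORT), ConfigRewriteHandler) as httpd:
        print("Serving on http://localhost:%d" % PORT)
        httpd.serve_forever()
```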

The script should be saved alongside the config.json in the root of the web app. I would recommend running chmod a+x runserver to enable you to execute the server directly via ./runserver. Alternatively, you could install this somewhere on your system path to invoke from any directory (something like cp runserver /usr/local/bin/serve-esri-app for a unix-based system).


Hosting Jupyter at a subdomain via Cloudflare

Full Disclosure: I am NOT an expert at Jupyter or Anaconda (both of which I am using in this project), so there may be some bad habits below...

Below is a quick scratchpad of the steps I took to serve Jupyter from a subdomain. Jupyter runs behind NGINX on an OpenStack Ubuntu instance, and the domain's DNS is set up to use Cloudflare, which provides convenient SSL support. I was surprised by the lack of documentation for this process, which prompted me to document my steps here.

Cloudflare

  1. Set up Cloudflare account, utilizing its provided Name Servers with my domain registration.
  2. Set up a Cloudflare DNS record for the subdomain (e.g. jupyter to serve from jupyter.mydomain.com). In the image below, the DNS entry for the Jupyter server was "greyed-out", relegating it to "DNS Only" rather than "DNS and HTTP Proxy (CDN)". Now that Cloudflare supports WebSockets, this is no longer necessary and you can take advantage of using Cloudflare as a CDN (admittedly, I'm not sure how useful this actually is, but it's worth mentioning). Setting up DNS Record
  3. Ensure the Crypto settings are set correctly. You should probably be using Full SSL (Strict) rather than Flexible SSL as shown in the image below; however, that is outside the scope of this post. SSL Settings Auto-rewrite to HTTPS

Install Anaconda

Follow instructions described here.

Set up an Upstart script

On the server, you'll want Jupyter to start running as soon as the server boots. We'll use an Upstart script to achieve this.

# /etc/init/ipython-notebook.conf
start on filesystem or runlevel [2345]
stop on shutdown

# Restart the process if it dies with a signal
# or exit code not given by the 'normal exit' stanza.
respawn

# Give up if restart occurs 10 times in 90 seconds.
respawn limit 10 90

description "Jupyter / IPython Notebook Upstart script"
setuid "MY_USER"
setgid "MY_USER"
chdir "/home/MY_USER/notebooks"

script
    exec /home/MY_USER/.anaconda3/bin/jupyter notebook --config='/home/MY_USER/.jupyter/jupyter_notebook_config.py'
end script

Configure Jupyter

Populate Jupyter with the required configuration. You should probably auto-generate the default configuration first (jupyter notebook --generate-config) and then change only the applicable values.

# .jupyter/jupyter_notebook_config.py
c.NotebookApp.allow_origin = 'https://jupyter.mydomain.com'
c.NotebookApp.notebook_dir = '/home/MY_USER/notebooks'
c.NotebookApp.open_browser = False
c.NotebookApp.password = 'some_password_hash'
c.NotebookApp.port = 8888
c.NotebookApp.kernel_spec_manager_class = "nb_conda_kernels.CondaKernelSpecManager"
c.NotebookApp.nbserver_extensions = {
  "nb_conda": True,
  "nb_anacondacloud": True,
  "nbpresent": True
}
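For reference, the some_password_hash placeholder above is the salted hash Jupyter expects, normally produced by notebook.auth.passwd(). The sketch below mimics that helper's algorithm:salt:hexdigest format using only the standard library; the function names are my own, and the exact scheme may vary across Jupyter versions, so prefer the real helper when available.

```python
import hashlib
import random


def jupyter_passwd_hash(passphrase, algorithm="sha1", salt_len=12):
    """Sketch of notebook.auth.passwd(): returns 'algorithm:salt:hexdigest'."""
    # Random hex salt, same length notebook.auth historically used
    salt = ("%0" + str(salt_len) + "x") % random.getrandbits(4 * salt_len)
    h = hashlib.new(algorithm)
    h.update(passphrase.encode("utf-8") + salt.encode("ascii"))
    return ":".join((algorithm, salt, h.hexdigest()))


def jupyter_passwd_check(hashed, passphrase):
    """Verify a passphrase against a stored 'algorithm:salt:hexdigest' value."""
    algorithm, salt, digest = hashed.split(":", 2)
    h = hashlib.new(algorithm)
    h.update(passphrase.encode("utf-8") + salt.encode("ascii"))
    return h.hexdigest() == digest
```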

Wire Jupyter up with Nginx

To be able to access Jupyter at port 80, we'll need to reverse proxy to the service. Nginx can take care of this for us. Jupyter uses WebSockets to stream data to the client, so the WebSocket routes need some extra configuration to proxy the Upgrade and Connection headers.

# /etc/nginx/sites-enabled/jupyter.conf
# Based on example: https://gist.github.com/cboettig/8643341bd3c93b62b5c2
upstream jupyter {
    server 127.0.0.1:8888 fail_timeout=0;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    # Make site accessible from http://localhost/
    server_name localhost;

    client_max_body_size 50M;

    location / {
        proxy_pass http://jupyter;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~* /(api/kernels/[^/]+/(channels|iopub|shell|stdin)|terminals/websocket)/? {
        proxy_pass http://jupyter;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}