Peter Lyons

Securing local development with containers

July 13, 2018

Starting this spring, when I changed OSes from mac to linux, I decided to experiment with using docker containers to isolate dangerous development tools from my local system. Given the npm malware attack yesterday, this seems like a good time to write up my results so far.


OK, I'll try not to get overly long-winded here, but let me state broadly that I think a core, fundamental idea in linux is utterly misguided and inappropriate: you log in to your system with an effective userid, that userid (typically) has read/write access to your entire home directory, and any program you execute runs as your userid and therefore has read/write access to all of your files. I have some designs that are basically the opposite of this, but I don't want to digress into that. Pragmatically, I wanted to find some way to mitigate this geologically-huge security vulnerability using existing tools, without going full-on Stallman.

To clarify the specific vulnerability here: I'm talking about running a command like npm install, having it download code from the Internet, some of which is malicious, and then executing that malicious code as my userid, at which point it can do any number of nasty things: read my ssh keys, steal credentials and tokens, exfiltrate or tamper with any file my user can access, or install a persistent backdoor.

Container Isolation: Basic Approach

So when I had a clean slate, I decided to try to mitigate this risk with the following basic tactic: run the dangerous tools (npm, node, and friends) inside a docker container that mounts only the project directory, so the code they execute can read and write the project's files but can't touch the rest of my system.

Setup script

The pattern is similar for most projects, but varies a little bit depending on the tech stack I'm working with (these days mostly node.js or rust), and the specific needs of the project in terms of tools, environment variables, network ports, etc.

Here's a representative sample for a node project. I typically check in a file under bin/ that fires up that project's docker container.

#!/usr/bin/env bash

# Please use Google Shell Style

# ---- Start unofficial bash strict mode boilerplate
set -o errexit    # always exit on error
set -o errtrace   # trap errors in functions as well
set -o pipefail   # don't ignore exit codes when piping output
set -o posix      # more strict failures in subshells
# set -x          # enable debugging

IFS="$(printf "\n\t")"
# ---- End unofficial bash strict mode boilerplate

cd "$(dirname "$0")/.."
exec docker run --rm --interactive --tty \
  --volume "${PWD}:/opt" \
  --workdir /opt \
  --env USER=root \
  --env PATH=/usr/sbin:/usr/bin:/sbin:/bin:/opt/node_modules/.bin \
  "node:$(cat .nvmrc)" "${1-/bin/bash}"

Here's some detail on what this does:

- cd "$(dirname "$0")/.." changes to the project root (the parent of bin/), so the script works no matter where it's invoked from.
- --rm removes the container when it exits, so nothing accumulates between runs.
- --interactive --tty gives a usable interactive terminal.
- --volume "${PWD}:/opt" bind-mounts the project directory at /opt. This is the only piece of my host filesystem the container can see, which is the whole point: code running in the container can read and write the project's files, but not the rest of my home directory.
- --workdir /opt starts the shell in the mounted project directory.
- --env USER=root sets $USER, which some tools expect. Processes in the container run as root, but that's root inside the container, not on my host.
- The PATH includes /opt/node_modules/.bin so the project's locally-installed npm binaries can be run directly.
- The image tag is read from the project's .nvmrc file, so the container runs the node version the project expects.
- The script's first argument is the command to run, defaulting to an interactive bash shell.

How well does it work?

So far all the basic stuff is working OK. Running npm works, running node works, debugging works OK, running an http server works. Terminal colors work. Arrow keys work. Bash history searching works (at least for a given shell session).
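One note on the http server case: the script above doesn't publish any ports, so for projects that need them (mentioned earlier), a publish flag has to be spliced into the docker run invocation. A minimal sketch; port 3000 is just an example, not anything a particular project requires:

```shell
# Extra flags to add to the docker run command when the project serves
# http. 3000:3000 maps host port 3000 to container port 3000; the port
# number here is an example, not a project requirement.
PORT_FLAGS=(--publish 3000:3000)

# These get spliced into the docker run line, e.g.:
#   exec docker run --rm --interactive --tty "${PORT_FLAGS[@]}" ...
printf '%s\n' "${PORT_FLAGS[@]}"
```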

One gripe I have (which I could remedy, I just haven't gotten around to it) is that in the container I get a vanilla bash configuration, without my normal toolbox of zsh and dozens of aliases, functions, and settings. Usually I'm only running 3 or 4 commands in the container in a tight loop, and arrow keys and history searching work fine, so it's OK. However, bash history in the container does not persist across runs, so if I come up with a useful long command line, I need to take special action to capture it in a script or my notes.
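The history problem could probably be fixed by bind-mounting a per-project history file from the host. A sketch, assuming the container runs as root so bash reads /root/.bash_history; the host path is an arbitrary choice for illustration:

```shell
# Keep a per-project bash history file on the host and mount it into the
# container. The host path here is made up for illustration.
HISTFILE_HOST="${HOME}/.cache/myproject/bash_history"
mkdir -p "$(dirname "${HISTFILE_HOST}")"
touch "${HISTFILE_HOST}"

# Extra flags for the docker run invocation; since the container runs as
# root, bash looks for its history at /root/.bash_history.
HISTORY_FLAGS=(--volume "${HISTFILE_HOST}:/root/.bash_history")
printf '%s\n' "${HISTORY_FLAGS[@]}"
```

With that mounted, the history bash writes when a shell exits lands in the host file and survives container restarts.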

Further isolation

This is where I am at the moment, but of course, as with all security efforts, there's an endless list of additional measures that could be taken. Here are the next few things I plan to look at.

Right now I'm only running npm and node my-project.js within the container (or cargo for a rust project). I trust git a lot more than I trust npm, but with git hooks the same vulnerability ultimately exists in git too, so I'd like to run it in the container as well. However, there are a few kinks to work out in terms of filesystem userids, ssh agent access for pull/push, etc.
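For the ssh agent part, the usual docker trick is to bind-mount the agent's socket into the container, point SSH_AUTH_SOCK at it, and run the container as the host uid/gid so files git writes aren't owned by root. A sketch of the extra flags, untested against my setup; the in-container socket path is an arbitrary choice:

```shell
# Forward the host's ssh agent into the container and match the host
# user/group ids. Falls back to a dummy socket path for illustration if
# no agent is running on the host.
host_sock="${SSH_AUTH_SOCK:-/tmp/ssh-agent.sock}"
AGENT_FLAGS=(
  --volume "${host_sock}:/ssh-agent"   # make the agent socket visible inside
  --env SSH_AUTH_SOCK=/ssh-agent       # tell ssh/git where to find it
  --user "$(id -u):$(id -g)"           # avoid root-owned files in the repo
)
printf '%s\n' "${AGENT_FLAGS[@]}"
```

Running as a non-root uid interacts with the --env USER=root and /root/.bash_history choices above, which is exactly the kind of kink still to be worked out.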

I hope you found this interesting and useful. Stay safe out there!