Getting a shell prompt in an existing, running Docker container

So, you have a container that's running, merrily sitting over there on your Docker server doing ... something.

And you want command prompt access to it.

Well, luckily, it's fairly easy - you can run commands in a running container using:

$ sudo docker exec -i -t <ID> /bin/bash

Where <ID> is the ID or unique name of your running container (from docker ps).

Just remember that changes made to the container are not persistent - if you recreate the container from an image, any changes you made that weren't part of a connected volume will be reverted to whatever the image consisted of.
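If you do want to keep changes made interactively, one option is to snapshot the container into a new image with docker commit - a sketch, where the container and image names are hypothetical:

```shell
# Snapshot the current state of a running container into a new image.
sudo docker commit my-container my-image:patched

# New containers started from that image will include the changes:
sudo docker run -i -t my-image:patched /bin/bash
```

Volumes are not included in the commit, but anything written to the container's filesystem is.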

Continue Reading...

Nginx and Let's Encrypt via webroot

First of all, I will say that there is an Nginx plugin for the Let's Encrypt system which will configure your domains and everything, so you should look into that if you want something fully automated. In this quick intro guide I am looking at the webroot method, as that reflects my Docker-based setup.
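For reference, the webroot flow boils down to pointing certbot at a directory that Nginx serves under /.well-known/acme-challenge/ for the target domain. A minimal sketch - the domain, email and webroot path here are hypothetical:

```shell
# Request a certificate using the webroot challenge.
# /storage/nginx/webroot must be served by Nginx for example.com.
certbot certonly \
    --webroot \
    -w /storage/nginx/webroot \
    -d example.com \
    --agree-tos \
    -m admin@example.com
```

Certbot only writes to the webroot during the challenge; Nginx never needs to be stopped.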

First of all, I create a Docker container from the standard Nginx image:

docker run \
    -d \
    --restart=always \
    --name ingress-nginx \
    -p 80:80 \
    -p 443:443 \
    -v /storage/nginx/config/:/etc/nginx \
    -v /etc/letsencrypt:/etc/letsencrypt:ro \
    nginx:1.13

Let's break that down a little:

  • -d - detach this container, don't run it interactively
  • --restart=always - always attempt a restart of this container
  • --name ingress-nginx - give it a useful name
  • -p 80:80 and -p 443:443 - publish the standard HTTP and HTTPS ports on the host
  • -v /storage/nginx/config/:/etc/nginx - give it a volume for persistent storage of configuration data; I'm using a local directory
  • -v /etc/letsencrypt:/etc/letsencrypt:ro - give it read-only access to the host's Let's Encrypt configuration folder
  • nginx:1.13 - use the official Nginx image, version 1.13.x

If Nginx doesn't populate the configuration volume with a default set, then we need to go and grab some from a temporary Nginx container:

$ docker run -d --name temp-nginx nginx:1.13
$ docker exec -i -t temp-nginx /bin/bash
root@99d7552dabc1:/# tar zcvf /nginx.tgz -C /etc/nginx .
root@99d7552dabc1:/# exit
$ cd /storage/nginx/config
$ docker cp temp-nginx:/nginx.tgz .
$ tar zxvf nginx.tgz
$ docker rm -f temp-nginx
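As an aside, you can skip the tar step entirely - docker cp can copy a directory straight out of a container that has never even been started. A sketch, using the same names as above:

```shell
# Create (but don't start) a throwaway container, copy the config out, clean up.
docker create --name temp-nginx nginx:1.13
docker cp temp-nginx:/etc/nginx/. /storage/nginx/config/
docker rm temp-nginx
```

The trailing /. on the source path copies the directory's contents rather than the directory itself.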

My nginx.conf

Continue Reading...

Terraform and VMWare vSphere: A quick intro

This one is still very much a work in progress - the VMware vSphere plugin is still very new and under active development (it went from 0.3 to 0.4.1 over a weekend, leaving me a little confused when things suddenly changed).

First of all, you need a license for vSphere or ESXi which grants you API and datastore access. This comes in the 60 day trial license, but the free ESXi license does not include it.

Here's a simple vSphere-orientated example configuration:

# Configure the VMware vSphere Provider
provider "vsphere" {
    vsphere_server = "${var.vsphere_vcenter}"
    user = "${var.vsphere_user}"
    password = "${var.vsphere_password}"
    allow_unverified_ssl = true
}

data "vsphere_datacenter" "my-dc" {}

resource "vsphere_folder" "standalone" {
    path = "standalone"
    datacenter_id = "${data.vsphere_datacenter.my-dc.id}"
    type = "vm"
}

resource "vsphere_virtual_machine" "plex-server" {
    name   = "plex-server"
    hostname = "plex-server"
    folder = "${vsphere_folder.standalone.path}"
    vcpu   = 4
    memory = 8192
    #domain = "${var.vsphere_domain}"
    datacenter = "${var.vsphere_datacenter}"
    cluster = "${var.vsphere_cluster}"

    # Define the Networking settings for the VM
    network_interface {
        label = "VM Network"
        ipv4_gateway = ""
        ipv4_address = ""
        ipv4_prefix_length = "24"
    }

    dns_servers = ["", ""]

    # Define the Disks and resources. The first disk should include the template.
    disk {
        template = "${var.template_name}"
        datastore = "datastore1"
        type = "thin"
    }

    disk {
        datastore = "datastore1"
        type = "thin"
        name = "plex-store"
        keep_on_remove = "true"
        size = "50"
    }

    # Define Time Zone
    time_zone = "Pacific/Auckland"
}
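With that saved, and the referenced vsphere_* variables declared in a variables file, the usual Terraform workflow applies - a sketch, assuming you pass the password in via an environment variable rather than keeping it on disk:

```shell
# Initialise the working directory and fetch the vSphere provider.
terraform init

# Preview the changes, supplying the secret out-of-band.
terraform plan -var "vsphere_password=${VSPHERE_PASSWORD}"

# Create the folder and virtual machine.
terraform apply -var "vsphere_password=${VSPHERE_PASSWORD}"
```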

Very simply, this does the following:

  1. Sets up the vSphere Terraform plugin (more

Continue Reading...

Nginx, docker push and 404 errors

Recently after trying some more elaborate Docker image builds, I discovered I had an issue with my Docker Registry setup (as described in Deploying a secure public Docker registry with Nginx and Traefik), which was manifesting itself as random 404 page not found errors:

$ docker --debug push my.private.registry/cant-think-of-a-container-name:latest

The push refers to a repository [my.private.registry/cant-think-of-a-container-name]  
5ae877b9fd02: Pushing  3.164MB  
34be67ece980: Layer already exists  
9411f02cc27c: Pushing  1.092MB/296.6MB  
5df38987acef: Pushing    290MB/290MB  
683c7228321d: Layer already exists  
e6562eb04a92: Pushing  122.9MB/122.9MB  
596280599f68: Pushing  44.64MB/44.64MB  
5d6cbe0dbcf9: Pushing  123.4MB/123.4MB  
Error: Status 404 trying to push repository cant-think-of-a-container-name: "404 page not found\n"  

If you check the docker push logs (in debug mode), the 404 page not found error actually relates to an attempt to access something in the /v1/... path on the registry - which is odd because the Docker registry protocol has been v2 for, like, ever now.

So what's going on?

Looking at the logs from Nginx, it becomes quite clear - Nginx itself is throwing an HTTP 413 error because the PUT and POST bodies being sent by docker push are well in excess of the size allowed as "valid" request bodies by my Nginx setup.
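Given that diagnosis, the remedy is to raise (or disable) Nginx's request body limit for the registry vhost. The key directive is client_max_body_size, where a value of 0 disables the check entirely - a sketch, with the vhost file path being a hypothetical location under the config volume mounted earlier:

```shell
# client_max_body_size is valid at http, server or location level;
# appending it to an included conf.d file puts it at http level.
cat >> /storage/nginx/config/conf.d/registry.conf <<'EOF'
client_max_body_size 0;
EOF

# Reload Nginx to pick up the change.
docker exec ingress-nginx nginx -s reload
```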

docker push doesn't really understand the HTTP 413 error, and tries to fall back to the v1 protocol in desperation - personally, I'd have called it a day and gone down the pub, but docker didn't, poor little

Continue Reading...

Giving Jenkins access to Docker

In order to be able to build and run Docker containers within a Jenkins BlueOcean pipeline, you need access to a Docker installation - you can either install Docker within the Jenkins container, the so-called "Docker in Docker" method, or you can give it access to the Docker setup that Jenkins itself is running under, to create so-called "sibling" containers.

I will be doing the latter.

The first thing to do is to create the Jenkins container with the right volumes mounted:

docker run \
    -d \
    --restart=always \
    --name jenkins \
    -p 80:8080 \
    -p 50000:50000 \
    -v /storage/jenkins/:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    jenkins:2.32.3

Let's break that down a little:

  • -d - detach this container, don't run it interactively
  • --restart=always - always attempt a restart of this container
  • --name jenkins - give it a useful name
  • -p 80:8080 and -p 50000:50000 - publish the web UI and build agent ports on the host
  • -v /storage/jenkins/:/var/jenkins_home - give it a volume for persistent storage of data; I'm using a local directory
  • -v /var/run/docker.sock:/var/run/docker.sock - give it access to the host's Docker socket, so it can talk to the Docker engine on the host
  • -v $(which docker):/usr/bin/docker - give it access to the host's docker binary (you could alternatively add it to the Jenkins image)
  • jenkins:2.32.3 - use the official Jenkins image, version 2.32.3
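Before going further, it's worth verifying that processes inside the Jenkins container can actually reach the host's Docker daemon - a quick check (note that the container's jenkins user also needs permission on docker.sock, typically via a matching group):

```shell
# Run docker ps inside the Jenkins container, against the host's daemon.
# If this lists the jenkins container itself, sibling containers will work.
docker exec jenkins docker ps
```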

Once Jenkins is up and running,

Continue Reading...

Configuring BIND 9 to forward to Consul

Consul is one of those services with many features: service discovery through its catalog, configuration storage through its key-value store, and acting as a normal DNS server when queried on the right port.

Consul by default listens on port 8600 for DNS queries, and you can issue a normal dig command to query it. So, if we have a service registered with the following JSON:

{
  "Name": "sample-service",
  "Address": "",
  "Port": 80,
  "Tags": []
}

Then we can issue the following dig query to get a response (the address after @ being that of your Consul server):

dig @ -p 8600 sample-service.service.consul. ANY  

This should return something similar to:

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @ -p 8600 sample-service.service.consul. ANY
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64305
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;sample-service.service.consul.    IN  ANY

sample-service.service.consul. 0    IN  A

;; Query time: 2 msec
;; WHEN: Fri Mar 03 22:45:31 GMT 2017
;; MSG SIZE  rcvd: 62
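That confirms Consul answers DNS directly on port 8600. To make the rest of your infrastructure resolve those names through your normal resolver, BIND 9 can forward the consul domain on - a sketch of the relevant snippet, where the forwarder address 127.0.0.1 is an assumption (use your Consul server's IP):

```shell
# Forward everything under .consul to Consul's DNS port.
cat >> /etc/bind/named.conf.local <<'EOF'
zone "consul" {
    type forward;
    forward only;
    forwarders { 127.0.0.1 port 8600; };
};
EOF
rndc reload
```

Depending on your BIND version and settings you may also need to relax DNSSEC validation for this zone, since Consul's answers are unsigned.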

What this means is that you can use standard DNS queries to discover your services, so long as you use standard ports for them (as ports

Continue Reading...

Traefik and Let's Encrypt with Consul and ConsulCatalog

In this post we will be exploring the deployment of Traefik in a multi-node Docker Swarm environment, using Consul for service configuration and Let's Encrypt for HTTPS certificate provision.

The gotcha factor here is getting a multi-node Traefik service to use the same Let's Encrypt certificate store, so you don't keep hammering the Let's Encrypt servers each time a different node is hit.

The issue in a Docker Swarm is that storage is not shared across nodes - the standard local volume driver is local to each node, so data isn't shared. You either have to resort to one of the more complex volume drivers (which invariably require a backend like AWS, Azure or something equivalent locally), or to something like NFS, which is insecure in this day and age.

Here, I am going to show you how to set up Traefik in such a way that it uses the Consul service catalog for its service discovery and configuration, but also so it uses the Consul key-value store to store the Let's Encrypt certificates, so that all the Swarm nodes have access to the same information.
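The key part of the Traefik (1.x) configuration is pointing both the ACME certificate storage and the service discovery at Consul - a sketch of the relevant traefik.toml fragments, where the Consul endpoint and domain are illustrative values:

```shell
# Fragment of traefik.toml: store ACME data in Consul's KV store and
# discover backends via the Consul catalog.
cat >> traefik.toml <<'EOF'
[acme]
email = "admin@example.com"
entryPoint = "https"
storage = "traefik/acme/account"

[consul]
endpoint = "consul.example.local:8500"
prefix = "traefik"

[consulCatalog]
endpoint = "consul.example.local:8500"
domain = "example.local"
EOF
```

With a KV backend configured, the storage key is a KV path rather than a file, which is what lets every node share one certificate store.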


  • You must have a Docker Swarm installed and configured to properly run Docker in a multi-node setup


First, let's set up Consul - if you already have Consul running, congratulations, you can skip this step :)

We will set up Consul directly on a Docker host, not as part of the Swarm (you could run it on the Swarm, but I don't

Continue Reading...

Deploying A Secure Public Docker Registry with Nginx and Traefik

These posts are primarily intended for a technical audience who are trying to deploy a Docker setup on a minimal budget, and for those who typically do not have the luxury of a large enterprise setup - it is ideal for someone who is setting up their own development environment at home or in an isolated capacity.

To set the scene, I am running off a non-commercial internet connection which only has a single IP address - this means that I have to be somewhat creative with regard to running multiple services. I have multiple Docker hosts on my local network, running different services and configurations, including at least one Swarm with multiple nodes.

My decision to use both Nginx and Traefik is based on the fact that Nginx can supply the .htpasswd support for basic authentication, while Traefik can supply the automated Let's Encrypt certificate management for HTTPS support, as well as being easy to configure dynamically.

In this post we will be exploring the deployment of a secured Docker Registry, with authentication and HTTPS support. In order to do that we will be using both Nginx and Traefik, as they supply different parts of the puzzle.


  • You must have a Docker host installed and configured to properly run Docker
  • You must have a public domain name or subdomain which points at your internet public IP address
  • Your internet public IP address should ideally be static, or you should be using a dynamic DNS service
  • Your internet router
Continue Reading...

Cloning a VMWare vmdk in VSphere

Ok, so you are setting up a lot of identical Linux virtual machines, and the easiest way to do that is to clone the disk for use in a new virtual machine configuration:

vmkfstools -i /vmfs/volumes/<DATASTORE_NAME>/<SOURCE_VMDK_NAME>.vmdk /vmfs/volumes/<DATASTORE_NAME>/<DESTINATION_VMDK_NAME>.vmdk  

This will create an exact replica of the disk, so all you need to do now is create a new VM, attach this disk, boot it up and change /etc/hosts, /etc/hostname and the IP address (if not DHCPed).
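Those post-clone edits are easily scripted. A sketch - rename_clone is a hypothetical helper, and the file paths are parameters so you can try it safely on copies before pointing it at /etc/hosts and /etc/hostname on the booted clone:

```shell
#!/usr/bin/env bash
# Replace the old hostname with a new one in the clone's identity files.
rename_clone() {
    local hosts_file=$1 hostname_file=$2 new_name=$3
    local old_name
    old_name=$(cat "$hostname_file")
    # Swap every occurrence of the old name in the hosts file...
    sed -i "s/${old_name}/${new_name}/g" "$hosts_file"
    # ...and write the new name to the hostname file.
    printf '%s\n' "$new_name" > "$hostname_file"
}
```

After running it against the real files, reboot (or run hostnamectl set-hostname) so the new name takes effect.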

Much quicker than configuring Ubuntu for an automated install!

Continue Reading...

Dump a PostgreSQL Database for Backup

This is a short post - how to dump a named PostgreSQL database from a Docker container, primarily for backup purposes.

docker exec -t <CONTAINER_NAME_OR_ID> pg_dump -c -U postgres <DATABASE_NAME> | gzip > /backup/Docker/<DATABASE_NAME>_`date +%Y-%m-%d"_"%H%M%S`.gz  

This will output a gzip-compressed SQL dump ready to be imported back into PostgreSQL.
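For completeness, restoring one of those dumps is the same pipeline in reverse - a sketch, with the container name, database name and dump filename all hypothetical:

```shell
# Decompress the dump and feed it to psql inside the container.
gunzip -c /backup/Docker/mydb_2017-01-01_120000.gz | \
    docker exec -i my-postgres psql -U postgres mydb
```

Because the dump was taken with -c, it drops and recreates the objects as it restores, so the target database must already exist.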

Continue Reading...