Service-Oriented Architecture with Docker and Caddy

When writing new services, I pay attention to three things:

  • Ingress proxy and SSL termination
  • Deploying services
  • Persistent storage

Before Docker, setting up multiple services on a single host required complex port management to avoid conflicts, along with cumbersome SSL termination and certificate provisioning.

Here is how I configure an ingress proxy using the excellent lucaslorentz/caddy-docker-proxy project:

services:
  projects-edge:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - "80:80"
      - "443:443"
    environment:
      - CADDY_INGRESS_NETWORKS=projects-ingress
    networks:
      - projects-ingress
    volumes:
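      # the docker socket lets caddy-docker-proxy read container labels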
      - /var/run/docker.sock:/var/run/docker.sock
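      # /data persists issued TLS certificates and caddy state across restarts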
      - caddy_data:/data
    restart: unless-stopped

networks:
  projects-ingress:
    external: true

volumes:
  caddy_data: {}

With Docker, I can attach any number of services to the projects-ingress network on the same host. This allows routing requests from the edge to any container/port pair on this network.

I use a Taskfile to manage the ingress network and proxy lifecycle:

version: "3"

tasks:
  up:
    desc: "Bring up ingress caddy"
    deps: [setup]
    cmds:
      - docker compose up -d --remove-orphans

  pull:
    desc: "Pull ingress caddy"
    cmds:
      - docker compose pull

  logs:
    desc: "Print docker logs"
    cmds:
      - docker compose logs -f

  setup:
    desc: "Set up docker network for ingress"
    status:
      - docker network inspect projects-ingress
    cmds:
      - docker network create projects-ingress

  down:
    desc: "Bring down proxy"
    cmds:
      - docker compose down
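
Bringing the edge up on a fresh host is then a single command, since up depends on the idempotent setup task:

task up      # creates projects-ingress if missing, then starts the caddy proxy
task logs    # follow the logs to watch certificate provisioning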

This setup allows the proxy to be shut down while keeping the projects-ingress network alive.

For example, this blog is deployed as follows:

services:
  titpetric-blog:
    image: nginx:mainline-alpine
    networks:
      - projects-ingress
    volumes:
      - ./site/public:/usr/share/nginx/html
    labels:
      caddy: titpetric.com
      caddy.reverse_proxy: "{{upstreams 80}}"

networks:
  projects-ingress:
    external: true

This setup supports running multiple services concurrently and avoids port conflicts between them: each service declares its own caddy labels for domain routing. SSL/TLS certificate provisioning and connection termination are built into the edge caddy service.
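
To illustrate, a second, hypothetical service needs nothing more than its own labels (the example-api name, image, domain, and port 8080 below are all placeholders); caddy routes by hostname, so internal ports never collide on the host:

services:
  example-api:
    image: example/api:latest
    networks:
      - projects-ingress
    labels:
      caddy: api.example.com
      caddy.reverse_proxy: "{{upstreams 8080}}"

networks:
  projects-ingress:
    external: true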

Adding a database to this setup is straightforward:

services:
  timescaledb:
    image: timescale/timescaledb:latest-pg16
    restart: always
    networks:
      - storage
    expose:
      - 5432
    volumes:
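      # database files persist on the host, surviving container rebuilds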
      - /mnt/services-timescaledb:/var/lib/postgresql/data

networks:
  storage:

This starts a TimescaleDB (PostgreSQL) instance, also creating the storage network automatically. A service that uses the database can add this network to its networks: definition and connect to timescaledb, or to any other container on the storage network.
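
For a consumer defined in the same compose file as the database, a hypothetical app service (the name, image, domain, and DSN are placeholders) joins both networks, serving HTTP through the edge while reaching the database over storage:

services:
  example-app:
    image: example/app:latest
    networks:
      - projects-ingress
      - storage
    environment:
      # hypothetical DSN; timescaledb resolves over the shared storage network
      - DATABASE_URL=postgres://app:placeholder@timescaledb:5432/app
    labels:
      caddy: app.example.com
      caddy.reverse_proxy: "{{upstreams 8080}}"

networks:
  projects-ingress:
    external: true
  storage:

A consumer in a separate compose project would instead mark storage as external, since compose prefixes project-local network names.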

Inventory management becomes the most important practice. For a scaled-out deployment, some form of credential storage and ACL definition becomes a necessary component of the system, rather than a set of hardcoded values.

To elaborate on a scaling example: to move the database onto a stand-alone host, I’d promote the timescaledb configuration to use network_mode: host:

services:
  timescaledb:
    image: timescale/timescaledb:latest-pg16
    restart: always
    # with host networking, Postgres listens directly on the host's port 5432;
    # expose: has no effect in this mode
    network_mode: host
    volumes:
      - /mnt/services-timescaledb:/var/lib/postgresql/data

This, in effect, sets up a resource outside the current host. Where consumers used to connect to timescaledb:5432, the service configuration now has to reflect the correct host, either with a DNS record, a static IP address and credentials, or some form of service discovery mechanism.
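
As a sketch, the consumer's configuration then names the host explicitly (the db.example.com address and credentials are placeholders for whatever the inventory provides):

services:
  example-app:
    image: example/app:latest
    networks:
      - projects-ingress
    environment:
      # the DSN now points at the stand-alone database host
      # instead of a compose network alias
      - DATABASE_URL=postgres://app:placeholder@db.example.com:5432/app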

With some planning, a simple config.json file can serve as a lightweight credential inventory and configuration mechanism, keeping complexity low. More fine-grained settings can be applied per service, keeping the principle of least privilege in mind.
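
As a minimal sketch of such an inventory (every host, credential, and ACL value below is a placeholder):

{
  "timescaledb": {
    "host": "db.example.com",
    "port": 5432,
    "user": "app",
    "password": "placeholder",
    "allow": ["example-app"]
  }
}

Each service reads only its own entry, which keeps credentials scoped per service rather than shared globally.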