Where your software runs decides who pages you at 3 AM.
A running collection of writing, courses, and tutorials on hosting modern systems: cloud providers, PaaS, managed data services, containers, orchestration, networking, and the rest of the substrate. Less about which provider, more about the trade-offs you're committing to.
This index covers where your systems run. For what to build them with, see System Architecture. For keeping them healthy in production, see Production Operations.
Cross-cutting writings on hosting trade-offs that aren't tied to one tool.
Twelve principles for portable, declaratively-configured services that thrive across modern hosting environments.
Six pillars (ops, security, reliability, performance, cost, sustainability) for evaluating any cloud workload.
Interactive map of cloud-native projects categorized by layer, maturity, and license.
Bootstrap a Kubernetes cluster manually, lab by lab, to learn what the abstractions hide.
Open-access book treating warehouse-scale machines as the unit of design for modern cloud computing.
Weekly newsletter filtering AWS announcements through cost-economist snark and practitioner skepticism.
Three free books on running planet-scale systems: SRE, the Workbook, and Building Secure & Reliable Systems.
Signal handling, layer caching, tagging, and image hygiene rules from Google's container team.
37signals' founder argues cloud economics break down for stable, mid-sized workloads worth owning.
Benchmarked value comparison across Hetzner's Intel, AMD, and Ampere fleets with price-per-score tables.
Spend your limited innovation tokens carefully; default to well-understood infrastructure for everything else.
Engineering writeups on global app runtimes, anycast, Postgres replication, and edge compute trade-offs.
Peer tools grouped by what problem they solve. The intro before each list articulates the decision space; the list is what you actually choose between.
The choice is rarely "which cloud." It's how much lock-in you can stomach. AWS has the deepest service catalog and the most painful exit cost. GCP and Azure offer competitive primitives with different pain points. The second tier (DigitalOcean, Hetzner, Vultr) wins when you're trading service depth for simpler billing and lower margins. Each major provider has its own managed data services in the section below.
Official onboarding hub with decision guides, cloud essentials, and first-build tutorials across AWS services.
Central hub for GCP product docs, quickstarts, architecture references, and code samples.
Microsoft's full Azure docs hub: getting-started paths, product catalog, SDKs, and architecture guidance.
Product docs for Droplets, App Platform, Managed Databases, Kubernetes, and developer tooling.
Hetzner's cloud product docs covering servers, networks, volumes, firewalls, billing, and API.
Independent benchmarks across Hetzner's CPX, CCX, and CAX fleets with cost-per-score recommendations.
Quickstarts, guides, and references for Vultr Compute, Managed Database, Kubernetes, and Object Storage.
Welcome path: learn OCI basics, create your first instances, and explore role-specific guides.
Hosting where you don't think about hosting. Fly, Railway, Render, Vercel each let you push code and get a URL. The differences are data tier, regional control, and how much they abstract. Pick by whether you want to think about regions, networking, and persistence, or specifically not think about them.
Install flyctl, fly launch, then learn Machines, Volumes, networking, and language-specific deploy guides.
Quick start, CLI, templates, and framework guides for deploying apps and databases on Railway.
Ship-your-first-app quickstarts plus configure and operate guides for services, databases, and Docker.
Framework deploys, Functions, Image Optimization, environments, and the broader AI-cloud platform.
Build, deploy, manage, and extend sites with Netlify's frameworks, Functions, and Edge Functions.
Language-organized guides for deploying apps, Postgres, pipelines, and the original Procfile / buildpack model.
Deploy static and full-stack apps with Git integration, Pages Functions, and Cloudflare's global network.
Postgres-as-a-service is the new default for most teams. Neon, Supabase, Tiger Data, and Crunchy Bridge each take different trade-offs on branching, vector, time-series, and pricing. PlanetScale runs MySQL with serverless branching. Turso runs distributed SQLite. The decision is usually which workload you're optimizing for, plus how much you trust their backup story. The database itself is a System Architecture decision; this list is about who runs it for you.
Serverless Postgres with autoscaling, branching, and instant restore; framework quickstarts included.
Postgres-backed BaaS: Database, Auth, Storage, Realtime, Edge Functions, and per-framework quickstarts.
Time-series Postgres: hypertables, continuous aggregates, columnstore compression, and Tiger Cloud.
Fully managed Postgres with dashboard, cb CLI, and REST API for connections, networking, and logging.
Docs for PlanetScale's Vitess-based MySQL and PostgreSQL platforms, deployments, branching, and pricing.
Embedded and cloud SQLite-compatible databases with vector search, sync, and AgentFS.
Serverless Redis, Vector, QStash, and Workflow with scale-to-zero, per-request pricing.
Managed open-source data services (Postgres, Kafka, ClickHouse, OpenSearch) across multiple clouds.
The container is the unit of deployment everywhere except where it isn't. Docker still owns mindshare; Podman is the rootless alternative. The registry choice matters more than the build tool. That's where bandwidth, image scanning, and supply-chain attacks live.
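As a sketch of the image-hygiene rules mentioned above, here is a multi-stage Dockerfile for a hypothetical Node app (the base image, paths, and scripts are placeholders; adjust for your stack). The point is the shape: dependency layers that cache until the lockfile changes, a build stage that never ships, and a non-root runtime user.

```dockerfile
# Build stage: full toolchain, discarded from the final image.
FROM node:22-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci              # this layer is cached until package files change
COPY . .
RUN npm run build       # hypothetical build script

# Runtime stage: only what the app needs to run.
FROM node:22-slim
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
USER node               # drop root inside the container
CMD ["node", "dist/server.js"]
```

Ordering COPY instructions from least- to most-frequently-changed is what makes layer caching pay off; copying the whole source tree before `npm ci` would invalidate the dependency layer on every commit.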
Essential learning path: install, build, run, and ship containers with the canonical Docker tooling.
Daemonless, rootless container engine; install, run, manage, network, and checkpoint containers.
Script OCI image builds without a daemon or Dockerfile. Install, tutorials, and release news.
Push, pull, and manage public/private images; webhooks, CI/CD integrations, and Trusted Content.
Authenticate, push, pull, tag, and label OCI images at ghcr.io tied to your GitHub repos.
User guides and API references for ECR private and public registries, IAM, and CLI usage.
Kubernetes is right when scaling out makes operating it cheaper than not. That's later than most teams reach for it. Docker Compose on a VM is often enough. Nomad is a serious alternative if you want orchestration without the YAML universe. Cloud Run and ECS sit in between: orchestration without the operational tax. Coolify and Kamal are the new wave for teams that want a single command to deploy.
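The "Docker Compose on a VM" baseline argued for above is small enough to show whole. A minimal sketch with a hypothetical web service and a Postgres container (service names, ports, and credentials are placeholders, not a production-ready config — in particular, real secrets belong in an env file or secrets manager):

```yaml
services:
  web:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      db:
        condition: service_healthy   # wait for Postgres before starting
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data   # survive container recreation
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app"]
      interval: 5s
      retries: 5

volumes:
  pgdata:
```

One `docker compose up -d` and you have the whole stack; the honest comparison is this file against the YAML universe a Kubernetes deployment of the same two services would require.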
Official tutorials: Kubernetes Basics, stateful apps, services, and ConfigMaps.
Checklist covering app health, scalability, observability, security, and resource governance.
Official learning path: install, run jobs, schedule services, batch jobs, and integrate Consul.
Get started defining multi-container apps with compose.yaml, networks, volumes, and profiles.
Official sample compose files: Django + Postgres, Flask + Redis, Nginx, and other common stacks.
ECS developer guide covering capacity (EC2, Fargate, Anywhere), task definitions, services, and scaling.
Run request- and event-driven containers serverlessly with quickstarts, custom domains, and authentication.
Self-hosted PaaS for apps, databases, and services. Heroku/Netlify-style UX on your own servers.
Install Kamal, run kamal init/setup, and deploy Dockerized apps to bare servers with zero downtime.
Install Dokku, configure SSH, and deploy your first app to a single-server open-source PaaS.
The proxy is where your traffic story lives: TLS, routing, rate limits, auth. Nginx and HAProxy are the battle-tested defaults. Caddy makes TLS automatic. Reach for Envoy when you're building a service mesh, Traefik when you're already in orchestrated containers.
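To make "Caddy makes TLS automatic" concrete: a complete Caddyfile that terminates HTTPS for a domain and proxies to a local app can be this short (domain and port are placeholders). Caddy obtains and renews the certificate on its own; there is no separate TLS config to write.

```caddy
app.example.com {
    encode gzip
    reverse_proxy localhost:8080
}
```

The equivalent Nginx setup needs a certbot integration and a handful of ssl_* directives; that difference is most of Caddy's pitch.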
Official intro: serving static content, reverse proxy, FastCGI, and load balancing basics.
Operator guide covering configuration patterns, hardening, performance, and debugging.
Run Traefik with Docker, discover services automatically, and route HTTP traffic.
Run Caddy as a static file server, reverse proxy, and HTTPS terminator with automatic TLS.
Caddyfile syntax, matchers, directives, and snippets for typical reverse-proxy setups.
Introduction to load balancing concepts, frontends, backends, and ACLs in HAProxy.
Canonical reference for every config directive: timeouts, health checks, stick tables, SSL.
Run Envoy in Docker, configure listeners, clusters, and basic HTTP routing.
Working Docker Compose examples for front proxy, gRPC bridge, JWT auth, and more.
Networking is the layer everyone forgets until it bites them. The big three are DNS (where your domain lives), CDN (where users hit before your origin), and private networking (how your services find each other without going through the internet). Tailscale changed the calculus on the last one: operating a private network is now as easy as adding users to a group.
Unified portal for DNS, CDN, WAF, Workers, R2, and Zero Trust products with code-first examples.
Domain registration, authoritative DNS routing, health checks, traffic flow, and VPC resolver.
Distribute content from AWS edges with origins, distributions, caching, and SaaS multi-tenant modes.
CDN, security, and edge Compute reference covering VCL, configuration, and platform APIs.
CDN, Stream, Storage, Optimizer, DNS, and Magic Containers quickstarts and reference docs.
Create a tailnet, install clients, add devices, and configure your first mesh in minutes.
Architecture writeup on WireGuard, the coordination server, NAT traversal, and DERP relay fallbacks.
Zero Trust access docs: connectors, resources, identity, policies, and replacing traditional VPNs.
Generate keys, configure interfaces, traverse NAT, and bring up a minimal WireGuard tunnel.
Sign up, create a network, install the client, authorize devices, and verify mesh connectivity.
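The minimal WireGuard tunnel covered in the quickstart above comes down to one config file per peer. A hedged sketch of one side (keys, addresses, and the endpoint are placeholders you generate with `wg genkey` / `wg pubkey`; bring it up with `wg-quick up wg0`):

```ini
[Interface]
# This machine's private key -- keep it secret.
PrivateKey = <private-key>
Address = 10.0.0.2/24
ListenPort = 51820

[Peer]
# The other end's public key.
PublicKey = <peer-public-key>
Endpoint = vpn.example.com:51820   # assumes the peer has a stable address
AllowedIPs = 10.0.0.0/24           # route only the private subnet through the tunnel
PersistentKeepalive = 25           # keep NAT mappings alive from behind NAT
```

Tailscale's value proposition is essentially generating and distributing these files (plus NAT traversal and key rotation) so you never touch them.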
The functional argument for serverless was "don't manage servers." The actual argument that won was "don't manage cold starts." Edge pushes compute closer to users at the cost of less local state. Pick when latency-to-user matters more than compute density, or when you specifically want stateless scale-to-zero pricing.
Serverless edge compute with Wrangler CLI, multi-language runtimes, and bindings to Cloudflare services.
Stateful Workers combining compute with storage, WebSocket hibernation, and scheduled alarms.
Run server-side code on Vercel with Fluid compute, autoscaling, and region-aware data locality.
Lambda developer guide: triggers, runtimes, permissions, scaling, layers, SnapStart, and VPC integration.
Serverless WebAssembly edge runtime; supported languages, deploy tooling, and logging integrations.
Serverless JS/TS platform with apps, KV, cron, environments, and the REST API reference.
For 99% of teams, the OS is whatever the platform gives you. The real decision points are long-term support cycles (Ubuntu LTS, Debian stable, RHEL/Rocky), minimal surface area (Alpine, Wolfi for hardened images), and reproducibility (NixOS if you've drunk that kool-aid).
Install, configure, secure, and administer Ubuntu Server LTS, covering networking, virtualization, and HA.
Installation guide, FAQ, release notes, admin handbook, and the broader Debian Documentation Project.
Install, configure, and develop with musl/BusyBox-based Alpine. The de facto minimal container base.
Guides, books, and labs for installing and operating the community RHEL-compatible enterprise Linux.
Install Nix, take first steps, and dig into the Nix, Nixpkgs, and NixOS manuals plus Nix Pills.
Wolfi is a container-native Linux undistro built for supply-chain security; intro and how it differs.
Engineers consistently publishing on cloud strategy, containers, and the production substrate.
Short answers grounded in the work of practitioners running real production systems.
Probably not yet. Kubernetes pays off when scaling out makes operating it cheaper than not (multi-region, many services, a dedicated platform team). For most teams, Docker Compose on a VM, a managed PaaS, or Cloud Run / ECS will run the same workload with a fraction of the operational tax. The honest question is whether you're choosing Kubernetes for the workload or for the résumé.
The alternatives are real. Hetzner gives you server-grade compute at a fraction of AWS prices. Fly and Render handle most of what Elastic Beanstalk did with a tenth of the surface area. The honest reasons to choose AWS now are deep service integrations (RDS, S3, Lambda, EventBridge), enterprise contracts you're locked into, or a specific compliance bar. For most teams, the lock-in cost has gotten bigger relative to the alternatives.
Source: DHH: Why we're leaving the cloud
Almost always, once you factor in operational cost. Neon, Supabase, Tiger Data, and Crunchy Bridge each handle backups, point-in-time recovery, replication, and version upgrades. Self-hosting wins on cost at significant scale and when you need extensions the hosts don't support. The break-even is later than most teams think; the day you need an unplanned recovery, you'll be glad you outsourced.
Each is the right answer at a different point. A VM is the simplest substrate: predictable cost, full control, you do the ops. PaaS wins when you'd rather pay for someone else's ops. Containers are the unit of deployment if you have multiple services or environments. Serverless wins on idle cost and burstable workloads, loses on long-running connections and warm state. Default to the simplest thing that scales to your year-2 traffic, not your year-5 fantasy.
Lock-in is a cost. Every cost is fine if the value's higher. The question isn't avoiding lock-in, it's matching depth-of-integration to your exit cost tolerance. Use standard interfaces (Postgres, S3-compatible, OAuth, OpenTelemetry) wherever the value of vendor-specific features doesn't justify the migration cost. AWS's value is exactly the inverse: deep proprietary integrations. If you don't need them, don't pay the lock-in tax.
Source: Gregor Hohpe: Don't get locked up into avoiding lock-in
A single VM running Docker Compose, a managed Postgres, a managed Redis, and Cloudflare in front. That setup runs more production traffic than most teams will ever ship. Add secrets in a real manager (Vault, Doppler, 1Password), observability via Honeycomb or Logfire, and deploys via Kamal or GitHub Actions. You can run a real company on this stack and only outgrow it when you actually have to.
As soon as you have more than a couple of servers that need to talk to each other without going through the public internet. Tailscale gives you SSH access, internal services, and database connections from a developer's laptop without managing a bastion or a VPN. The cost is per-user pricing at scale; the value is that "connect to the internal network" becomes the same thing as "log in."
Source: Tailscale: How Tailscale works
Original writing coming.
Smarter Dev essays, walkthroughs, and short courses on hosting production systems will land here as they're written.
Join the Discord to be notified.