Localhost development isn't going anywhere, but it will look much different in the cloud native world.
At Google, I maintained open-source, local-first software: kubernetes/minikube, which runs a local Kubernetes cluster on your laptop, and skaffold, a docker-compose equivalent for Kubernetes (among a few other projects).
Cloud APIs might actually help standardize and abstract local developer workflows.
There are two conflicting forces in local-first software: the desire for fast feedback loops (build/deploy, the inner loop) and the desire for production parity in our development environments (to avoid the dreaded "it works on my machine").
Inner loop: How long does it take to verify your application's behavior? Unit testing helps but isn't sufficient. You need feedback in other ways, e.g., `curl` an endpoint, test a deployment, get visual feedback. That means developers need a way to build and deploy a service locally. However, with many layers of packaging (e.g., Docker), every change pays for a rebuild and a redeploy, and the loop slows down.
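To make that concrete, here's a rough sketch of the manual, container-based inner loop (the image name, port, and endpoint are illustrative):

```bash
# Every code change pays for a rebuild and a redeploy before you get any feedback.
docker build -t my-app:dev .                               # rebuild the image
docker run -d --rm -p 8080:8080 --name my-app my-app:dev   # redeploy it locally
curl -s localhost:8080/healthz                             # verify behavior beyond unit tests
docker stop my-app                                         # tear down before the next iteration
```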
Production parity: you often can't run the entire stack on your machine. Maybe you substitute SQLite for PostgreSQL in development or mock out API endpoints. Even if you can run everything, you might not be able to replicate the proprietary code (or behavior) of cloud services (or their networking). Realistically, it isn't worth avoiding all higher-order services (or third parties). You can use something like LocalStack to emulate them, but emulation isn't 1:1 with production code.
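For example, LocalStack stands in for AWS APIs locally; a minimal sketch (the bucket name and throwaway credentials are illustrative), with the caveat above that the emulation isn't the real thing:

```bash
# Run the LocalStack emulator; it serves AWS-compatible APIs on a single edge port.
docker run -d --rm -p 4566:4566 --name localstack localstack/localstack

# Point the AWS CLI at the emulator with dummy credentials.
export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1
aws --endpoint-url=http://localhost:4566 s3 mb s3://dev-bucket
aws --endpoint-url=http://localhost:4566 s3 ls
```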
Docker and Kubernetes (the API) bring our development environments closer to production parity. The sell of `skaffold` was to use the same build/deploy pipeline in development, CI, and production. You could easily switch (`kubectl config use-context`) between deploying your applications to a local cluster (e.g., `minikube`), a self-hosted one, or a managed cloud instance.
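A hedged sketch of what that switch looks like (the context names here are illustrative; yours come from `kubectl config get-contexts`):

```bash
kubectl config get-contexts                # list local, self-hosted, and cloud clusters
kubectl config use-context minikube        # target the local cluster
skaffold run                               # build and deploy against it
kubectl config use-context my-gke-cluster  # target a managed cloud cluster instead
skaffold run                               # same pipeline, different target
```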
The APIs also solve some inner loop issues. How could development in Docker be faster than running the binary locally?
- No need to install dependencies (plus production parity, since they won't differ between environments)
- Incremental-ish compilation (via layer caching) for languages and build systems that lack it
- Common API for build (`docker build`) and deploy (`kubectl apply`). In `skaffold`, we formalized these APIs and built different builders and deployers that worked for the same inner loop (e.g., you could build a container with `bazel` or deploy with Helm); see the sketch below.
- Compatibility with existing hot-reload development workflows
`skaffold` has a built-in and configurable file-watcher and file-sync: sync your interpreted files, rebuild on other source-code changes, and redeploy on rebuilds and configuration changes.
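Putting those together, a minimal sketch of the loop `skaffold` formalizes (the image name and manifest path are assumptions for illustration):

```bash
# The raw loop: one build API, one deploy API.
docker build -t my-app:dev .
kubectl apply -f k8s/

# skaffold wraps both in a watch loop: rebuild on source changes, sync interpreted
# files straight into the running pod, and redeploy when manifests or configuration
# change, until you interrupt it.
skaffold dev
```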
But maybe the stickiest issue is the network. Localhost doesn't look anything like your production network.
Solving for network parity: applications on your laptop run inside a virtual machine, inside Kubernetes, inside Docker. That's at least three layers of networking. Add a fourth if you want to connect to anything external.
When I was working on these tools at Google years ago, we didn't have the best options (and I wasn't smart enough to invent them). The best we came up with was automatic port-forwarding, which was error-prone but did the bare minimum. Today, there's a better solution: lightweight VPNs like WireGuard.
There's still a lot of work to be done: it's not that seamless to use right now (if you're developing in a container locally). But startups like Tailscale make it easy to route to services and pretend you're inside a VPC or running something close to what you'd run in production.
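A hedged sketch of both the old and new approaches (the service names, ports, and CIDR below are illustrative):

```bash
# The old bare minimum: one kubectl tunnel per service, which drops whenever the pod restarts.
kubectl port-forward service/my-app 8080:80

# The VPN approach: on a node inside the development VPC (or cluster), run a
# Tailscale subnet router and advertise the VPC's address range...
tailscale up --advertise-routes=10.0.0.0/16

# ...then, on your laptop, accept those routes and talk to in-VPC services as if
# you were on the same network.
tailscale up --accept-routes
curl -s http://10.0.12.34:8080/healthz
```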
If that vision gets realized, it might not matter what runs locally and what doesn't. Latency-sensitive tools like editors could run locally, with some parts of the stack on your laptop, some emulated, and some in a development VPC on AWS. Hardware-dependent workloads (e.g., training a model) could transparently move to cloud compute.