What Small PHP Teams Don’t Need
I’ve been deploying PHP applications in small team settings for a while now, and a question that keeps coming up is whether we should be using Docker, Kubernetes, or something similar. I wanted to write down how I think about that question, because the answer depends a lot on the kind of projects you’re working on.
My deployment tool is rsync. Files go to each server over SSH, a symlink flips, and the new code is live. I’ve tried Docker, Kubernetes, and Docker Swarm over the years, and each time I came back to rsync because it fit my situation better than any of them.
How I actually deploy
Rsync copies changed files to each server over SSH. Each deploy goes into a timestamped release directory, and once dependencies are installed and migrations have run, a symlink flips to make it live. The previous release is still sitting there. If something breaks, I flip the symlinks back and I’m running the old code in seconds.
for server in web1 web2 web3 web4; do
rsync -avz --delete ./src/ deploy@$server:/var/www/myapp/releases/20260210_1430/
done
# ... composer install, migrations, symlink flip on each server

A typical project has four or five servers behind a load balancer, and the deploy script loops through all of them in under a minute. Rolling back is instant because it only changes symlinks. This setup hasn’t held me back.
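The rollback described above is just a symlink flip back to the previous release. A minimal sketch of that step, using a temp directory to stand in for /var/www/myapp so it’s self-contained (the release names are illustrative):

```shell
#!/bin/sh
set -eu

app=$(mktemp -d)    # stand-in for /var/www/myapp
mkdir -p "$app/releases/20260210_1200" "$app/releases/20260210_1430"
ln -sfn "$app/releases/20260210_1430" "$app/current"   # newest release is live

# Roll back: point "current" at the second-newest release.
previous=$(ls -1d "$app"/releases/* | sort | tail -n 2 | head -n 1)
ln -sfn "$previous" "$app/current"

echo "current -> $(readlink "$app/current")"
```

The `-n` flag matters: without it, `ln` would follow the existing symlink and create the new link inside the old release directory instead of replacing the link itself.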
Docker
Docker is great when you need environment consistency across a large team, when multiple apps on the same server need different runtimes, or when you have several services that need to discover and talk to each other through Compose’s networking. I don’t have any of those problems right now.
When my services need to talk to each other they go through internal load balancers with keepalived for hot failover. Each service knows one hostname and path for the load balancer, and that’s it. Secrets live in .env files deployed with the code. The servers stay consistent because I provision them the same way, and scaling up means adding new servers to the load balancer backend. Docker’s service networking and Kubernetes’s secrets management solve the same problems more dynamically, but a load balancer and .env files already cover it and I’d rather not take on Dockerfiles, compose files, an image registry, and a different debugging workflow on top of what works.
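A minimal sketch of that configuration model: the deployed .env file holds the one load-balancer hostname each dependency is reached through, and the app just loads it. The variable names and hostnames here are illustrative, not from a real project:

```shell
#!/bin/sh
set -eu

dir=$(mktemp -d)    # stand-in for the deployed release directory

# The whole service-discovery story: one stable hostname per dependency.
cat > "$dir/.env" <<'EOF'
DB_HOST=db-lb.internal
CACHE_HOST=cache-lb.internal
EOF

# Load the file into the environment (plain KEY=VALUE lines, no quoting).
set -a
. "$dir/.env"
set +a

echo "database via $DB_HOST, cache via $CACHE_HOST"
```

If a backend server dies, keepalived moves the load balancer’s IP, the hostname keeps resolving, and nothing in the .env file has to change.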
Docker Swarm adds orchestration on top: rolling deploys, service scaling, load balancing across machines. My load balancer already distributes traffic, and a deploy script that loops through servers handles the rest.
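A rolling deploy without Swarm is the same per-server loop with a drain step around it. A sketch under stated assumptions: `drain`, `deploy`, and `enable` are stubs standing in for whatever your load balancer’s admin interface offers (for HAProxy, the admin socket can disable a backend server), not real commands:

```shell
#!/bin/sh
set -eu

drain()  { echo "drain $1";  }   # stub: take $1 out of the LB backend
deploy() { echo "deploy $1"; }   # stub: rsync + symlink flip on $1
enable() { echo "enable $1"; }   # stub: put $1 back in the LB backend

# One server at a time, so the others keep serving traffic.
for server in web1 web2 web3 web4; do
  drain  "$server"
  deploy "$server"
  enable "$server"
done
```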
Kubernetes
Kubernetes makes sense when you have a complex topology with dozens of services that need to find each other, teams deploying independently, and traffic that spikes hard enough to need auto-scaling. The service meshes, secrets management, and declarative infrastructure are well-designed for that world.
Internal load balancers already handle service communication for my projects, and adding capacity means adding servers to the backend. I looked into Kubernetes once out of professional curiosity and found myself learning pods, services, deployments, ingresses, persistent volumes, config maps, secrets, Helm charts, and Kustomize overlays before I could deploy anything. All well-designed for what Kubernetes does, but a lot of machinery when simpler infrastructure already handles the routing and failover.
Matching tools to problems
There’s an implicit progression in the industry where rsync is “beginner,” Docker is “intermediate,” and Kubernetes is “advanced,” as if they’re levels on a skill tree. I don’t think that’s right. They solve different problems at different scales, and picking the one that fits your situation is the whole point.
I’m spending my time on the product instead of the infrastructure, and I’ll add complexity when something actually needs it.
I wrote a book about this. Own Your Stack: PHP for Small Teams covers deployment, server setup, CI/CD, monitoring, and backups for small teams that don’t need the complexity.