Your Staging Server Is a Directory

A lot of small teams skip staging entirely because they think it means duplicating their infrastructure. Another VPS, another database server, another monthly bill. So they test locally, deploy to production, and hope for the best. When something breaks, users find it first.

On a typical VPS running Nginx, staging is a second directory, a second database, and a second vhost. Fifteen minutes of setup, and you have a place to verify deploys before they hit production.

The setup

You already have your production app at /var/www/myapp/. Create a staging directory next to it:

/var/www/
    myapp/           # Production
    myapp-staging/   # Staging
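
Two commands create it (assuming www-data owns your web roots, the Debian and Ubuntu default for Nginx and PHP-FPM):

sudo mkdir -p /var/www/myapp-staging
sudo chown -R www-data:www-data /var/www/myapp-staging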

Create a staging database and a dedicated user (run these from a root MySQL session, sudo mysql on a default Debian or Ubuntu install):

CREATE DATABASE myapp_staging CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'myapp_staging'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON myapp_staging.* TO 'myapp_staging'@'localhost';
FLUSH PRIVILEGES;
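
A quick check that the user and grant work (it prompts for the password and should print 1):

mysql -u myapp_staging -p myapp_staging -e "SELECT 1"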

Add an Nginx vhost at /etc/nginx/sites-available/myapp-staging:

server {
    listen 443 ssl;
    http2 on;
    server_name staging.myapp.example.com;

    ssl_certificate /etc/letsencrypt/live/staging.myapp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/staging.myapp.example.com/privkey.pem;

    root /var/www/myapp-staging/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Enable it, get an SSL cert, reload Nginx. One ordering note: the certificate paths above don't exist until certbot has run, so nginx -t will fail on a fresh setup. Start the vhost with a plain listen 80; directive in place of the ssl lines, and certbot --nginx will rewrite it into the HTTPS version shown:

sudo ln -s /etc/nginx/sites-available/myapp-staging /etc/nginx/sites-enabled/
sudo certbot --nginx -d staging.myapp.example.com
sudo nginx -t && sudo systemctl reload nginx

Point staging.myapp.example.com at your server's IP in DNS before running certbot (the HTTP-01 challenge has to reach your server), and the infrastructure work is done.
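
As a smoke test from your workstation (expect a 200, or a redirect if your app sends anonymous visitors to a login page):

curl -I https://staging.myapp.example.com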

The workflow

Keep a .env.staging file on your workstation. Same keys as production, different values: staging database, APP_DEBUG=true, MAIL_DRIVER=log so you don’t accidentally email real users.
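
A minimal sketch of what that file might hold, assuming Laravel-style key names (your framework's keys may differ):

APP_ENV=staging
APP_DEBUG=true
APP_URL=https://staging.myapp.example.com

DB_DATABASE=myapp_staging
DB_USERNAME=myapp_staging
DB_PASSWORD=change-me

MAIL_DRIVER=log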

Deploy to staging the same way you deploy to production. If you use an rsync script, make a copy that points at the staging directory. Make sure .env is listed in rsync-excludes.txt, or the --delete in the first command will remove the file the second command uploads:

rsync -avz --delete --exclude-from="rsync-excludes.txt" ./ myserver:/var/www/myapp-staging/
rsync .env.staging myserver:/var/www/myapp-staging/.env

Deploy to staging, test in the browser, then deploy to production. Same script, different target. The deploy process itself gets tested twice.
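
One way to avoid maintaining two near-identical scripts is a single script that takes the target as an argument. This is a sketch assuming the paths above; deploy.sh and .env.production are hypothetical names:

#!/usr/bin/env bash
# deploy.sh -- usage: ./deploy.sh staging|production
set -euo pipefail

case "${1:-}" in
    staging)    dir=/var/www/myapp-staging; env_file=.env.staging ;;
    production) dir=/var/www/myapp;         env_file=.env.production ;;
    *)          echo "usage: $0 staging|production" >&2; exit 1 ;;
esac

# Sync the code, then the matching environment file.
rsync -avz --delete --exclude-from="rsync-excludes.txt" ./ "myserver:$dir/"
rsync "$env_file" "myserver:$dir/.env"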

The hard rule

The staging database must never touch production data. Use a sanitized copy or seed data. If you import a production dump for realistic testing, strip email addresses and password hashes first. Getting this wrong means staging sends real users a password reset email, or worse.
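
A minimal sanitization pass, assuming a users table with email and password columns (adapt the names to your schema), run right after importing the dump:

sudo mysql myapp_staging <<'SQL'
-- Rewrite every address onto the reserved .test TLD and invalidate
-- every password hash.
UPDATE users
   SET email    = CONCAT('user', id, '@example.test'),
       password = 'sanitized';
SQL

The .test TLD is reserved, so even a misconfigured mailer has nowhere real to deliver.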

Set MAIL_DRIVER=log in your staging .env as a safety net. Even if application code tries to send email, it writes to a log file instead.

What this doesn’t catch

Staging on the same box won’t catch problems that only appear on different hardware, a different kernel, or under production load. If your server has 4 GB of RAM and production uses 3 GB, staging is competing for the remaining 1 GB. For most small PHP applications this doesn’t matter in practice, because the staging deployment sees one user (you) running through a checklist, not hundreds of concurrent requests.

If you eventually outgrow this, you’ll know: staging will start feeling slow, or you’ll hit a bug that only reproduces under load. At that point, a second server makes sense. Until then, a directory and a vhost give you what matters most: code tested in a server environment before users see it.

I wrote a book about this. Own Your Stack: PHP for Small Teams covers staging environments, deployment scripts, and the full workflow from local development to production for small teams.