PostgreSQL on VPS - installation, configuration, security


Your app crashed because PostgreSQL ran out of connections. Your database is accepting connections from the internet. Time to fix this properly with PostgreSQL and Docker Compose.

Most developers use Docker Compose for both local development and production. If you manage a bare-metal installation, the concepts still apply: just swap the Docker commands for their systemd equivalents. This guide focuses on setting up Postgres with Docker.

Why Docker Compose

We’ll use PostgreSQL 18 (latest stable version) with Docker Compose because:

  • Consistent environment across development and production
  • Easy version upgrades with minimal downtime
  • Isolated networking keeps your database secure by default
  • Simple backup and recovery with volume management
  • Resource control with Docker’s built-in limits

Docker Compose Setup

Basic Production Configuration

Create docker-compose.yml:

services:
  postgres:
    image: postgres:18.0
    container_name: myapp_postgres
    restart: always
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: your-strong-password-here
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./postgres-config/postgresql.conf:/etc/postgresql/postgresql.conf
      - ./postgres-config/pg_hba.conf:/var/lib/postgresql/data/pg_hba.conf
    networks:
      - app_network
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "myapp"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  postgres_data:

networks:
  app_network:
    driver: bridge

Notice that we are not exposing the port to the internet. Our app container runs on the same Docker network and can reach the database by its container name, so there is no need to map the port to the host at all.

If for some reason you do have to map it to the host, you can do it like this:

ports:
  - '127.0.0.1:5432:5432'

Make sure to include the 127.0.0.1 prefix so the port is only reachable from the host itself, not from the internet.
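If you do publish the port, it is worth a quick sanity check on the mapping string. A minimal sketch (the check_binding helper is hypothetical, not a Docker feature) that warns when a mapping is not pinned to loopback:

```shell
#!/bin/bash
# Warn unless a compose port mapping is pinned to the loopback address.
# check_binding is an illustrative helper, not a real Docker command.
check_binding() {
  case "$1" in
    127.0.0.1:*) echo "ok: loopback only" ;;
    *:*:*)       echo "warn: bound to a non-loopback interface" ;;
    *)           echo "warn: all interfaces" ;;
  esac
}

check_binding "127.0.0.1:5432:5432"   # safe
check_binding "5432:5432"             # exposed on every interface
```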

Essential Configuration Files

Create postgres-config/postgresql.conf:

# Connection settings
listen_addresses = '*'
port = 5432
max_connections = 100 # Adjust based on your VPS
# Memory settings (for Hetzner CAX11 - 4GB RAM)
shared_buffers = 1GB # 25% of RAM
effective_cache_size = 3GB # 75% of RAM
work_mem = 10MB
maintenance_work_mem = 256MB
checkpoint_completion_target = 0.9
# WAL settings
wal_buffers = 16MB
wal_level = replica
max_wal_size = 2GB
min_wal_size = 80MB
# Performance settings
random_page_cost = 1.1 # For SSD storage
effective_io_concurrency = 200 # For SSD storage
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_min_error_statement = error
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
# Security - SSL disabled for internal Docker network
ssl = off
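The compose file also mounts postgres-config/pg_hba.conf, which is not listed above. A minimal sketch matching this setup, assuming the default Docker bridge address space (verify yours with docker network inspect app_network):

```
# TYPE  DATABASE  USER  ADDRESS        METHOD
local   all       all                  trust
host    all       all   127.0.0.1/32   scram-sha-256
host    all       all   172.16.0.0/12  scram-sha-256
```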

Network Security

Application Integration

Your app container configuration:

services:
  app:
    image: your-app:latest
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: 'postgresql://myapp:your-strong-password-here@postgres:5432/myapp'
    networks:
      - app_network

  postgres:
    # ... previous postgres configuration

VPS-Specific Optimizations

Memory Settings by VPS Tier

Hetzner CAX11 (4GB RAM):

max_connections = 100
shared_buffers = 1GB
effective_cache_size = 3GB
work_mem = 8MB

Hetzner CAX21 (8GB RAM):

max_connections = 200
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 16MB

Hetzner CAX31 (16GB RAM):

max_connections = 400
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 32MB
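All three tiers follow the same rule of thumb: shared_buffers at 25% of RAM and effective_cache_size at 75%. A small sketch (the pg_mem_settings helper is illustrative, not a real tool) that derives those two values for any RAM size, so you can adapt the numbers to other providers:

```shell
#!/bin/bash
# Derive rule-of-thumb memory settings from total RAM in MB.
pg_mem_settings() {
  local ram_mb=$1
  echo "shared_buffers = $((ram_mb / 4))MB"           # 25% of RAM
  echo "effective_cache_size = $((ram_mb * 3 / 4))MB" # 75% of RAM
}

pg_mem_settings 4096   # CAX11: 1024MB / 3072MB
```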

Firewall Configuration

PostgreSQL is not published to the host, so no additional firewall rules are needed; Docker handles the internal networking automatically. If you do publish a port, keep the 127.0.0.1 binding from earlier, and note that ports published on all interfaces bypass UFW rules because Docker writes its own iptables rules.


Deployment and Management

Starting Your Database

# Start the stack
docker compose up -d
# Check status
docker compose ps
# View logs
docker compose logs postgres
# Connect to database
docker compose exec postgres psql -U myapp -d myapp
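After the first start, confirm that Postgres actually loaded the mounted config file rather than its built-in default. From inside psql (the last command above):

```sql
-- Should report /etc/postgresql/postgresql.conf and the tuned values
SHOW config_file;
SHOW shared_buffers;
SHOW max_connections;
```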

Database Maintenance

Create a maintenance user with limited privileges:

-- Connect as superuser
CREATE USER maintenance_user WITH PASSWORD 'maintenance-password';
GRANT MAINTAIN ON ALL TABLES IN SCHEMA public TO maintenance_user;
-- Now maintenance_user can run VACUUM, ANALYZE, REINDEX without superuser privileges
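A usage sketch, run as maintenance_user (some_table is a placeholder name; the ALTER DEFAULT PRIVILEGES form for MAINTAIN is my recollection of the PostgreSQL 17+ syntax, verify it against the docs):

```sql
-- These now work without superuser rights
VACUUM (ANALYZE) public.some_table;
REINDEX TABLE public.some_table;

-- Cover tables created in the future as well
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT MAINTAIN ON TABLES TO maintenance_user;
```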

Monitoring Database Health

Create a check_postgres.sh script to monitor the health of the database:

check_postgres.sh
#!/bin/bash
# Check connection
docker compose exec -T postgres pg_isready -U myapp
# Check connection count vs limit
docker compose exec -T postgres psql -U myapp -d myapp -c "
SELECT
(SELECT COUNT(*) FROM pg_stat_activity) as current_connections,
(SELECT setting::int FROM pg_settings WHERE name='max_connections') as max_connections,
ROUND(100.0 * (SELECT COUNT(*) FROM pg_stat_activity) /
(SELECT setting::int FROM pg_settings WHERE name='max_connections'), 2) as percentage;
"
# Check database sizes
docker compose exec -T postgres psql -U myapp -d myapp -c "
SELECT
pg_database.datname as database_name,
pg_size_pretty(pg_database_size(pg_database.datname)) as size
FROM pg_database
ORDER BY pg_database_size(pg_database.datname) DESC;
"

Backup Integration

A basic backup setup; feel free to use any other backup tool you prefer:

backup_postgres.sh
#!/bin/bash
BACKUP_DIR="/var/backups/postgres"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
# Create backup
docker compose exec -T postgres pg_dump -U myapp myapp | gzip > "$BACKUP_DIR/myapp_$DATE.sql.gz"
# Keep only last 30 days
find "$BACKUP_DIR" -name "myapp_*.sql.gz" -mtime +30 -delete
echo "Backup completed: myapp_$DATE.sql.gz"
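The find -mtime +30 line is what enforces the 30-day retention. A tiny self-contained demo of that rule, using a temp directory and empty files in place of real dumps:

```shell
#!/bin/bash
# Files older than 30 days match -mtime +30 and are deleted; newer
# files survive. Runs entirely in a throwaway temp directory.
DEMO_DIR=$(mktemp -d)
touch -d "40 days ago" "$DEMO_DIR/myapp_old.sql.gz"
touch "$DEMO_DIR/myapp_new.sql.gz"
find "$DEMO_DIR" -name "myapp_*.sql.gz" -mtime +30 -delete
ls "$DEMO_DIR"   # only myapp_new.sql.gz remains
```

To restore a dump later, pipe it back in: gunzip -c myapp_DATE.sql.gz | docker compose exec -T postgres psql -U myapp -d myapp (substitute the actual file name).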

For complete backup strategies including offsite storage, see our PostgreSQL backup guide and cloud backup setup.

What Could Go Wrong

Connection Issues

Container won’t start:

# Check logs
docker compose logs postgres
# Common issues:
# - Wrong file permissions on SSL certificates
# - Invalid postgresql.conf syntax
# - Port already in use

Can’t connect from application:

# Test connection
docker compose exec postgres psql -U myapp -d myapp
# Check network connectivity (if ping is missing from a slim image,
# try: docker compose exec app getent hosts postgres)
docker compose exec app ping postgres
# Verify pg_hba.conf allows your app's network

Performance Problems

Too many connections:

# Kill idle connections
docker compose exec postgres psql -U myapp -d myapp -c "
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle' AND state_change < NOW() - INTERVAL '1 hour';
"

High memory usage:

# Check PostgreSQL memory usage
docker stats myapp_postgres
# Tune shared_buffers if container uses too much memory

Version Upgrades

Upgrading PostgreSQL:

# Pull new image
docker compose pull postgres
# Restart with new version
docker compose up -d postgres

Major version upgrades (like 17 to 18) require a data migration; swapping the image tag alone will not work, because the on-disk data format changes between major versions:

# Always backup first!
./backup_postgres.sh
# Test the upgrade in development first
# Stop the old container
docker compose down postgres
# Update the image tag in docker-compose.yml AND point it at a fresh
# data volume, then start and restore the dump into the new cluster
docker compose up -d postgres
gunzip -c /var/backups/postgres/myapp_DATE.sql.gz | docker compose exec -T postgres psql -U myapp -d myapp

Alternative: Bare Metal Installation

If you prefer bare metal PostgreSQL:

# Ubuntu/Debian - PostgreSQL 18
sudo apt update
sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
sudo apt install -y postgresql-18 postgresql-contrib-18
# Configure same settings in /etc/postgresql/18/main/postgresql.conf
# And /etc/postgresql/18/main/pg_hba.conf

Reality Check

Docker Compose setup: 15-30 minutes first time, 5 minutes once you have the configuration files.

PostgreSQL with proper configuration handles significant traffic on modest hardware. A Hetzner CAX21 ($6.99) with 8GB RAM easily supports hundreds of concurrent users with proper connection pooling.

Most production issues come from:

  1. Wrong memory settings for your VPS tier
  2. No connection limits leading to resource exhaustion
  3. Missing SSL configuration on connections that leave the private Docker network
  4. No backup strategy

Get your VPS secured first, then monitor resources as your database grows. Docker makes this much easier to manage than bare metal installations.
