Your app crashed because PostgreSQL ran out of connections. Your database is accepting connections from the internet. Time to fix this properly with PostgreSQL and Docker Compose.
Most developers use Docker Compose for both local development and production. If you're managing a bare metal installation, the concepts still apply: just swap the Docker commands for systemd commands. For this guide, though, we'll focus on setting up PostgreSQL with Docker.
Why Docker Compose
We'll use PostgreSQL 18 (latest stable version) with Docker Compose because:
- Consistent environment across development and production
- Easy version upgrades with minimal downtime
- Isolated networking keeps your database secure by default
- Simple backup and recovery with volume management
- Resource control with Docker's built-in limits
Docker Compose Setup
Basic Production Configuration
Create docker-compose.yml:
```yaml
services:
  postgres:
    image: postgres:18.0
    container_name: myapp_postgres
    restart: always
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: your-strong-password-here
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./postgres-config/postgresql.conf:/etc/postgresql/postgresql.conf
      - ./postgres-config/pg_hba.conf:/var/lib/postgresql/data/pg_hba.conf
    networks:
      - app_network
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "myapp"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

volumes:
  postgres_data:

networks:
  app_network:
    driver: bridge
```

Notice that we are not exposing the port to the internet. This is because our app container should be running on the same network and can connect to the database using the container name, so we don't need to map the port to the host at all.
If, for some reason, you do have to map it to the host, you can do it like this:
```yaml
ports:
  - '127.0.0.1:5432:5432'
```

Make sure to bind to 127.0.0.1 so the port is only reachable from the host itself, not from the internet.
Essential Configuration Files
Create postgres-config/postgresql.conf:
```conf
# Connection settings
listen_addresses = '*'
port = 5432
max_connections = 100            # Adjust based on your VPS

# Memory settings (for Hetzner CAX11 - 4GB RAM)
shared_buffers = 1GB             # 25% of RAM
effective_cache_size = 3GB       # 75% of RAM
work_mem = 10MB
maintenance_work_mem = 256MB
checkpoint_completion_target = 0.9

# WAL settings
wal_buffers = 16MB
wal_level = replica
max_wal_size = 2GB
min_wal_size = 80MB

# Performance settings
random_page_cost = 1.1           # For SSD storage
effective_io_concurrency = 200   # For SSD storage

# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_min_error_statement = error
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '

# Security - SSL disabled for internal Docker network
ssl = off
```
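The compose file also mounts a postgres-config/pg_hba.conf, which isn't shown above. Its exact rules depend on your network layout, but a minimal sketch restricted to local connections and the Docker bridge range (172.16.0.0/12 covers the default Docker subnets; adjust to your actual network) could look like this:

```conf
# TYPE  DATABASE  USER  ADDRESS         METHOD

# Local socket connections inside the container (used by the healthcheck)
local   all       all                   trust

# Loopback inside the container
host    all       all   127.0.0.1/32    scram-sha-256

# Other containers on the Docker bridge network (adjust to your subnet)
host    all       all   172.16.0.0/12   scram-sha-256
```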
Network Security

Application Integration
Your app container configuration:
```yaml
services:
  app:
    image: your-app:latest
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: 'postgresql://myapp:your-strong-password-here@postgres:5432/myapp'
    networks:
      - app_network

  postgres:
    # ... previous postgres configuration
```
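Rather than hardcoding the password in docker-compose.yml, Docker Compose can substitute variables from a .env file placed next to it. A minimal sketch (the variable name is illustrative):

```yaml
# docker-compose.yml (excerpt)
# Compose reads POSTGRES_PASSWORD from a .env file in the same directory,
# e.g. containing the line: POSTGRES_PASSWORD=your-strong-password-here
services:
  postgres:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

Keep the .env file out of version control.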
VPS-Specific Optimizations

Memory Settings by VPS Tier
Hetzner CAX11 (4GB RAM):
```conf
max_connections = 100
shared_buffers = 1GB
effective_cache_size = 3GB
work_mem = 8MB
```

Hetzner CAX21 (8GB RAM):

```conf
max_connections = 200
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 16MB
```

Hetzner CAX31 (16GB RAM):

```conf
max_connections = 400
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 32MB
```
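If your server doesn't match one of these tiers, the same 25% / 75% rule of thumb from the configuration above is easy to compute. A rough helper sketch (Linux only; the script name is made up for illustration):

```bash
#!/bin/bash
# pg_sizing_hint.sh - illustrative helper, not part of the original setup.
# Applies the 25% (shared_buffers) / 75% (effective_cache_size) rule of thumb.
total_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
echo "shared_buffers = $((total_mb / 4))MB           # ~25% of RAM"
echo "effective_cache_size = $((total_mb * 3 / 4))MB # ~75% of RAM"
```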
Firewall Configuration

PostgreSQL is not exposed outside Docker, so no additional firewall rules are needed. Docker handles the internal networking automatically.
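If you want to verify that nothing is listening publicly on the host, a couple of standard checks (run on the VPS itself) should come back empty:

```bash
# No PostgreSQL port should be bound on a host interface
sudo ss -tlnp | grep 5432 || echo "5432 is not exposed on the host"

# If you use UFW, there should be no allow rule for 5432 either
sudo ufw status | grep 5432 || echo "no firewall rule for 5432"
```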
Deployment and Management
Starting Your Database
```bash
# Start the stack
docker compose up -d

# Check status
docker compose ps

# View logs
docker compose logs postgres

# Connect to database
docker compose exec postgres psql -U myapp -d myapp
```

Database Maintenance
Create a maintenance user with limited privileges:
```sql
-- Connect as superuser
CREATE USER maintenance_user WITH PASSWORD 'maintenance-password';
GRANT MAINTAIN ON ALL TABLES IN SCHEMA public TO maintenance_user;

-- Now maintenance_user can run VACUUM, ANALYZE, REINDEX without superuser privileges
```
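For example, a routine vacuum can then run under that account instead of the application or superuser role (a sketch; adjust the options to your needs):

```bash
# Run a manual vacuum + analyze as the limited maintenance user
docker compose exec postgres psql -U maintenance_user -d myapp -c "VACUUM (ANALYZE);"
```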
Monitoring Database Health

Create a monitoring script, check_postgres.sh, to check the health of the database:
```bash
#!/bin/bash

# Check connection
docker compose exec -T postgres pg_isready -U myapp

# Check connection count vs limit
docker compose exec -T postgres psql -U myapp -d myapp -c "
SELECT
  (SELECT COUNT(*) FROM pg_stat_activity) as current_connections,
  (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') as max_connections,
  ROUND(100.0 * (SELECT COUNT(*) FROM pg_stat_activity)
        / (SELECT setting::int FROM pg_settings WHERE name = 'max_connections'), 2) as percentage;"

# Check database sizes
docker compose exec -T postgres psql -U myapp -d myapp -c "
SELECT
  pg_database.datname as database_name,
  pg_size_pretty(pg_database_size(pg_database.datname)) as size
FROM pg_database
ORDER BY pg_database_size(pg_database.datname) DESC;"
```

Backup Integration
A basic backup script; you can use any other backup tool you prefer:
```bash
#!/bin/bash

BACKUP_DIR="/var/backups/postgres"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Create backup
docker compose exec -T postgres pg_dump -U myapp myapp | gzip > "$BACKUP_DIR/myapp_$DATE.sql.gz"

# Keep only last 30 days
find $BACKUP_DIR -name "myapp_*.sql.gz" -mtime +30 -delete

echo "Backup completed: myapp_$DATE.sql.gz"
```

For complete backup strategies including offsite storage, see our PostgreSQL backup guide and cloud backup setup.
What Could Go Wrong
Connection Issues
Container won't start:
```bash
# Check logs
docker compose logs postgres

# Common issues:
# - Wrong file permissions on SSL certificates
# - Invalid postgresql.conf syntax
# - Port already in use
```

Can't connect from application:
```bash
# Test connection
docker compose exec postgres psql -U myapp -d myapp

# Check network connectivity
docker compose exec app ping postgres

# Verify pg_hba.conf allows your app's network
```

Performance Problems
Too many connections:
```bash
# Kill idle connections
docker compose exec postgres psql -U myapp -d myapp -c "
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < NOW() - INTERVAL '1 hour';"
```

High memory usage:
```bash
# Check PostgreSQL memory usage
docker stats myapp_postgres

# Tune shared_buffers if the container uses too much memory
```
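You can also cap the container itself with Docker's built-in resource limits mentioned at the start. A sketch using the Compose-level mem_limit and cpus keys (the values are illustrative; size them to leave room for the OS and your app):

```yaml
services:
  postgres:
    # ... existing configuration ...
    # Hard memory cap and CPU limit for the container
    mem_limit: 2g
    cpus: 2.0
```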
Version Upgrades

Upgrading PostgreSQL:
```bash
# Pull new image
docker compose pull postgres

# Restart with new version
docker compose up -d postgres
```

Major version upgrades (like 17 to 18) require data migration:
```bash
# Always backup first!
./backup_postgres.sh

# Test upgrade in development first
# Swap the image version in docker-compose.yml
# Restart the container
docker compose up -d postgres
```
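A new major version cannot read the previous version's data directory, so swapping the image on the existing volume is not enough by itself. One common dump-and-restore path, sketched with the user and volume names from this guide (the actual volume name depends on your Compose project name):

```bash
# 1. Dump the application database while the old version is still running
docker compose exec -T postgres pg_dump -U myapp myapp > pre_upgrade_myapp.sql

# 2. Stop the stack and remove the old data volume
#    (destructive - keep the dump safe; the volume is usually named <project>_postgres_data)
docker compose down
docker volume rm myapp_postgres_data

# 3. Bump the image tag in docker-compose.yml to the new major version,
#    then start PostgreSQL so it initialises a fresh data directory
docker compose up -d postgres

# 4. Restore the dump into the new cluster
docker compose exec -T postgres psql -U myapp -d myapp < pre_upgrade_myapp.sql
```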
Alternative: Bare Metal Installation

If you prefer bare metal PostgreSQL:
```bash
# Ubuntu/Debian - PostgreSQL 18
sudo apt update
sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
sudo apt install -y postgresql-18 postgresql-contrib-18

# Configure the same settings in /etc/postgresql/18/main/postgresql.conf
# and /etc/postgresql/18/main/pg_hba.conf
```

Reality Check
Docker Compose setup: 15-30 minutes first time, 5 minutes once you have the configuration files.
PostgreSQL with proper configuration handles significant traffic on modest hardware. A Hetzner CAX21 ($6.99/month) with 8GB RAM easily supports hundreds of concurrent users with proper connection pooling (see the sketch below).
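Connection pooling itself isn't covered in this guide, but as a rough illustration, a PgBouncer sidecar between the app and PostgreSQL is a common approach. The pgbouncer.ini below is only a sketch: the pool sizes and auth file path are assumptions you'd adapt, and the app would then point its DATABASE_URL at the pooler on port 6432 instead of at PostgreSQL directly:

```ini
; pgbouncer.ini (sketch)
[databases]
myapp = host=postgres port=5432 dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; return server connections to the pool after each transaction
pool_mode = transaction
; how many client connections PgBouncer will accept
max_client_conn = 500
; actual PostgreSQL connections per database/user pair
default_pool_size = 20
```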
Most production issues come from:
- Wrong memory settings for your VPS tier
- No connection limits leading to resource exhaustion
- Missing SSL configuration
- No backup strategy
Get your VPS secured first, then monitor resources as your database grows. Docker makes this much easier to manage than bare metal installations.