
Bulletproof Your Dokploy Analytics: Free Cloud Backups for Self-Hosted Umami

April 11, 2026 - 5 min read - Raymond

Tags: umami, self-hosted, cloud backup

Self-hosting your analytics with Umami is a brilliant move. You own your data, you respect your users' privacy, and you aren't feeding the big-tech tracking machine.

But self-hosting comes with one massive, anxiety-inducing risk: data loss. If your server crashes or your hosting provider has an outage, years of visitor data can vanish in an instant.

If you are using Dokploy to manage your server, you actually have an enterprise-grade backup system already built into your dashboard. The problem? Most people trap their Umami database inside their docker-compose.yml file, which makes it completely invisible to Dokploy’s automated backup engine.

In this guide, we are going to fix that. We will pull the database out, let Dokploy manage it, and set up a "Set-It-and-Forget-It" system that zips your data and sends it to a secure cloud vault every night—all for $0 a month.


The Tech Stack

  • Umami: The analytics dashboard.

  • Dokploy: Our server manager.

  • Cloudflare R2: Our free cloud storage vault (10GB free tier, zero egress fees).


Step 1: Create Your Free "Cloud Vault"

First, we need a secure, off-site location to send the data. Cloudflare R2 is an S3-compatible storage service that is incredibly fast and free for our needs.

  1. Log into your Cloudflare dashboard and navigate to Storage & Databases > R2.

  2. Click Create Bucket. Give it a recognizable name, like umami-backups.

  3. On the main R2 page, click Manage R2 API Tokens > Create API Token.

    • Permissions: Select Object Read & Write (Crucial: Dokploy needs permission to upload files!).

    • TTL: Select Forever.

  4. Copy your keys. Cloudflare will display an Access Key ID, a Secret Access Key, and a Jurisdiction-specific endpoint.

🚨 Pro Tip: Save that Secret Key in a password manager immediately. Cloudflare will only show it to you this one time!
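Since R2 is S3-compatible, you can optionally sanity-check your new credentials from your own machine with the AWS CLI before touching Dokploy. This is just a sketch: `your-account-id` and the key values are placeholders you substitute from the token screen, and the actual `aws s3 ls` call is left commented out because it needs your real keys and the CLI installed.

```shell
# Placeholders -- substitute the values Cloudflare showed you in Step 1.
ACCOUNT_ID="your-account-id"
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"

# R2 exposes a per-account S3 endpoint in this shape:
ENDPOINT="https://${ACCOUNT_ID}.r2.cloudflarestorage.com"
echo "Endpoint: $ENDPOINT"

# With real keys, listing the bucket proves the token can actually reach it:
# aws s3 ls s3://umami-backups --endpoint-url "$ENDPOINT"
```

If the listing comes back empty (the bucket is new, after all) without an access error, your keys and endpoint are good.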

Step 2: Connect the Vault to Dokploy

Now we tell our Dokploy server exactly where to find the vault.

  1. Open your Dokploy dashboard and go to Settings > Destinations.

  2. Click Add Destination and choose S3.

  3. Fill in your vault details:

    • Name: Cloudflare R2

    • Endpoint: Paste the endpoint URL from Step 1 (e.g., https://<ACCOUNT_ID>.r2.cloudflarestorage.com).

    • Region: Leave blank or type auto (or us-east-1 if your version of Dokploy requires it).

    • Bucket: umami-backups.

    • Credentials: Paste your Access Key and Secret Key.

Step 3: Create a "Managed" Database

This is the secret sauce. Instead of running Postgres inside your Umami Compose file, we create it as a standalone service. This allows Dokploy to "see" it and back it up.

  1. In your Dokploy Project, click Create Service > Database > PostgreSQL.

  2. Name it umami-db and click Deploy.

  3. Go to the Environment tab of your new database. Look for POSTGRES_DB (usually postgres) and click the eye icon next to POSTGRES_PASSWORD to reveal your auto-generated password. Keep these handy.

  4. Turn on the autopilot: Go to the Backups tab of your database.

    • Click Add Backup.

    • Destination: Select the Cloudflare R2 you created in Step 2.

    • Database Name: Type whatever was in the POSTGRES_DB field (e.g., postgres).

    • Prefix Destination: Leave this completely empty.

    • Schedule: 0 0 * * * (This cron job means "run at midnight every night").
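Under the hood, a scheduled database backup boils down to "dump, compress, upload." The sketch below mimics the dump-and-compress half locally with a dummy file so you can see where the `.sql.gz` comes from; the real `pg_dump` and upload lines are shown as comments because they need a live Postgres and real R2 credentials, and the exact filename pattern Dokploy generates may differ.

```shell
# Simulate a dump. The real job runs something along the lines of:
#   pg_dump -U postgres -h umami-db postgres > backup.sql
echo "-- fake dump for illustration" > backup.sql

# Compress with a date stamp, producing the kind of .sql.gz file
# you'll later see land in your R2 bucket.
STAMP=$(date +%Y-%m-%d)
gzip -c backup.sql > "umami-${STAMP}.sql.gz"

# Verify the archive round-trips before trusting it.
gunzip -c "umami-${STAMP}.sql.gz"

# Upload step (needs real credentials and the AWS CLI):
# aws s3 cp "umami-${STAMP}.sql.gz" s3://umami-backups/ --endpoint-url "$ENDPOINT"
```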

Step 4: Wire Umami to the Safe House

Now that we have a secure, backed-up database running, we just need to point Umami to it.

Go to your Umami App service in Dokploy and update your Docker Compose file. You can delete any old db: or volumes: sections you had previously. Your final, lightweight config should look exactly like this:

services:
  umami:
    image: ghcr.io/umami-software/umami:3.0.3
    restart: always
    environment:
      # The internal hostname (umami-db) points directly to your managed DB
      DATABASE_URL: postgresql://postgres:YOUR_PASSWORD_HERE@umami-db:5432/postgres
      DATABASE_TYPE: postgresql
      APP_SECRET: ${APP_SECRET}
    networks:
      - dokploy-network

networks:
  dokploy-network:
    external: true

Note: Replace YOUR_PASSWORD_HERE with the password you found in Step 3.

Deploy this file. Umami will connect to your new managed database over Dokploy's internal network, build its tables, and immediately start recording hits.
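Most connection failures at this step trace back to a malformed DATABASE_URL. It assembles from five pieces; assuming the defaults from Step 3 (user postgres, database postgres, internal host umami-db), here is a quick sanity check you can run anywhere:

```shell
DB_USER="postgres"
DB_PASS="YOUR_PASSWORD_HERE"   # the password revealed in Step 3
DB_HOST="umami-db"             # the service name, resolvable on dokploy-network
DB_PORT="5432"
DB_NAME="postgres"

# Same shape as the URL in the Compose file above.
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
```

If any piece is off (a stray space, the wrong host, a special character in the password that needs URL-encoding), Umami will crash-loop on startup with a connection error.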

Step 5: The Victory Lap

You don't have to wait until midnight to sleep peacefully. Let's force a test run to prove it works.

  1. Go back to your Managed Database (umami-db) > Backups tab.

  2. Click the Run Now (Play) icon next to your Cloudflare backup.

  3. Log into your Cloudflare R2 dashboard and open your umami-backups bucket.

If you see a fresh .sql.gz file sitting there, you've won.
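A backup you have never restored is a hope, not a plan. Disaster recovery is roughly: download the newest `.sql.gz`, decompress it, and pipe it into `psql`. The sketch below exercises the decompress half with a stand-in file; the download and `psql` lines are commented out because they assume the AWS CLI, your R2 credentials, and a running managed database.

```shell
# Stand-in for a downloaded backup. In a real recovery:
#   aws s3 cp s3://umami-backups/<latest-file>.sql.gz . --endpoint-url "$ENDPOINT"
echo "CREATE TABLE demo (id int);" | gzip > latest.sql.gz

# Decompress and eyeball the SQL before restoring it.
gunzip -c latest.sql.gz

# Pipe into the fresh database (needs a live Postgres):
# gunzip -c latest.sql.gz | psql "postgresql://postgres:YOUR_PASSWORD_HERE@umami-db:5432/postgres"
```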

Conclusion

Your Dokploy analytics setup is now professional-grade. Your website hits process locally on your server for lightning-fast dashboard performance, while a compressed copy is shipped off-site to Cloudflare every night. If your server ever goes up in smoke, you just spin up a new VPS, install Dokploy, restore that .sql.gz dump, and you're right back in business. Happy tracking!