enumerator.dev

How to Migrate Data From One Postgres Instance to Another

Cassia Scheffer

Published on July 03, 2025

My first version of willow.camp ran a Postgres instance on the same container as the app. Before going to production, I had to migrate the data from the local Postgres instance to the managed instance.

I had a whopping 50-some records in the database. So, of course, I could have used the willow_camp_cli to migrate them, but I wanted to do it the hard way. Because sometimes the hard way is fun. At least, it's fun when the stakes are much lower than on a large production app.

Here is how I migrated the data from one database to another using an SSH tunnel, pg_dump, and pg_restore.

Maintenance Mode

I had all of two real-life users when I did this. So I didn’t need maintenance mode, but I went ahead and used it anyway. hatchbox has a maintenance mode toggle in the UI that I turned on.

Set Up a New User

DigitalOcean sets up a managed database with a root user. I wanted to create an app-specific user with access to the necessary databases.

I connected to the managed database using psql and created a user, granting the necessary permissions.

PGPASSWORD=CLUSTER_PASSWORD psql \
  -U cluster_admin \
  -h digitaloceanhost.com \
  -p 25060 \
  -d defaultdb

-- Create the application user
CREATE USER "willow-camp" WITH PASSWORD 'NEW_PASSWORD';

-- Grant connect permission to the database
GRANT CONNECT ON DATABASE defaultdb TO "willow-camp";

-- Grant usage on the schema
GRANT USAGE ON SCHEMA public TO "willow-camp";

-- Grant table permissions (choose what you need)
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO "willow-camp";

-- Grant permissions on sequences (for auto-increment/serial columns)
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO "willow-camp";

-- Grant permissions on future tables (optional)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO "willow-camp";
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE, SELECT ON SEQUENCES TO "willow-camp";
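Before moving on, it can be worth confirming the grants took effect. A quick sanity check, reusing the same connection settings as above and querying the standard information_schema view:

```shell
# List the table privileges granted to the new role
PGPASSWORD=CLUSTER_PASSWORD psql \
  -U cluster_admin \
  -h digitaloceanhost.com \
  -p 25060 \
  -d defaultdb \
  -c "SELECT table_name, privilege_type
      FROM information_schema.role_table_grants
      WHERE grantee = 'willow-camp'
      ORDER BY table_name;"
```

If the output is empty at this point, that's expected: the ON ALL TABLES grant only covers tables that exist, and the app's tables don't arrive until the restore step below.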

Export Data from the Old Database

To do this, I created an SSH tunnel to the hatchbox instance that hosted my app and database. Then, I used pg_dump to export the data. The -L flag in the command below sets up local port forwarding, so connections to my local port 5432 are tunneled to port 5432 on the hatchbox server where Postgres is running.

ssh -L 5432:localhost:5432 [email protected]

With the tunnel open, I ran pg_dump on my local machine in a separate terminal. Note that localhost:5432 now points through the tunnel to the hatchbox database.

pg_dump -Fc "postgresql://app_user:PASSWORD@localhost:5432/myapp_production" > myapp_primary_20250625_185627.dump
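One snag to watch for: if you already run Postgres locally, your port 5432 is taken. A sketch of the workaround, assuming the same host and credentials as above, is to forward a different local port (the -N flag just means "tunnel only, no remote shell"):

```shell
# Forward local port 5433 instead, since 5432 may be occupied by a local Postgres
ssh -N -L 5433:localhost:5432 root@123.456.7.89

# Then point pg_dump at the alternate local port
pg_dump -Fc "postgresql://app_user:PASSWORD@localhost:5433/myapp_production" \
  > myapp_primary_20250625_185627.dump
```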

Import Data to the New Database

I made a common mistake here. A pg_dump file includes statements like ALTER TABLE public.ar_internal_metadata OWNER TO user_a1b2c3d4e5f6;, which assigns ownership of a table to a specific user. That user only existed in my hatchbox Postgres instance, not in the managed one.

I worked around ownership issues by using the --no-owner and --no-privileges options.

PGPASSWORD=CLUSTER_PASSWORD pg_restore \
  -U cluster_admin \
  -h digitaloceanhost.com \
  -p 25060 \
  -d defaultdb \
  --no-owner \
  --no-privileges \
  -v \
  myapp_primary_20250625_185627.dump
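As an aside, pg_restore can also read a dump's table of contents without connecting to any database, which is a cheap way to confirm the file contains what you expect before (or after) restoring:

```shell
# Print the dump's table of contents; no database connection needed
pg_restore --list myapp_primary_20250625_185627.dump
```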

Verify the Data

Once pg_restore had done its job, I connected to the new database again using the psql command above and issued a few queries to check the data.

SELECT relname AS table_name, n_live_tup AS current_rows
FROM pg_stat_user_tables
ORDER BY current_rows DESC;
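Worth noting: n_live_tup comes from the statistics collector, so it's an estimate rather than an exact count. For a database as small as mine, an exact COUNT(*) per table is cheap. A sketch, assuming the same connection settings as above and a hypothetical posts table:

```shell
# Exact row count for one table; COUNT(*) scans the table, which is fine for small data.
# "posts" is a placeholder -- substitute your own table names.
PGPASSWORD=CLUSTER_PASSWORD psql \
  -U cluster_admin \
  -h digitaloceanhost.com \
  -p 25060 \
  -d defaultdb \
  -c "SELECT COUNT(*) FROM posts;"
```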

Connect the App to the New Database

Finally, I updated the database environment variables in hatchbox, which rebooted the app. I performed a quick rails console check to ensure the database was connected, then I took the app out of maintenance mode.
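The console check itself can be as simple as asking ActiveRecord which database it is talking to. A minimal sketch, run on the server and assuming a standard Rails setup:

```shell
# Confirm the app now connects to the managed database
bin/rails runner 'puts ActiveRecord::Base.connection.select_value("SELECT current_database()")'
```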

Once the app was back up, I could see queries hitting the managed instance in the DigitalOcean UI, and I tore down the old database on my hatchbox server.