# Backup and Restore

Since Hume.Labs uses PostgreSQL as its DBMS, we will use the engine's native export utility to export both the schema and the data of a source Hume.Labs installation.

# Postgres DB backup

If it is not already installed on your local machine, download a package that contains the pg_dump utility; the PostgreSQL download pages offer packages and installers for each OS.

Once you have a PostgreSQL installation on your machine, you can run the following command locally against the source database (on Windows the binary is named pg_dump.exe):

    pg_dump --file=<path-and-name-of-dump-file.sql> --create --username=<username> --host=<dns-name-or-ip> --port=<pg-port-default-5432> <database-name>

For example, if we need to back up the Hume DB from the staging environment, we will run:

    pg_dump --file=/home/delo4/Desktop/hume_dump.sql --create --username=mgulttxkkogayd --host=ec2-54-217-236-201.eu-west-1.compute.amazonaws.com --port=5432 d4t2s87955la21

Enter the password when prompted.
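If you want the backup to run unattended (for example from a scheduled job), the PostgreSQL client tools also read the password from the PGPASSWORD environment variable instead of prompting. A minimal sketch; the password value below is a placeholder:

```shell
# PGPASSWORD is read automatically by pg_dump/psql, so no prompt appears.
# Replace the placeholder with the real password for the source database.
export PGPASSWORD='<your-password>'

# pg_dump now runs without asking for a password, e.g. (staging example
# from above):
#   pg_dump --file=hume_dump.sql --create --username=mgulttxkkogayd \
#     --host=ec2-54-217-236-201.eu-west-1.compute.amazonaws.com \
#     --port=5432 d4t2s87955la21
```

Remember to unset the variable (or run the command in a subshell) afterwards, since it holds the password in clear text.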

# Postgres DB restore

Since the AWS username is meaningful only in Amazon's environment, feel free to replace it with a simple find-and-replace in the hume_dump.sql file.

We suggest using the following configuration:

  • username: hume
  • password: your password
  • database name: hume
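The find-and-replace can be scripted with sed. A minimal sketch, using the example AWS username from the backup step and the suggested target username "hume"; it is demonstrated on a tiny stand-in file so it can be run anywhere, but you would point sed at your real hume_dump.sql:

```shell
# Stand-in for a real dump file containing the AWS role name.
printf 'ALTER TABLE public.users OWNER TO mgulttxkkogayd;\n' > hume_dump.sql

# Replace every occurrence of the AWS username with "hume", in place.
sed -i 's/mgulttxkkogayd/hume/g' hume_dump.sql

cat hume_dump.sql   # → ALTER TABLE public.users OWNER TO hume;
```

Note that `sed -i` with no argument is GNU sed syntax; on macOS (BSD sed) use `sed -i ''` instead.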

In order to start from a clean state, we have to distinguish between two cases, depending on whether the target DB is hosted in a dockerized environment:

  • Target DB in a NON-DOCKERIZED environment: in that case, drop and recreate the Hume DB schema manually. If the database user does not exist yet, create it as described in the official PostgreSQL documentation.

  • Target DB DOCKERIZED: if the container already exists, the fastest way is deleting and recreating it through docker-compose.

    • Move to the directory containing docker-compose.yml.

    • Delete the existing container (if any) together with its volume(s). Make sure the Postgres data volume was actually deleted; if not, delete it manually:

      docker-compose rm -svf postgres

    • Create the container again:

      docker-compose up -d postgres
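For the non-dockerized case, the suggested role and database can be created with DDL along these lines (a sketch; the password is a placeholder). Writing the statements to a file lets you review them before feeding them to psql as a superuser:

```shell
# Generate the DDL for the suggested configuration: role "hume" owning a
# database "hume". Replace the placeholder password with a real one.
cat > create_hume.sql <<'SQL'
CREATE ROLE hume LOGIN PASSWORD 'your-password';
CREATE DATABASE hume OWNER hume;
SQL

# Then run it against the server as a superuser, e.g.:
#   psql --username=postgres --file=create_hume.sql
```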

Now, in both cases, you can import the dump file into the target instance (in the snippet below the host is assumed to be localhost, since the --host parameter is omitted):

    psql --username=<username> --dbname=<database-name> --file=<path-and-name-of-dump-file.sql>

For example, following the naming suggestion above:

    psql --username=hume --dbname=hume --file=/home/delo4/Desktop/hume_dump.sql

Note that we didn't specify the port (--port) or the host (--host), which default to 5432 and localhost, respectively.

In case you run a dockerized Hume, the restore command would be:

    docker exec -i <your-postgres-container> /bin/bash -c "PGPASSWORD=<postgres-password> psql --username hume hume" < backup.sql

Note: this restore process should be done while the API microservice is not running.