
Commit

Merge pull request #28 from sethgoldin/2.0
2.0
sethgoldin authored Dec 26, 2022
2 parents 3fd1e9b + 34610d4 commit 72cf5b0
Showing 5 changed files with 106 additions and 129 deletions.
102 changes: 43 additions & 59 deletions README.md
@@ -1,21 +1,20 @@
# DaVinci Resolve PostgreSQL Workflow Tools
## Effortlessly set up automatic backups and automatic optimizations of DaVinci Resolve Studio's PostgreSQL databases
## Effortlessly set up automatic backups and automatic optimizations of DaVinci Resolve's PostgreSQL databases

Here are some workflow tools designed for **macOS** or **Linux** systems that are running as PostgreSQL servers for DaVinci Resolve Studio.
Here are some workflow tools designed for **macOS** or **Linux** systems that are running as PostgreSQL servers for DaVinci Resolve.

This repository includes:
* For macOS:
* A `bash` script that will let you effortlessly create, load, and start `launchd` user agents that will automatically backup and automatically optimize your PostgreSQL databases
* For macOS Ventura:
* A `bash` script that will let you effortlessly create, load, and start `launchd` daemons that will automatically backup and automatically optimize your PostgreSQL 13 databases
* A `bash` script to *uninstall* the above tools
* For CentOS Linux:
* A `bash` script for CentOS Linux that will let you effortlessly create and start `systemd` units and timers that will automatically backup and automatically optimize your PostgreSQL databases
* For Red Hat Enterprise Linux 9:
* A `bash` script that will let you effortlessly create and start `systemd` units and timers that will automatically backup and automatically optimize your PostgreSQL 13 databases
* A `bash` script to *uninstall* the above tools

## How to use on macOS
1. Download the repository `davinci-resolve-postgresql-workflow-tools-master` to your `~/Downloads` folder.
2. In Terminal, execute the following command to run the script:
Download the `macos-install.sh` file and execute the script with `sudo` permissions:
```
% ~/Downloads/davinci-resolve-postgresql-workflow-tools-master/macos-install.sh
sudo sh macos-install.sh
```

The script will then:
@@ -26,18 +25,17 @@ The script will then:

Once you run through this script, you will be automatically backing up and optimizing your database according to whatever parameters you entered.

The script creates macOS `launchd` user agents, so these automatic backups and automatic database optimizations will continue on schedule, even after the system is rebooted. It's neither necessary nor desirable to run the script more than once per individual Resolve database.
The script creates macOS `launchd` daemons, so these automatic backups and automatic database optimizations will continue on schedule, even after the system is rebooted. It's neither necessary nor desirable to run the script more than once per individual Resolve database.

To verify that everything is in working order, you can periodically check the log files located in `~/DaVinci-Resolve-PostgreSQL-Workflow-Tools/logs`.
To verify that everything is in working order, you can periodically check the log files located in `/Users/Shared/DaVinci-Resolve-PostgreSQL-Workflow-Tools/logs/`.
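As a quick sketch for checking the current month's log, assuming the macOS logs follow the same monthly `logs-YYYY_MM.log` naming pattern that the `date "+%Y_%m"` calls in the Linux install script further down this diff use (the exact filename on macOS is an assumption here):
```
# Compute this month's log filename the same way the generated scripts do,
# then show its most recent entries if it exists.
logfile="/Users/Shared/DaVinci-Resolve-PostgreSQL-Workflow-Tools/logs/logs-$(date "+%Y_%m").log"
if [ -f "$logfile" ]; then
  tail -n 20 "$logfile"
else
  echo "No log yet for $(date "+%Y_%m")"
fi
```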

### `zsh` vs. `bash`
Starting in macOS Catalina, the default shell is `zsh`. However, these scripts' shebangs still specify the use of `bash`, which is still included in Catalina. The scripts do not use any incompatible word splitting or array indices, so the scripts should be easily converted to native `zsh` in future releases of macOS. For more information, see [Scripting OS X](https://scriptingosx.com/zsh/).
macOS Ventura's default shell is `zsh`. However, these scripts' shebangs still specify `bash`, which has remained included in macOS ever since the default switched to `zsh` in macOS Catalina. The scripts do not use any incompatible word splitting or array indices, so they should be easily converted to native `zsh` in future releases of macOS. For more information, see [Scripting OS X](https://scriptingosx.com/zsh/).

## How to use on CentOS
1. From an admin user account [neither `root` nor `postgres`], download the repository `davinci-resolve-postgresql-workflow-tools-master` to your `~/Downloads` folder.
2. In Terminal, from within your `~/Downloads/davinci-resolve-postgresql-workflow-tools-master` folder, execute the script:
## How to use on Red Hat Enterprise Linux
From an administrative user account, download the `enterprise-linux-install.sh` file and then execute the script:
```
$ sudo ./centos-install.sh
sudo sh enterprise-linux-install.sh
```

The script will then:
@@ -57,35 +55,37 @@ To verify that everything is in working order, you can periodically check the lo

## System requirements

This script has been tested and works for PostgreSQL servers for:
- DaVinci Resolve Studio 14
- DaVinci Resolve Studio 15
- DaVinci Resolve Studio 16
This script has been tested and works for PostgreSQL 13 servers for:
- DaVinci Resolve 18

### macOS

* macOS Sierra 10.12.6 or later
* PostgreSQL 9.5.4 or later (as provided by the DaVinci Resolve Studio installer)
* macOS Ventura
* EnterpriseDB PostgreSQL 13, as included from Blackmagic Design's DaVinci Resolve Project Server app

### CentOS
### Red Hat Enterprise Linux

* CentOS 7.3 or later
* PostgreSQL 9.5.4 or later
* Red Hat Enterprise Linux 9
* PostgreSQL 13 from [RHEL's included DNF repository](https://www.postgresql.org/download/linux/redhat/)

## DaVinci Resolve 18 terminology for "Project Library"

Beginning with DaVinci Resolve 18, the Project Manager window and the Project Server GUI app refer to "project libraries." These are just individual PostgreSQL databases, which previous versions of DaVinci Resolve called "databases." These scripts refer to the names of the "databases" you want to back up and optimize.

## Background

Jathavan Sriram [wrote a great article back in 2014](https://web.archive.org/web/20141204010929/http://jathavansriram.github.io/2014/04/20/davinci-resolve-how-to-backup-optimize/) about how to use pgAdmin III tools in `bash`, instead of having to use the `psql` shell.

The core insights from his 2014 article still apply, but several crucial changes need to be made for modern systems:
1. Apple [deprecated `cron` in favor of `launchd`](https://developer.apple.com/library/content/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/ScheduledJobs.html).
2. Starting with DaVinci Resolve 12.5.4 on macOS, DaVinci Resolve has been using PostgreSQL 9.5.
3. The locations of `reindexdb` and `vacuumdb` in PostgreSQL 9.5.4 have changed from what they were in PostgreSQL 8.4.
2. From DaVinci Resolve 12.5.4 through 17, DaVinci Resolve used PostgreSQL 9.5. From DaVinci Resolve 18 onward, PostgreSQL 13 is recommended.
3. The locations of the `pg_dump`, `reindexdb`, and `vacuumdb` binaries in PostgreSQL 13 are different from what they were in PostgreSQL 8.4 and 9.5.
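On Red Hat Enterprise Linux, the PGDG packages place the PostgreSQL 13 client binaries under a versioned directory, which is the path the install script further down this diff calls:
```
/usr/pgsql-13/bin/pg_dump
/usr/pgsql-13/bin/reindexdb
/usr/pgsql-13/bin/vacuumdb
```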

## What this script does

On macOS, this script creates and installs `bash` scripts and `launchd` agents that, together, regularly and automatically backup and optimize the PostgreSQL databases that DaVinci Resolve Studio uses.
On macOS, this script creates and installs `bash` scripts and `launchd` daemons that, together, regularly and automatically backup and optimize the PostgreSQL databases that DaVinci Resolve uses.

On CentOS Linux, this script creates and installs `bash` scripts, `systemd` units, and `systemd` timers that, together, regularly and automatically backup and optimize the PostgreSQL databases that DaVinci Resolve Studio uses. After a reboot, each `systemd` timer will be delayed by a random number of seconds, up to 180 seconds, so as to stagger the database utilities for optimal performance.
On Red Hat Enterprise Linux, this script creates and installs `bash` scripts, `systemd` units, and `systemd` timers that, together, regularly and automatically backup and optimize the PostgreSQL databases that DaVinci Resolve uses. After a reboot, each `systemd` timer will be delayed by a random number of seconds, up to 180 seconds, so as to stagger the database utilities for optimal performance.
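The staggering relies on `systemd`'s built-in randomized delay. A minimal sketch of what one generated timer unit might look like — the unit name, schedule, and exact keys here are illustrative, not a verbatim copy of what the script writes:
```
[Unit]
Description=Timer for backing up the <dbname> PostgreSQL database

[Timer]
OnCalendar=hourly
RandomizedDelaySec=180

[Install]
WantedBy=timers.target
```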

## Configuration

@@ -97,35 +97,19 @@ Make sure that you create the directory where your backups are going to go *befo

If you have any spaces in the full path of the directory where your backups are going, be sure to escape them with `\` when you run the script.

When behind a properly configured network-wide firewall, the `pg_hba.conf` file should be configured so that these three lines use the `trust` method of authentication:
```
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
```

N.B. Running the GUI app **DaVinci Resolve Project Server** somehow seems to change the authentication method back to `md5`. The scripts might continue to run, but because they'll be throwing errors, the logging won't be accurate. As a workaround, *don't open this GUI app,* or you'll have to go back to the `pg_hba.conf` file and manually change these lines back to `trust` again.
The script can be run from any admin user account, so long as it's run with `sudo` for `root` permissions.

The script should be run from a regular user account with admin privileges. Do not run this script from either the `root` or `postgres` user accounts.
Because the script generates `launchd` daemons, the backups and optimizations will occur if the machine is running, even without any user being logged in.

Because the script generates `launchd` user agents, the backups and optimizations will only occur while logged into the same account from which the script was run. Stay logged into the same account.

### CentOS
### Red Hat Enterprise Linux

The `.pgpass` file that the script creates assumes that the password for your PostgreSQL database is `DaVinci`, which is a convention from Blackmagic Design.

Make sure that you create the directory where your backups are going to go *before* running the script.

Be sure to use the absolute path for the directory into which the backups will go.
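For reference, the single line the script writes into `.pgpass` (as seen in the install script further down this diff) follows PostgreSQL's `hostname:port:database:username:password` format:
```
localhost:5432:*:postgres:DaVinci
```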

When behind a properly configured network-wide firewall, the `pg_hba.conf` file should be configured so that these three lines use the `trust` method of authentication:
```
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
```

The script should be run from a regular user account with admin privileges. Do not run this script from either the `root` or `postgres` user accounts.
The script can be run from any admin user account, so long as it's run with `sudo` for `root` permissions.

## Restoring from backup

@@ -135,16 +119,16 @@ In the event of a disk failure hosting the PostgreSQL database, the procedure to
1. Set up a new, totally fresh PostgreSQL server
2. Create a fresh PostgreSQL database on the server, naming your database whatever you want it to be named
1. If the version of Resolve you're using is the same version you were using when the `*.backup` file was created, you can just connect your client workstation and create a new blank database via the GUI;
2. But if your `*.backup` file was created for some earlier version of Resolve, you'll need to hop into the `postgres` superuser account and create a _completely blank_ database:
2. But if your `*.backup` file was created for some earlier version of Resolve, you'll need to become the `postgres` user with `root` permissions and create a _completely blank_ database:
```
$ sudo su - postgres
$ createdb <newdatabasename>
```
3. From a normal user account on the PostgreSQL server [not `root` nor `postgres`], run the command:
3. Run the command:
```
$ pg_restore --host localhost --username postgres --single-transaction --clean --if-exists --dbname=<dbname> <full path to your backup file>
$ pg_restore --host localhost --username postgres --password --single-transaction --clean --if-exists --dbname=<dbname> <full path to your backup file>
```
You might see some error messages when you run the `pg_restore` command, but they are harmless, [according to the PostgreSQL documentation](https://www.postgresql.org/docs/9.5/static/app-pgrestore.html).
You'll need to enter the password for the `postgres` user. This is the password for the PostgreSQL database user `postgres`, not the OS user.

4. If the version of Resolve you're using is the same version you were using when the `*.backup` file was created, you should be good to go; but if your `*.backup` file was created for an earlier version of Resolve, you should now be able to connect to the database via the GUI on the client and then upgrade it for your current version.

@@ -155,17 +139,17 @@ In the event of a disk failure hosting the PostgreSQL database, the procedure to
If you wish to stop automatically backing up and optimizing a particular database, you can run `macos-uninstall.sh`:

```
% sudo ./macos-uninstall.sh
sudo sh macos-uninstall.sh
```

The script will ask you what database you want to stop backing up and optimizing. The database you specify will then stop being backed up, stop being optimized, and all relevant files will be safely and cleanly removed from your system. The database itself will remain untouched.
The script will ask you what database you want to stop backing up and optimizing. The database you specify will then stop being backed up, stop being optimized, and all relevant files will be safely and cleanly removed from your system. The database itself, as well as the backup files that have already been generated, will remain untouched.

### Uninstall on CentOS
### Uninstall on Red Hat Enterprise Linux

If you wish to stop automatically backing up and optimizing a particular database, you can run `centos-uninstall.sh`:
If you wish to stop automatically backing up and optimizing a particular database, you can run `enterprise-linux-uninstall.sh`:

```
$ sudo ./centos-uninstall.sh
sudo sh enterprise-linux-uninstall.sh
```

The script will ask you what database you want to stop backing up and optimizing. The database you specify will then stop being backed up, stop being optimized, and all relevant files will be safely and cleanly removed from your system. The database itself will remain untouched.
The script will ask you what database you want to stop backing up and optimizing. The database you specify will then stop being backed up, stop being optimized, and all relevant files will be safely and cleanly removed from your system. The database itself, as well as the backup files that have already been generated, will remain untouched.
22 changes: 9 additions & 13 deletions centos-install.sh → enterprise-linux-install.sh
@@ -69,25 +69,25 @@ touch /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-"$dbnam
cat << EOF > /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-"$dbname".sh
#!/bin/bash
# Let's perform the backup and log to the monthly log file if the backup is successful.
/usr/pgsql-9.5/bin/pg_dump --host localhost --username postgres $dbname --blobs --file $backupDirectory/${dbname}_\$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password && \\
/usr/pgsql-13/bin/pg_dump --host localhost --username postgres $dbname --blobs --file $backupDirectory/${dbname}_\$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password && \\
echo "${dbname} was backed up at \$(date "+%Y_%m_%d_%H_%M") into \"${backupDirectory}\"." >> /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/logs/logs-\$(date "+%Y_%m").log
EOF

# To make sure that this backup script will run without a password, we need to add a .pgpass file to ~ if it doesn't already exist:
if [ ! -f $HOME/.pgpass ]; then
touch $HOME/.pgpass
echo "localhost:5432:*:postgres:DaVinci" > $HOME/.pgpass
if [ ! -f /root/.pgpass ]; then
touch /root/.pgpass
echo "localhost:5432:*:postgres:DaVinci" > /root/.pgpass
# We also need to make sure that that .pgpass file has the correct permissions of 0600:
chmod 0600 $HOME/.pgpass
chmod 0600 /root/.pgpass
fi

# Let's move onto the "optimize" script:
touch /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/optimize/optimize-"$dbname".sh
cat << EOF > /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/optimize/optimize-"$dbname".sh
#!/bin/bash
# Let's optimize the database and log to the monthly log file if the optimization is successful.
/usr/pgsql-9.5/bin/reindexdb --host localhost --username postgres $dbname --no-password --echo && \\
/usr/pgsql-9.5/bin/vacuumdb --analyze --host localhost --username postgres $dbname --verbose --no-password && \\
/usr/pgsql-13/bin/reindexdb --host localhost --username postgres $dbname --no-password --echo && \\
/usr/pgsql-13/bin/vacuumdb --analyze --host localhost --username postgres $dbname --verbose --no-password && \\
echo "${dbname} was optimized at \$(date "+%Y_%m_%d_%H_%M")." >> /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/logs/logs-\$(date "+%Y_%m").log
EOF

@@ -159,12 +159,8 @@ chmod 755 /etc/systemd/system/optimize-"$dbname".timer
# All we need to do is enable and start the timers.

systemctl daemon-reload
systemctl enable backup-"$dbname".timer
systemctl enable optimize-"$dbname".timer
systemctl start backup-"$dbname".timer
systemctl start optimize-"$dbname".timer

# By the way, CentOS 7.4 is still shipping with systemd 219. systemd 220 introduced the "--now" flag, so once CentOS actually ships systemd 220 or later, this code should be revised with single a "systemctl --now enable" command, instead of separate "enable" and "start" commands.
systemctl enable --now backup-"$dbname".timer
systemctl enable --now optimize-"$dbname".timer

echo "Congratulations, $dbname will be backed up every "$backupFrequency" and optimized every "$optimizeFrequency"."
echo "You can check to make sure that everything is being backed up and optimized properly by periodically looking at the log files in: /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/logs"
File renamed without changes.
