Describe the task
We need to get the new pipeline running on the VM. To do this, you'll need to update the `.cagp_env` file appropriately, build the new Docker container, and update the cron job to run `main.py` once a month. We'll also want to create monthly (or possibly quarterly) backups of the database. We'll need to make sure all of our Slack messages are reporting correctly for monitoring purposes, and that the data are being written to the right place in GCP so they're usable by the website. Since we have the data in hypertables now, I don't think we need to be as concerned about backing up the pmtiles, but we should still do our due diligence to make sure we're not deleting things. I've backed up the old Postgres database to GCP in case something happens.
Note: I know this ticket needs to be improved. Please throw a comment on here if you're interested and I'll come back with more context.
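For the Docker build and cron pieces, here's a minimal sketch; the image tag, paths, log location, and schedule below are assumptions, not the actual values on the VM:

```bash
# Build the new image from the repo root (the tag name is an assumption)
docker build -t cagp-pipeline .

# Crontab entry: run main.py at 02:00 on the 1st of each month, loading .cagp_env.
# All paths and the image tag are illustrative; adjust to the VM's setup.
0 2 1 * * docker run --rm --env-file /home/cagp/.cagp_env cagp-pipeline python main.py >> /var/log/cagp_pipeline.log 2>&1
```

Edit the schedule with `crontab -e` on the VM; redirecting stdout/stderr to a log file makes it easier to confirm the monthly run actually happened.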
Acceptance Criteria
- first
- second
- third
Additional context
Here's a shell script you can modify as the basis of your `pg_dump` backups:
```bash
#!/bin/bash
set -uo pipefail  # error on unset variables; a pipeline fails if any command in it fails

# Set variables
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")  # Current timestamp
DB_NAME="vacantlotdb"               # Replace with your database name
DB_USER="postgres"                  # Replace with your database username
DOCKER_CONTAINER="cagp-postgres"    # Replace with your PostgreSQL Docker container name
BACKUP_FILE="vacantlotdb_backup_$TIMESTAMP.sql.gz"
BUCKET="gs://cleanandgreenphl/db_backups"

# Dump the database using Docker and compress the output
echo "Starting database backup..."
if ! docker exec "$DOCKER_CONTAINER" pg_dump -U "$DB_USER" -d "$DB_NAME" | gzip > "/tmp/$BACKUP_FILE"; then
    echo "Database dump failed."
    exit 1
fi

# Upload the backup to GCS
echo "Uploading backup to Google Cloud Storage..."
if gsutil cp "/tmp/$BACKUP_FILE" "$BUCKET"; then
    echo "Backup uploaded successfully."
    # Remove the local backup file to save space
    rm "/tmp/$BACKUP_FILE"
else
    echo "Failed to upload backup to GCS."
    exit 1
fi

echo "Monthly database backup completed successfully."
```
@rmartinsen if you're interested in tackling another issue, this would be a huge help (same as with #1046). I'm traveling this week and don't have bandwidth to make this happen--any chance you're interested?