Maintenance

This document describes maintenance procedures for the web server.

Table of Contents

  1. Container Management
  2. Database Management
  3. Migrating Database Content
    1. Amazon RDS
    2. Docker Volumes
  4. SSL Certification

Container Management

This installation is configured to work with Portainer, a web-based container management GUI. It's like Docker Desktop, but in a web browser.

The service is launched automatically alongside the rest of the web server containers. To access it, navigate to https://<Domain Name>:9443
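
To confirm that Portainer is actually running, you can list its container on the host. This is a hedged example; the name filter assumes the container's name contains "portainer":

    # List the Portainer container and the port it publishes
    sudo docker ps --filter "name=portainer"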

Database Management

The database can be accessed with MySQL Workbench (or another MySQL client) over SSH using the following settings:

  • Connection Method: Standard TCP/IP over SSH
  • SSH Hostname: The hostname of the EC2 instance (e.g. ec2...amazonaws.com)
  • SSH Username: The username of the account on the EC2 instance (e.g. ubuntu)
  • SSH Key File: The path to the key file needed to log into the EC2 instance (e.g. /path/to/dev/key.pem)
  • MySQL Hostname: The hostname of the RDS instance (e.g. q2a...rds.amazonaws.com)
  • MySQL Server port: 3306
  • Username: admin
  • Password: Whatever you configured as the RDS password
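
If you prefer the command line, the same settings can be reproduced with an SSH tunnel and the mysql client. The sketch below is only an illustration; the hostnames, key path, and choice of local port 3307 are placeholders to substitute:

    # Forward local port 3307 to the RDS instance through the EC2 host
    ssh -i </path/to/dev/key.pem> -N -L 3307:<RDS hostname>:3306 ubuntu@<EC2 hostname>

    # In a second terminal, connect through the tunnel
    mysql -h 127.0.0.1 -P 3307 -u admin -p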

More information can be found here.

Migrating Database Content

Note: Some site-specific information, such as UI changes, custom pages, etc. is stored in the database. Thus, when migrating data, be sure to back up any of these changes to ensure they will not be lost forever!

The process for migrating data to the server depends on which database backend is in use.

Amazon RDS

The site is most likely configured to use an Amazon RDS database. In that case, the following process will allow you to import new data:

  1. Getting a dump of the desired data from your local machine (a command-line alternative using mysqldump is sketched after this list).

    1. View your database from MySQL Workbench.
    2. Go to Server > Data Export
    3. Select the schema with the desired tables (if you don’t want to overwrite the tables storing site configuration, do not select qa_options and qa_pages).
    4. Choose Export to Self-Contained File
    5. Start Export
  2. Moving the dump to the EC2 instance.

    1. Before you do this, make sure you can SSH (secure shell) into the EC2 instance.
    2. Use scp to send the dump file to the EC2 instance
      • scp -i <your .pem key> <local dump file> <user>@<Elastic IP>:<destination of dump file>
      • Example: scp -i q2a_intern_key Dump20220701.sql ubuntu@<Elastic IP>:~/dumps/Dump20220701.sql
  3. Importing the data to the RDS instance.

    1. Now that the dump is on the EC2 instance, we can import the data into RDS.
    2. Connect to the EC2 instance via ssh.
      • ssh -i <your .pem key> ubuntu@<Elastic IP>
    3. Make sure mysql is installed
      • sudo apt install -y mysql-client-core-8.0
    4. (Optional) Verify your connection to the database works, for example:
      • mysql -h q2a-db-test.cmnnis04whwr.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
      • You can run \q to exit the MySQL connection
    5. Import the file into RDS, for example:
      • mysql -h q2a-db-test.cmnnis04whwr.us-east-1.rds.amazonaws.com -P 3306 -u admin -p q2adb < Dump20220701.sql
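
As a command-line alternative to the Workbench export in step 1, the dump can also be produced with mysqldump. This is a sketch only; the local credentials (-u root) and the schema name q2adb are assumptions borrowed from the import command above:

    # Dump the schema from the local database, skipping the site-configuration tables
    mysqldump -u root -p \
        --ignore-table=q2adb.qa_options \
        --ignore-table=q2adb.qa_pages \
        q2adb > Dump20220701.sql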

Docker Volumes

In the event that the site is using a MySQL Docker container, the following steps will allow you to import new data:

  1. On the source database host, archive the _data folder located in /var/lib/docker/volumes/<db container volume>/ (see the shell sketch after this list).
    1. <db container volume> is either app_q2a_db_volume or assure_support_site_q2a_db_volume
  2. Ensure the destination machine has enough storage capacity for the new database. If not, increase its disk space.
  3. Transfer the archive to the destination machine (e.g. with scp, or gdown if downloading from Google Drive).
  4. Stop all running containers.
  5. Create a backup of /var/lib/docker/volumes/app_q2a_db_volume/_data on the destination machine.
    1. sudo mv /var/lib/docker/volumes/app_q2a_db_volume/_data ./_data_db_backup
  6. Extract the archive so that the new _data folder is located in place of the one you just backed up.
  7. Restart the containers and verify that all data was transferred successfully.
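
The following sketch shows steps 1, 2, 4, 5, and 6 as shell commands. The volume name app_q2a_db_volume and the archive name are examples; adjust them to match your setup:

    # On the source host: archive the database volume's _data folder
    sudo tar -czf q2a_db_data.tar.gz -C /var/lib/docker/volumes/app_q2a_db_volume _data

    # On the destination host: check free disk space, then stop all running containers
    df -h
    sudo docker stop $(sudo docker ps -q)

    # Back up the existing _data folder and extract the archive in its place
    sudo mv /var/lib/docker/volumes/app_q2a_db_volume/_data ./_data_db_backup
    sudo tar -xzf q2a_db_data.tar.gz -C /var/lib/docker/volumes/app_q2a_db_volume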

SSL Certification

Please note that this requires a custom domain name to be set up. Before attempting this, confirm that HTTPS traffic is not yet enabled by navigating to https://<Domain Name> in your web browser. Do not attempt the following steps if the site is already certified.

VERY IMPORTANT NOTE: If you are developing, be sure to include the --dry-run flag when running certbot, otherwise you will be rate limited! For production, simply omit this flag.

  1. Ensure that the web server is live (EC2 instance & Docker containers).

  2. Connect to the EC2 instance (ssh -i </path/to/key.pem> <user>@<domain name>)

  3. Once connected, run the following two commands:

    # Install certbot (if needed)
    sudo apt-get install -y certbot

    # Run certbot interactively
    sudo certbot --dry-run --webroot
  4. Follow the prompts

  5. Alternatively, certbot can be run in "non-interactive" mode:

    # certonly just generates the certificate files; --agree-tos automatically
    # agrees to the ToS; --expand appends new domains to an existing certificate;
    # -m is the email to contact about renewal; -w is the root dir of the website;
    # each -d flag names a domain to certify.
    sudo certbot certonly --dry-run \
         --non-interactive \
         --agree-tos \
         --expand \
         -m $ADMIN_EMAIL \
         --webroot -w $WEBROOT \
         -d $DOMAIN_NAME \
         -d www.$DOMAIN_NAME
  6. If successful, you will see a message stating that the site is now certified

  7. Navigate to https://<Domain Name> and verify that HTTPS traffic is allowed
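
Note that certificates issued through certbot (Let's Encrypt) expire after roughly 90 days. The certbot package normally sets up automatic renewal; you can test the renewal configuration without hitting production rate limits:

    # Simulate renewal of all installed certificates
    sudo certbot renew --dry-run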

More information can be found here.