
Commit

Merge pull request #47 from CSSE6400/dev
Dev
Mohamad11Dab authored May 31, 2024
2 parents 7fe76e3 + 7cf75e7 commit b65151f
Showing 61 changed files with 2,378 additions and 1,132 deletions.
21 changes: 18 additions & 3 deletions .github/workflows/workflow.yml
@@ -21,12 +21,21 @@ jobs:
POSTGRES_PASSWORD: verySecretPassword
ports:
- 5432:5432
# Options to keep the container running until the end of the job
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis:latest
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v2
@@ -41,10 +50,16 @@ jobs:
python -m pip install --upgrade pip
pip install -r application/requirements.txt
- name: Simulate Matching Worker
run: |
export CELERY_BROKER_URL=redis://localhost:6379
export SQLALCHEMY_DATABASE_URI=postgresql+psycopg://administrator:verySecretPassword@localhost:5432/ride
cd application/app
celery --app tasks.celery_app worker --loglevel=info -Q matching.fifo &
- name: Run tests
run: |
export SQLALCHEMY_DATABASE_URI=postgresql+psycopg://administrator:verySecretPassword@localhost:5432/ride
export CELERY_BROKER_URL=redis://localhost:6379
cd application/app/tests
python -m unittest discover
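The two exports above must match whatever the application reads at start-up. As a minimal sketch of that wiring (the helper name and fall-back defaults here are assumptions for illustration, not the project's actual code):

```python
import os

# Sketch of how the app's config might pick up the same environment
# variables the workflow exports. The defaults mirror the workflow's
# local service containers and are assumptions, not repo code.
def load_runtime_config():
    return {
        "SQLALCHEMY_DATABASE_URI": os.environ.get(
            "SQLALCHEMY_DATABASE_URI",
            "postgresql+psycopg://administrator:verySecretPassword@localhost:5432/ride",
        ),
        "CELERY_BROKER_URL": os.environ.get(
            "CELERY_BROKER_URL", "redis://localhost:6379"
        ),
    }
```

Exporting the variables before both the worker step and the test step, as the workflow does, keeps the two processes pointed at the same broker and database.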
2 changes: 2 additions & 0 deletions .gitignore
@@ -275,3 +275,5 @@ application/data/

application/app/frontend/node_modules/
node_modules
aws/
api.txt
104 changes: 104 additions & 0 deletions .terraform.lock.hcl


98 changes: 95 additions & 3 deletions README.md
@@ -1,6 +1,61 @@
# P01-RidingShare

## Current Scripts Available
# Deploying the Application

## Installing the Environment
To deploy the application, both Terraform and the AWS CLI must be installed.

Some installation scripts have been provided to streamline the process for specific OSes.

The following script installs the Terraform CLI using the apt package manager. If you use a package manager other than apt, follow the installation guide link below.
```shell
./install_terraform.sh
```

The following script installs the AWS CLI. It supports both Linux and macOS.
```shell
./install_aws.sh
```

The links for installation documentation can be found below.
- [Terraform Installer](https://developer.hashicorp.com/terraform/install)
- [AWS CLI Installer](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)


## Self-Hosted API Tools
This application makes use of a self-hosted routing engine [(OSRM)](https://project-osrm.org/) and geocoding tool [(Nominatim)](https://nominatim.org/).

This is achieved by running these tools on an AWS EC2 instance controlled by the deployment pipeline.

Further information on this can be found below in the [Self-Hosted Deployment Commands](#Self-Hosted-Deployment-Commands) section.
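As a sketch of how a client might address these services once they are running (the ports come from the docker commands in that section; the localhost hosts here are placeholders for the instance's address, and the helper names are hypothetical):

```python
from urllib.parse import urlencode

# Placeholder hosts; in deployment these would point at the EC2 instance.
OSRM_HOST = "http://localhost:5000"       # osrm-routed is published on 5000
NOMINATIM_HOST = "http://localhost:8080"  # the nominatim container maps 8080

def osrm_route_url(start, end):
    """Build an OSRM driving-route request for two (lon, lat) pairs."""
    coords = f"{start[0]},{start[1]};{end[0]},{end[1]}"
    return f"{OSRM_HOST}/route/v1/driving/{coords}?overview=false"

def nominatim_search_url(query):
    """Build a Nominatim forward-geocoding request returning JSON."""
    return f"{NOMINATIM_HOST}/search?{urlencode({'q': query, 'format': 'json'})}"
```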

## Deploying the Application

### Credentials
To deploy the application, your AWS credentials must be supplied. Running the `install_aws.sh` script sets up the appropriate environment and creates a blank credentials file.
The credentials file must export your credentials as environment variables, as shown below.
```shell
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
```
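A quick way to sanity-check the file before deploying is to confirm all three exports are present. This helper is hypothetical (not part of the repo) and only assumes the `export NAME=value` format shown above:

```python
# Hypothetical checker for the credentials file format shown above.
REQUIRED = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")

def parse_credentials(path):
    """Parse lines of the form `export NAME=value` into a dict."""
    creds = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("export ") and "=" in line:
                name, _, value = line[len("export "):].partition("=")
                creds[name.strip()] = value.strip()
    return creds

def missing_credentials(path):
    """Return the names of any required variables that are absent or empty."""
    creds = parse_credentials(path)
    return [name for name in REQUIRED if not creds.get(name)]
```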

### Deployment
Once the credentials are properly stored, run the deployment script.
```shell
./deploy.sh
```

## Tearing Down the Application
As with deployment, your AWS credentials are required, and a teardown script is provided.

Simply supply your credentials as above, then run the teardown script.
```shell
./teardown.sh
```


## Additional Local Scripts Available

### Deploy in local container
The default option for running the application locally is through Docker Compose. A provided script makes the deployment easy.
@@ -32,5 +87,42 @@ Run the below script from the top directory:

---

## Google Drive Link:
Including Meeting Notes: **https://drive.google.com/drive/folders/1KTdEoMaBiBy9DyV_FiBjfzwiMvqolhYX?usp=drive_link**
## Self-Hosted Deployment Commands

To recreate the full test suite, the self-hosted tools are deployed separately. This ensures they remain partitioned even when the full application is torn down, as their initial creation carries significant performance and time overheads.

1. Provision an EC2 instance on AWS.
Create a new EC2 instance; ours is a `t3.large` with a `60GB` volume.


2. Run the following 5 commands on the new instance.
**Note:** The first command can take a while to partition and transform street data.

```shell
docker run -it --shm-size=4g \
-e PBF_URL=https://download.geofabrik.de/australia-oceania/australia-latest.osm.pbf \
-e REPLICATION_URL=http://download.geofabrik.de/australia-oceania/australia-updates/ \
-e IMPORT_WIKIPEDIA=false \
-e NOMINATIM_PASSWORD=very_secure_password \
-v nominatim-data:/var/lib/postgresql/14/main \
-p 8080:8080 \
--name nominatim \
mediagis/nominatim:4.4
docker run -t -v "${PWD}:/data" ghcr.io/project-osrm/osrm-backend osrm-extract -p /opt/car.lua /data/australia-latest.osm.pbf || echo "osrm-extract failed"
docker run -t -v "${PWD}:/data" ghcr.io/project-osrm/osrm-backend osrm-partition /data/australia-latest.osrm || echo "osrm-partition failed"
docker run -t -v "${PWD}:/data" ghcr.io/project-osrm/osrm-backend osrm-customize /data/australia-latest.osrm || echo "osrm-customize failed"
docker run -t -i --name OSRM -p 5000:5000 -v "${PWD}:/data" ghcr.io/project-osrm/osrm-backend osrm-routed --algorithm mld /data/australia-latest.osrm
```

3. Add the instance ID to the deployment pipeline.
To link the deployment pipeline to your own EC2 instance, simply update the instance ID.
In the `/terraform/hosted_apis.tf` file, update the second line with your instance ID, as below.
```hcl
data "aws_instance" "hosted_apis" {
instance_id = "YOUR_INSTANCE_ID"
}
```
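For scripted setups, that edit can be automated. A sketch (a hypothetical helper, not part of the repo, assuming the file matches the snippet above):

```python
import re

# Hypothetical helper for swapping the instance ID into hosted_apis.tf
# programmatically instead of editing the file by hand.
def set_instance_id(hcl_text, instance_id):
    """Replace the value of the instance_id argument in an HCL snippet."""
    return re.sub(
        r'(instance_id\s*=\s*")[^"]*(")',
        lambda m: m.group(1) + instance_id + m.group(2),
        hcl_text,
    )
```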
4 changes: 3 additions & 1 deletion application/.dockerignore
@@ -1 +1,3 @@
__pycache__/
__pycache__/
frontend/build/
frontend/node_modules/
27 changes: 21 additions & 6 deletions application/Dockerfile
@@ -1,19 +1,34 @@
FROM pypy:3.9-slim

# Installing dependencies for running a python application
# Installing dependencies for running the application
RUN apt-get update && apt-get install -y libpq-dev postgresql-common gcc libgeos++-dev libgeos-3.9.0 libgeos-c1v5 libgeos-dev libgeos-doc
RUN apt-get install -y openssl libcurl4-nss-dev libssl-dev curl

# Install NVM to manage installing npm 20 and Yarn
ENV NVM_DIR "$HOME/.nvm"
RUN mkdir -p ${NVM_DIR}
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
RUN /bin/bash -c "source ~/.bashrc && nvm install 20 && npm install --global yarn"

# Setting the working directory
WORKDIR /app

RUN pypy3 -m pip install --extra-index-url https://antocuni.github.io/pypy-wheels/manylinux2010 numpy

# Install pipenv dependencies
# Install pipenv dependencies (Uses some separate precompiled wheels to avoid compiling)
RUN pypy3 -m pip install --prefer-binary --extra-index-url https://pypy.kmtea.eu/simple pycurl==7.43.0 numpy
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN pypy3 -m pip install -r requirements.txt

# Copy our AWS credentials and construct an entrypoint script to source them
COPY credentials ./
RUN printf '#!/bin/sh\n. ./credentials\nexec "$@"' > entrypoint.sh
RUN chmod +x entrypoint.sh

# Copying our application into the container
COPY app /app

# Running our application
# Build the frontend with yarn
RUN /bin/bash -c "source ~/.bashrc && cd frontend && yarn install && yarn build"

# Running our application + entrypoint script
ENTRYPOINT ["./entrypoint.sh"]
CMD ["gunicorn", "--config", "gunicorn_config.py", "wsgi:app", "--preload"]
32 changes: 23 additions & 9 deletions application/app/app.py
@@ -34,6 +34,7 @@ def create_app(config_overrides=None):

# Create the database tables
with app.app_context():
add_postgis_extension()
db.create_all()
db.session.commit()
db.engine.dispose()
@@ -59,13 +60,26 @@ def serve(path):


def celery_init_app(app: Flask) -> Celery:
class FlaskTask(Task):
def __call__(self, *args: object, **kwargs: object) -> object:
with app.app_context():
return self.run(*args, **kwargs)
class FlaskTask(Task):
def __call__(self, *args: object, **kwargs: object) -> object:
with app.app_context():
return self.run(*args, **kwargs)

celery_app = Celery(app.name, task_cls=FlaskTask)
celery_app.config_from_object(app.config["CELERY"])
celery_app.set_default()
app.extensions["celery"] = celery_app
return celery_app
celery_app = Celery(app.name, task_cls=FlaskTask)
celery_app.config_from_object(app.config["CELERY"])
celery_app.set_default()
app.extensions["celery"] = celery_app
return celery_app


def add_postgis_extension():
from models import db
from sqlalchemy.sql import text
try:
db.session.execute(text("CREATE EXTENSION IF NOT EXISTS postgis;"))
db.session.commit()
except Exception as e:
db.session.rollback()
print("Error adding PostGIS extension:", str(e))
finally:
db.session.close()
5 changes: 5 additions & 0 deletions application/app/frontend/src/App.css
@@ -41,7 +41,12 @@ body {
from {
transform: rotate(0deg);
}

to {
transform: rotate(360deg);
}
}

.leaflet-routing-container {
display: none !important;
}
