Merged in documentation (pull request #15)
Documentation
brodavi authored and Sam Faynzilberg committed Feb 8, 2021
2 parents 21c666b + 126d755 commit 8522af7
Showing 6 changed files with 187 additions and 55 deletions.
66 changes: 52 additions & 14 deletions README.md
@@ -95,19 +95,18 @@ GOOGLE_CLIENT_SECRET=somegoogleclientsecret
JWT_SECRET_KEY=somejwtsecretkey
```

## Getting started with Docker (to be updated when returning to github!!!!)
## Getting started with Docker

If you have Docker and docker-compose installed, you should be able to get started fairly quickly by following these steps:

1. ```$ cp docker-compose.yml.local.example docker-compose.yml```
2. ```$ docker-compose build``` to build the docker containers
3. and then ```$ docker-compose up``` to run the project
`$ cp docker-compose.yml.local.example docker-compose.yml`
`$ docker-compose build` to build the docker containers

## Getting started without Docker (to be updated when returning to github!!!!)
## Getting started without Docker

1. You will need [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git), [Python 3](https://docs.python.org/3/using/index.html) (>= 3.8) installed. You will need [Postgres](https://www.postgresql.org/) and [Redis](https://redis.io/) running.
You will need [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Python 3](https://docs.python.org/3/using/index.html) (>= 3.8) installed, and you will need [Postgres](https://www.postgresql.org/) and [Redis](https://redis.io/) running.

2. Python 3.8
Python 3.8
```sh
$ pipenv shell
# Or, assuming you have multiple versions installed, use the following
@@ -116,27 +115,66 @@ $ pipenv --python /Users/sam/.pyenv/versions/3.8.6/bin/python shell
$ pip install -r requirements.txt
```

3. Database Setup
You'll need to have postgres running. You'll want a valid connection string contained in `api/.env` for `DATABASE_URL`. Using `pipenv shell` run the following to apply existing migrations:
## Database Creation
You will need to have Postgres running, and you will need the `psql` client program.
```sh
$ psql -h 0.0.0.0 -p 5432 -U postgres
postgres=# CREATE DATABASE drawdown;
```

## Running the project with Docker
`$ docker-compose up` to run the project

## Running the project without Docker

### Without Docker, you will need to set up the database

You will need to have Postgres running, and `api/.env` must contain a valid connection string for `DATABASE_URL`. From within `pipenv shell`, run the following to apply existing migrations:
```sh
$ alembic upgrade head
```
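
For reference, a typical Postgres connection string in `api/.env` might look like the sketch below. The username, password, and host are placeholders (assumptions about a local setup) and should be replaced with your own; the `drawdown` database is the one created above.

```sh
# api/.env -- example only; swap in your own credentials and host
DATABASE_URL=postgresql://postgres:postgres@0.0.0.0:5432/drawdown
```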

4. Schema Updates
And finally, to run the API:
```sh
$ uvicorn api.service:app --reload
```

## Schema Updates
When changing models in `api/db/models.py`, run the following to create a migration:
```sh
$ alembic revision -m "add provider column" --autogenerate
```

Apply changes
Note: if you are not using docker-compose, you will need to manually run:

```sh
$ alembic upgrade head
```

5. To run the API
```sh
$ uvicorn api.service:app --reload
```
### Initializing the data

To create the default workbooks, use the `GET /initialize` endpoint. This will generate a variety of data, including the three canonical Drawdown workbooks. It will also load some CSVs into the database for easy retrieval and provide the data for the `/resource/{path}` endpoints.

To improve performance of the app, it is recommended that you first run the `GET /calculate` endpoint for the three canonical workbooks, as this will cache results for all of the technologies in those workbooks. Any variation updates, when calculated, will take advantage of this initial cache as much as possible.
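
As a rough sketch, assuming the API is running locally on uvicorn's default port 8000, the calls look like the following. The exact `/calculate` parameters depend on the workbook, so only the path prefix is shown here with a placeholder.

```sh
# Load the default scenarios, references, VMAs, and workbooks into the database
curl http://localhost:8000/initialize

# Warm the cache by calculating each of the three canonical workbooks
# (the trailing "..." is a placeholder for the real workbook parameters)
curl "http://localhost:8000/calculate/..."
```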

### Some gotchas

When updating variation values, you may find you need to send the value like this:

```
"some_solution_variable": 1234.56
```

And other times, you may find you need to update the value like this:

```
"some_different_variable": {
"value": 1234.56,
"statistic": "mean"
}
```

This matches the existing .json files extracted from the original Excel files. At some point we may want to normalize all values to the second form, but for now keep in mind that you will need to know which format to use for each variable. You can query the `resource/scenario/{id}` endpoint to determine which format to send.
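
For example, a hypothetical fork of variation `1` that sets one variable in each format might look like the sketch below (again assuming the API on port 8000). The two variable names are the illustrative placeholders from above, not real solution variables; look up the real names and formats via `resource/scenario/{id}`.

```sh
# Hypothetical payload showing both value formats side by side
curl -X POST http://localhost:8000/variation/fork/1 \
  -H "Content-Type: application/json" \
  -d '{
        "scenario_vars": {
          "some_solution_variable": 1234.56,
          "some_different_variable": {"value": 1234.56, "statistic": "mean"}
        }
      }'
```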

---

19 changes: 15 additions & 4 deletions api/routers/routes/account.py
@@ -20,23 +20,34 @@ async def for_dev_only_will_remove2(code: str, db: Session = Depends(get_db)):
body = AuthorizationResponse(code=code, state=0)
return await verify_authorization(body, default_provider, db)

@router.get('/login')
@router.get('/login',
summary="Get login url from default provider"
)
def get_login_url_default() -> Url:
return get_login_url(default_provider)

@router.get('/login/{provider}')
@router.get('/login/{provider}',
summary="Get login url from given provider"
)
def get_login_url(provider: str) -> Url:
importname = 'api.routers.providers.' + provider
provider_module = importlib.import_module(importname)
return provider_module.login_url()

@router.post('/authorize/{provider}')
@router.post('/authorize/{provider}',
summary="Get jwt for a given provider",
description="""
The client must provide the auth code from the oauth provider.
"""
)
async def verify_authorization(body: AuthorizationResponse, provider: str, db: Session = Depends(get_db)) -> Token:
importname = 'api.routers.providers.' + provider
provider_module = importlib.import_module(importname)
return await provider_module.exchange_code(body, db)

@router.post("/refresh-token/{provider}")
@router.post("/refresh-token/{provider}",
summary="Creates a new jwt using the refresh oauth api."
)
async def refresh_token(provider: str, refresh_token: str = Depends(get_refresh_token_from_header), db: Session = Depends(get_db)):
importname = 'api.routers.providers.' + provider
provider_module = importlib.import_module(importname)
22 changes: 16 additions & 6 deletions api/routers/routes/projection.py
@@ -12,7 +12,9 @@
router = APIRouter()
default_provider = settings.default_provider

@router.get("/projection/technology/{technology_hash}", response_model=schemas.Calculation)
@router.get("/projection/technology/{technology_hash}", response_model=schemas.Calculation,
summary="Return the result of a calculation for the given technology hash"
)
async def get_tech_result(
technology_hash: str,
cache: aioredis.Redis=Depends(fastapi_plugins.depends_redis)):
@@ -21,7 +23,9 @@ async def get_tech_result(
except:
raise HTTPException(status_code=400, detail=f"Cached results not found: GET /calculate/... to fill cache and get new projection url paths")

@router.get("/projection/diffs/{technology_hash}", response_model=schemas.CalculationDiffs)
@router.get("/projection/diffs/{technology_hash}", response_model=schemas.CalculationDiffs,
summary="Return the diff of a calculation for the given technology hash and the previous calculation."
)
async def get_delta(
technology_hash: str,
cache: aioredis.Redis=Depends(fastapi_plugins.depends_redis)):
@@ -30,7 +34,9 @@ async def get_delta(
except:
raise HTTPException(status_code=400, detail=f"Cached results not found: GET /calculate/... to fill cache and get new projection url paths")

@router.get("/projection/calculation/{key}", response_model=schemas.CalculationResults)
@router.get("/projection/calculation/{key}", response_model=schemas.CalculationResults,
summary="Returns a previously run result of the `GET /calculate` endpoint."
)
async def get_projection_run(
key: str,
cache: aioredis.Redis=Depends(fastapi_plugins.depends_redis)):
@@ -39,7 +45,9 @@ async def get_projection_run(
except:
raise HTTPException(status_code=400, detail=f"Cached results not found: GET /calculate/... to fill cache and get new projection url paths")

@router.get("/projection/summary/{key}")
@router.get("/projection/summary/{key}",
summary="Returns the co2_mmt_reduced for a previous run of the `GET /calculate` endpoint for all technologies."
)
async def get_projection_summary(
key: str,
cache: aioredis.Redis=Depends(fastapi_plugins.depends_redis)):
@@ -58,9 +66,11 @@ async def get_projection_summary(

return summary

@router.get("/technology/meta_info/{technology}")
@router.get("/technology/meta_info/{technology}",
summary="Returns the metadata for technology: ad_data_sources, tam_pds_data_sources, pds_ca_data_sources, ref_ca_data_sources."
)
async def technology_meta_info(
technology: str,
technology: str,
db: Session = Depends(get_db),
cache: aioredis.Redis=Depends(fastapi_plugins.depends_redis)):

49 changes: 36 additions & 13 deletions api/routers/routes/resource.py
@@ -13,7 +13,7 @@
from api.queries.resource_queries import (
get_entity,
get_entities_by_name,
save_entity,
save_entity,
all_entities,
all_entity_paths,
clone_variation,
@@ -25,8 +25,8 @@
save_workbook
)
from api.transform import (
transform,
rehydrate_legacy_json,
transform,
rehydrate_legacy_json,
populate,
convert_vmas_to_binary
)
@@ -58,31 +58,47 @@ class EntityName(str, Enum):
ca_ref = "ca_ref"


@router.get('/resource/vma/info/{technology}')
@router.get('/resource/vma/info/{technology}',
summary="Get the VMA info for a given technology",
description="VMA info exists when VMA sources cannot be viewed. Note that there may not be existing VMA info for every technology."
)
async def get_vma_info(technology: str, db: Session = Depends(get_db)):
return get_entities_by_name(db, f'solution/{technology}/VMA_info.csv', models.VMA)

@router.get('/resource/vma/all/{technology}')
@router.get('/resource/vma/all/{technology}',
summary="Get all the VMA data for a given technology",
description="Returns all VMA data for a technology"
)
async def get_vma_all(technology: str, db: Session = Depends(get_db)):
return get_entities_by_name(db, f'solution/{technology}/%.csv', models.VMA)

@router.get('/resource/{entity}/{id}', response_model=schemas.ResourceOut)
@router.get('/resource/{entity}/{id}', response_model=schemas.ResourceOut,
summary="Get resource entity by id"
)
async def get_by_id(entity: EntityName, id: int, db: Session = Depends(get_db)):
return get_entity(db, id, entity_mapping[entity])

@router.get('/resource/{entity}', response_model=List[schemas.ResourceOut])
@router.get('/resource/{entity}', response_model=List[schemas.ResourceOut],
summary="Get resource entity by name"
)
async def get_by_name(entity: EntityName, name: str, db: Session = Depends(get_db)):
return get_entities_by_name(db, name, entity_mapping[entity])

@router.get('/resource/{entity}s/full', response_model=List[schemas.ResourceOut])
@router.get('/resource/{entity}s/full', response_model=List[schemas.ResourceOut],
summary="Get the full resource (all data) of an entity by name"
)
async def get_all(entity: EntityName, db: Session = Depends(get_db)):
return all_entities(db, entity_mapping[entity])

@router.get('/resource/{entity}s/paths', response_model=List[str])
@router.get('/resource/{entity}s/paths', response_model=List[str],
summary="Get the resource paths (no data) of an entity by name"
)
async def get_all_paths(entity: EntityName, db: Session = Depends(get_db)):
return all_entity_paths(db, entity, entity_mapping[entity])

@router.post('/variation/fork/{id}', response_model=schemas.VariationOut)
@router.post('/variation/fork/{id}', response_model=schemas.VariationOut,
summary="Fork a variation with given id"
)
async def fork_variation(id: int, patch: schemas.VariationPatch, db: Session = Depends(get_db)):
try:
cloned_variation = clone_variation(db, id)
@@ -93,7 +109,7 @@ async def fork_variation(id: int, patch: schemas.VariationPatch, db: Session = D
cloned_variation.data['scenario_parent_path'] = patch.scenario_parent_path
if patch.scenario_parent_path is not None:
cloned_variation.data['reference_parent_path'] = patch.reference_parent_path
if patch.scenario_vars is not None:
if patch.scenario_vars is not None:
cloned_variation.data['scenario_vars'] = patch.scenario_vars
if patch.reference_vars is not None:
cloned_variation.data['reference_vars'] = patch.reference_vars
@@ -102,7 +118,10 @@ async def fork_variation(id: int, patch: schemas.VariationPatch, db: Session = D

return save_variation(db, cloned_variation)

@router.post('/variation', response_model=schemas.VariationOut)
@router.post('/variation', response_model=schemas.VariationOut,
summary="Create a new variation",
description="Note: the variation is not automatically added to the workbook. Use `POST /workbook/{id}/variation` to add a variation to a workbook."
)
async def post_variation(variation: schemas.VariationIn, db: Session = Depends(get_db)):
new_variation = models.Variation(
name = variation.name,
@@ -115,7 +134,10 @@ async def post_variation(variation: schemas.VariationIn, db: Session = Depends(g
new_variation.data['vma_sources'] = variation.vma_sources
return save_variation(db, new_variation)

@router.get("/initialize")
@router.get("/initialize",
summary="Initialize the database with data",
description="Puts default scenario, reference, VMA, etd data into the database. Creates corresponding workbook for each scenario. In production, initialization is only allowed once."
)
async def initialize(db: Session = Depends(get_db)):
if db.query(models.VMA).count() > 0:
if settings.is_production:
@@ -179,3 +201,4 @@ async def initialize(db: Session = Depends(get_db)):
)
db.add(vma_csv)
db.commit()

19 changes: 14 additions & 5 deletions api/routers/routes/vma.py
@@ -20,7 +20,9 @@
router = APIRouter()
default_provider = settings.default_provider

@router.get("/vma/mappings/{technology}")
@router.get("/vma/mappings/{technology}",
summary="Get VMA mappings for a given technology"
)
async def get_vma_mappings(technology: str, db: Session = Depends(get_db)):
paths = varProjectionNamesPaths + varRefNamesPaths
importname = 'solution.' + technology
@@ -42,11 +44,15 @@ async def get_vma_mappings(technology: str, db: Session = Depends(get_db)):
})
return result

@router.get("/vma_csv/{id}")
@router.get("/vma_csv/{id}",
summary="Retrieve a VMA CSV with the given id"
)
async def get_vma_csv(id: str, db: Session = Depends(get_db)):
return db.query(models.VMA_CSV).get(id)

@router.post("/vma_csv")
@router.post("/vma_csv",
summary="Upload a custom VMA CSV"
)
async def post_vma_csv(
name: str = Form(...),
technology: str = Form(...),
@@ -66,13 +72,16 @@ async def post_vma_csv(
db.refresh(vma_csv)
return vma_csv

@router.get("/vma/calculation")
@router.get("/vma/calculation",
summary="Get VMA calculation",
description="For a given variable, calculate the VMA values from the corresponding CSVs. This will return low, mean, and high values for the variable, as well as the source name and path"
)
async def calculate_vma_groupings(
variable: str,
stat_correction: bool,
use_weight: bool,
db: Session = Depends(get_db)):

vma_csvs = db.query(models.VMA_CSV).filter(
models.VMA_CSV.variable==variable
).all()