
DATA PATCH - DMS BACKEND PROJECT

A simple boilerplate of an API server for CRUD operations, powered by the FastAPI framework with a PostgreSQL database.

This work in progress is mainly the result of studying tutorials from the official documentation, the amazing fullstack FastAPI-PostgreSQL boilerplate, and this tutorial playlist by MK Fast on YouTube.

A frontend interface is currently in development here.

Features

The current goal is to make it work with the following generic features, so it can be adapted for other purposes later (a minimal API sketch follows this list):

  • PostgreSQL database for storing large volumes of data, keeping track of relations, and making queries in a simple way (easier than MongoDB at least);
  • OAuth2 authentication for security and user management;
  • SocketIO endpoints for collaborative work;
  • Email notifications for password changes;
  • Static files server for avatars and the like;
  • Testing with Pytest, for development;
  • CORS implementation, to serve as a SAAS API server;
  • Dockerisation, for dev purposes...
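
As a rough idea of how these pieces fit together, here is a minimal, hypothetical FastAPI application with CORS enabled. Module and endpoint names are illustrative only and are not taken from this repository's sql_app package.

```python
# Minimal sketch only - not the project's actual sql_app.main module.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(docs_url="/api/docs")  # interactive docs served under /api/docs

# CORS: allow a hypothetical frontend origin to call the API.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # adjust to your frontend URL
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/api/ping")
def ping():
    # Simple health-check endpoint.
    return {"status": "ok"}
```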

Datamodel & documentation

Find out more about the endgame here.


INSTALLATION

Dependencies - PostgreSQL

  • Ubuntu
sudo apt-get install postgresql postgresql-contrib
sudo -u postgres psql -c "SELECT version();"
  • macOS

cf: https://gist.github.com/ibraheem4/ce5ccd3e4d7a65589ce84f2a3b7c23a3
cf: https://www.codementor.io/@engineerapart/getting-started-with-postgresql-on-mac-osx-are8jcopb
cf: https://flaviocopes.com/postgres-how-to-install/

brew doctor
brew update
brew install postgres
brew services start postgresql
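
Before wiring up the app, you can optionally confirm the server is reachable with psycopg2 (already in the dependency list below). This is a generic sketch: the credentials are placeholders, not values used by this project.

```python
# Hypothetical connection check - replace the placeholder credentials with your own.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="postgres",        # default maintenance database
    user="postgres",          # default superuser role
    password="your_password",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```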

Dependencies - Python

We use pipenv as our package manager:

pip install --user --upgrade pipenv

Install dependencies:

pipenv install --dev
# or
pipenv install --system --dev

or

pipenv install --three python-dotenv fastapi uvicorn sqlalchemy sqlalchemy-utils pydantic[email] psycopg2 alembic python-multipart python-jose[cryptography] passlib[bcrypt] aiofiles python-socketio requests inflect pytest

To print requirements:

pipenv shell
pipenv run pip freeze

To read more on why pipenv, or here in French...


Environment variables

The .env file

The environment variables must be stored in the .env file at the root of the repo. It contains confidential values such as: the PostgreSQL URL with password, email SMTP settings, JWT secret keys...

A template - example.env - is present at the root. You can copy-paste it and customize it with your own values.

# from repo root
cp example.env .env
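
As a sketch of how such values can be consumed at startup, python-dotenv (listed in the dependencies) loads the file into the environment. The variable names below are illustrative; the authoritative names are the ones in example.env.

```python
# Illustrative only - the actual variable names live in example.env.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file at the repo root into the environment

DATABASE_URL = os.getenv("DATABASE_URL")      # hypothetical name for the PostgreSQL URL
JWT_SECRET_KEY = os.getenv("JWT_SECRET_KEY")  # used to sign access tokens

if not DATABASE_URL or not JWT_SECRET_KEY:
    raise RuntimeError("Missing required settings in .env")
```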

Create secure random keys

To generate random secret keys you can use the openssl command line or, if you feel lazy, a password generator website.

openssl rand -hex 32

... and copy-paste the key as JWT_SECRET_KEY in your .env file.
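
If openssl is not available, Python's standard secrets module produces an equivalent key; this is just an alternative, not the project's documented method.

```python
# Equivalent of `openssl rand -hex 32` using only the standard library.
import secrets

print(secrets.token_hex(32))  # 64 hex characters, suitable as JWT_SECRET_KEY
```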


RUNNING APP

Run the app

Once you have cloned the repo and installed the dependencies (PostgreSQL server, and Python packages with pipenv), you can run the app with these commands:

pipenv shell
alembic upgrade head
pipenv run uvicorn sql_app.main:app --reload
# or
uvicorn sql_app.main:app --reload

Then open the following URL in your browser: http://localhost:8000/api/docs

You should see something like this:

(screenshot: screenshot-api-1)
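
Beyond the browser, a quick smoke test of the running server can be scripted with the requests package (already in the dependency list); the /api/docs path is the same one as above.

```python
# Quick smoke test against the locally running server.
import requests

resp = requests.get("http://localhost:8000/api/docs")
print(resp.status_code)  # expect 200 once uvicorn is up
```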

Pre-run the app (optional)

If you want to pre-populate your database with some default values (superuser, licences, ...), you can run the following:

pipenv shell

# to check that the database is up
python ./sql_app/backend_pre_start.py

# to init the database with values
alembic upgrade head
python ./sql_app/initial_data.py
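
As a rough idea of what such an initialisation step typically does (this is a generic sketch, not the contents of sql_app/initial_data.py), it creates a first superuser if none exists:

```python
# Generic seeding sketch - model, session and field names are illustrative only.
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def init_db(session, user_model, email="admin@example.com", password="changeme"):
    """Insert a superuser row unless one already exists."""
    existing = session.query(user_model).filter_by(email=email).first()
    if existing is None:
        session.add(user_model(
            email=email,
            hashed_password=pwd_context.hash(password),
            is_superuser=True,
        ))
        session.commit()
```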

Migrations

We use Alembic for migrations.

alembic revision --autogenerate -m "<Migration message>"
alembic upgrade head

cf: https://alexvanzyl.com/posts/2020-05-24-fastapi-simple-application-structure-from-scratch-part-2/
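
As a concrete example of the autogenerate workflow, this is roughly what a generated revision looks like after a new column is added to a model; the table, column and revision identifiers here are made up for illustration.

```python
"""Sketch of an autogenerated Alembic revision - identifiers are illustrative."""
from alembic import op
import sqlalchemy as sa

revision = "abc123def456"       # hypothetical revision id
down_revision = "000000000000"  # hypothetical parent revision
branch_labels = None
depends_on = None

def upgrade():
    # Emitted by --autogenerate after the column appears on the SQLAlchemy model.
    op.add_column("users", sa.Column("avatar_url", sa.String(), nullable=True))

def downgrade():
    op.drop_column("users", "avatar_url")
```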


Tests

We use Pytest for testing (see also FastAPI pytest tutorial).

pytest
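
A typical FastAPI test drives the app through its TestClient; the endpoint below is hypothetical and only shows the shape of such a test, not the repository's actual test suite.

```python
# Hedged sketch of a FastAPI test - the /api/ping endpoint is illustrative.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/api/ping")
def ping():
    return {"status": "ok"}

client = TestClient(app)

def test_ping():
    response = client.get("/api/ping")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```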

Endgame

Roadmap

To know more about this project's endgame you can check the roadmap.

Versions log

  • current version: v.0.1 beta

  • V.0 (boilerplate):

    • 2021/03/28 - v.0.1 beta: basic setups / unfinished setups

Datamodel (goal)

The current goal is to be able to agnostically manage datasets in the same way Baserow or Airtable can. Such a datamodel should include the following concepts:

.
├── <user related>
│   ├── users (usual info such as email, password, ... + groups + owned or shared datasets)
│   ├── groups (groups of users + auth levels attributions)
│   ├── comments (similar to a notification)
│   └── invitations
│
├── <data related>
│   ├── workspaces (collection of datasets)
│   ├── datasets (dataset's metadata, collection of tables)
│   ├── tables (table's metadata)
│   ├── table_data (exploded dataset data, perhaps as many SQL tables as there are created datasets)
│   ├── fields (field descriptions that could be used in several datasets)
│   └── schemas (collections of fields)

The following illustration gives an idea of the endgame datamodel we are aiming for.

(illustration: datamodel)
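
To make the hierarchy above more concrete, here is a hedged SQLAlchemy sketch of the workspace → dataset → table chain. Table and column names are taken from the tree above, but the actual models may differ.

```python
# Illustrative models only - the real schema may differ from this sketch.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Workspace(Base):
    __tablename__ = "workspaces"          # collection of datasets
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    datasets = relationship("Dataset", back_populates="workspace")

class Dataset(Base):
    __tablename__ = "datasets"            # dataset metadata, collection of tables
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    workspace_id = Column(Integer, ForeignKey("workspaces.id"))
    workspace = relationship("Workspace", back_populates="datasets")
    tables = relationship("Table", back_populates="dataset")

class Table(Base):
    __tablename__ = "tables"              # table metadata
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    dataset_id = Column(Integer, ForeignKey("datasets.id"))
    dataset = relationship("Dataset", back_populates="tables")
```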