SEC filings have always been a huge source of information for investors. This project aims to automate the analysis of filings and generate insights in the context of SaaS companies.
A React-Django Based ML Web App
- Git.
- Node & npm (version 12 or greater).
- A fork of the repo.
- A Python 3 environment to install Django and its dependencies.
- PyTorch
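Before setting up, you can verify the prerequisites from your shell (the version flags below are the standard ones for these tools; the fallback messages are only illustrative):

```shell
# Quick environment check before installing anything.
git --version
python3 --version
node --version || echo "Node.js 12+ is required for the frontend"
npm --version  || echo "npm ships with Node.js"
python3 -c "import torch" 2>/dev/null && echo "PyTorch OK" || echo "PyTorch not installed yet"
```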
The following is a high-level overview of relevant files and folders.
backend/
├── dashboard_apis/
│   ├── core/
│   ├── dashboard_apis/
│   ├── dataentry.py
│   ├── db.sqlite3
│   ├── manage.py
│   └── requirements.txt
└── analytics/
    ├── given data/
    └── source code/
        ├── python source code files for edgar scraping/
        ├── python source code files for metrics calculations/
        ├── python source code files for miscellaneous work/
        ├── python source code files for outside sources scraping/
        └── python source code files for text analysis - NLP/
frontend/
├── public/
│   ├── index.html
│   └── ...
├── src/
│   ├── actions/
│   │   └── actions.js
│   ├── Components/
│   │   ├── Global
│   │   └── Widgets
│   ├── constants/
│   ├── fonts/
│   ├── images/
│   ├── Pages/
│   │   ├── BasketList/
│   │   ├── Company/
│   │   ├── Error404/
│   │   ├── Filenew/
│   │   ├── Files/
│   │   ├── IndividualBasket/
│   │   ├── Landing/
│   │   ├── RecentlyViewed/
│   │   └── Search/
│   ├── reducers/
│   ├── utils/
│   ├── App.js
│   ├── config.js
│   ├── global.scss
│   ├── index.js
│   ├── registerServiceWorker.js
│   └── store.js
├── package-lock.json
├── package.json
├── README.md
├── yarn.lock
└── .gitignore
In order to install all packages, follow the steps below:

- Move into the `backend` folder, then into the `dashboard_apis` folder.
- Install virtualenv:
  `python3 -m pip install --user virtualenv`
- Create a virtual environment:
  `python3 -m venv env`
- Activate the virtual environment:
  `source env/bin/activate`
- Install the backend dependencies:
  `pip3 install -r requirements.txt`
- Run the development server:
  `python manage.py runserver localhost:8000`
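The backend steps above can be collected into a single shell session (paths assume you start from the repository root):

```shell
# From the repository root: set up and run the Django backend.
cd backend/dashboard_apis

# Install virtualenv support and create an isolated environment.
python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate

# Install the backend dependencies and start the API server.
pip3 install -r requirements.txt
python manage.py runserver localhost:8000
```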
- Move into the `frontend` folder, then into `dashboard_frontend`.
- Install the packages:
  `npm install`
- Start the development server:
  `npm start`
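Similarly, the frontend can be brought up in one session (again from the repository root):

```shell
# From the repository root: install and start the React frontend.
cd frontend/dashboard_frontend
npm install    # installs packages from package.json
npm start      # serves the dev build, typically on http://localhost:3000/
```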
- Move into the `backend` folder, then into the `analytics` folder.
- Move into the folder `python source code files for edgar scraping`.
- Move into `directory_for_scraping_edgar_metadata`.
- Install the dependencies:
  `pip install -r requirements.txt`
- Before running any script, you can edit the `config.json` file to adjust parameters.
- To download financial reports from EDGAR, run `python edgar_crawler.py`.
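The exact contents of `config.json` depend on the crawler version, so check the file itself for the real parameters. As a purely illustrative sketch (every field name below is hypothetical), it might look like:

```json
{
  "start_year": 2018,
  "end_year": 2022,
  "filing_types": ["10-K"],
  "cik_tickers": ["CRM", "ZM"]
}
```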
- To clean and extract specific item sections from already-downloaded 10-K documents, run `python extract_items.py`.

Note: All folders in the `analytics` folder are named according to the Python files they contain.
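To give a flavor of the kind of text analysis the NLP scripts perform, here is a minimal, self-contained sketch (illustrative only, not the repository's actual code) that tokenizes a filing excerpt and counts term frequencies:

```python
import re
from collections import Counter

def term_frequencies(text: str) -> Counter:
    """Lowercase, tokenize on alphabetic runs, and count occurrences."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens)

# A toy excerpt standing in for an extracted 10-K item section.
excerpt = (
    "Our subscription revenue grew year over year, while deferred "
    "revenue and customer churn remained key risk factors."
)

freqs = term_frequencies(excerpt)
print(freqs["revenue"])  # "revenue" appears twice in the excerpt
```

The real scripts in `python source code files for text analysis - NLP/` would operate on the documents produced by `extract_items.py` rather than on a hard-coded string.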
The model will be served at http://localhost:8000/.