Deployment
Anya Petersen edited this page Mar 5, 2021
Alfalfa is a containerized application that can be deployed via docker-compose, Docker Swarm, or Kubernetes. Experience deploying and troubleshooting Docker applications is a prerequisite.
- All deployment instructions here use environment variables that are committed to the repo. Where the environment variable represents an authentication credential, you should update the files with your secret values before deploying.
- As configured in these examples, none of the deployed filesystems are persistent. This means that restarting any container will reset its state, including related databases and uploaded files.
- Unlike your average web application, Alfalfa is heavily dependent on in-memory state. Underlying building models must run in memory. This means that deployments are not resilient to worker restarts; if a worker is restarted while a model is running, that model state will be lost.
Alfalfa includes optional Historian functionality, which is enabled by setting the HISTORIAN_ENABLE=true environment variable for the worker container and adding two containers to the stack:
- InfluxDB for timeseries data storage
- Grafana for visualization
- Note that in images published to DockerHub, the datasource has been provisioned to connect to InfluxDB using the username and password in the repo's .env file. If you have updated these (as you should) for your deployment, you will also need to manually update them via the Grafana UI, which can be accessed at <grafana-host>/datasources/edit/1/
- All recorded data points use UTC, which is also the default for chart display. This means that plot filters like "past 6 hours" are relative to current UTC time.
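Because points are recorded in UTC, it can help to compute the actual UTC window that a relative filter like "past 6 hours" covers. A minimal sketch using GNU date (the -d flag is GNU-specific; on BSD/macOS use date -u -v-6H instead):

```shell
# End of the "past 6 hours" window: the current time in UTC.
date -u +"%Y-%m-%dT%H:%M:%SZ"

# Start of the window, 6 hours earlier (GNU date syntax).
date -u -d '6 hours ago' +"%Y-%m-%dT%H:%M:%SZ"
```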
This assumes that you have installed Docker and docker-compose and are in the root of this repo.
- Update the .env file: replace any credentials with real values and, if appropriate, change VERSION_TAG to a tag available on DockerHub. To build the containers yourself, drop the pull command and append --build to the up commands below.
- Without Historian:
docker-compose pull && docker-compose up -d
- With Historian:
export HISTORIAN_ENABLE=true && docker-compose -f docker-compose.yml -f docker-compose-historian.yml pull && docker-compose -f docker-compose.yml -f docker-compose-historian.yml up -d
- Visit the web server at http://localhost
- Visit the MinIO (file upload) server at http://localhost:9000
- Visit the historian (if applicable) at http://localhost:3000.
- Clean up with docker-compose down or, if you deployed the Historian, docker-compose -f docker-compose.yml -f docker-compose-historian.yml down
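The with- and without-Historian commands above differ only in the compose files passed to docker-compose. A hypothetical wrapper (the compose file names match those in the repo; the script itself is illustrative, and here it only prints the commands as a dry run) keeps the two variants consistent:

```shell
#!/bin/sh
# Illustrative dry run: select compose files based on HISTORIAN_ENABLE,
# then print the docker-compose commands that would deploy the stack.
# Remove the echo prefixes to actually run them.
COMPOSE_FILES="-f docker-compose.yml"
if [ "${HISTORIAN_ENABLE:-false}" = "true" ]; then
  COMPOSE_FILES="$COMPOSE_FILES -f docker-compose-historian.yml"
fi
echo "docker-compose $COMPOSE_FILES pull"
echo "docker-compose $COMPOSE_FILES up -d"
```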
Docker Swarm
TODO
Kubernetes
TODO