Deployment
Alfalfa is a containerized application that can be deployed via docker-compose, Docker Swarm, or Kubernetes. Experience deploying and troubleshooting Docker applications is a prerequisite.
- All deployment instructions here use environment variables that are committed to the repo. These are set so that local deployments work seamlessly out of the box, but it is your responsibility to update any sensitive variables (i.e. credentials) before deploying.
- As configured in these examples, none of the deployed filesystems are persisted. This means that restarting any container will reset its state (think databases and uploaded files).
- Unlike your average web application, Alfalfa is heavily dependent on in-memory state because the underlying models must run in memory. This means that deployments are not resilient to worker restarts; if a worker is restarted while a model is running, that model state will be lost.
Alfalfa includes optional Historian functionality, which is enabled by setting the HISTORIAN_ENABLE=true environment variable for the worker container and adding two containers to the stack:
- InfluxDB for timeseries data storage
- Grafana for visualization
- Note that in images published to DockerHub, the datasource has been provisioned to connect to InfluxDB using the username and password in the repo's .env file. If you have updated these (as you should) for your deployment, you will also need to manually update them via the Grafana UI, which can be accessed at <grafana-host>/datasources/edit/1/
- Model time is always treated as UTC, which is also the default for chart display. This means that plot filters like "past 6 hours" are relative to current UTC time.
- The Historian only considers model time and has no concept of the real-world time as simulations run. All plots are of model time.
- FMU time starts at epoch 0, which translates to a start time of 1970-01-01 00:00:00 in the Grafana time range selection tool.
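Since model time is rendered as UTC seconds since the epoch, you can sanity-check what a given number of simulated seconds looks like in Grafana's time picker. A minimal sketch using GNU date (the -d @N form is GNU-specific and not portable to BSD/macOS date):

```shell
# Epoch 0 corresponds to the FMU start time, displayed in Grafana as:
date -u -d @0 +"%Y-%m-%d %H:%M:%S"      # → 1970-01-01 00:00:00

# Six hours of model time (21600 s) later:
date -u -d @21600 +"%Y-%m-%d %H:%M:%S"  # → 1970-01-01 06:00:00
```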
This assumes that you have installed Docker and docker-compose and are in the root of this repo.
- Update the .env file: replace any credentials with real values and, if appropriate, change VERSION_TAG to a tag available on DockerHub. To build the containers yourself, drop the pull commands and append --build to the up commands below.
- Without Historian:
docker-compose pull && docker-compose up -d
- With Historian:
export HISTORIAN_ENABLE=true && docker-compose -f docker-compose.yml -f docker-compose-historian.yml pull && docker-compose -f docker-compose.yml -f docker-compose-historian.yml up -d
- Visit the web server at http://localhost
- Visit the minio (file upload) server at http://localhost:9000
- Visit the historian (if applicable) at http://localhost:3000
- Clean up with docker-compose down or docker-compose -f docker-compose.yml -f docker-compose-historian.yml down
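Typing both -f flags for every pull, up, and down gets repetitive with the Historian enabled. One way to avoid that is a small helper function that assembles the compose file arguments from HISTORIAN_ENABLE; the function name below is my own, not part of the repo:

```shell
# Print the -f arguments for docker-compose, adding the historian
# override file only when HISTORIAN_ENABLE=true.
compose_files() {
  if [ "${HISTORIAN_ENABLE:-false}" = "true" ]; then
    echo "-f docker-compose.yml -f docker-compose-historian.yml"
  else
    echo "-f docker-compose.yml"
  fi
}

# Usage with the commands shown above:
#   export HISTORIAN_ENABLE=true
#   docker-compose $(compose_files) pull && docker-compose $(compose_files) up -d
#   docker-compose $(compose_files) down
```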
- Swarm deployments don't load .env, so do that explicitly:
export $(cat .env | xargs)
- Without Historian:
docker stack deploy -c docker-compose.yml alfalfa
- With Historian:
export HISTORIAN_ENABLE=true && docker stack deploy -c docker-compose.yml -c docker-compose-historian.yml alfalfa
- Visit the web server at http://localhost
- Visit the minio (file upload) server at http://localhost:9000
- Visit the historian (if applicable) at http://localhost:3000
- Clean up:
docker stack rm alfalfa
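The export $(cat .env | xargs) one-liner above works for simple files but breaks if any value contains spaces. A more robust alternative is the shell's allexport mode, sketched here against a throwaway sample file (the variable names and values are placeholders, not the repo's real .env contents):

```shell
# Create a throwaway sample env file; placeholder values only.
cat > sample.env <<'EOF'
# comments are fine when sourcing the file
VERSION_TAG=latest
SAMPLE_PASSWORD="change me"
EOF

set -a           # auto-export every variable assigned while this is on
. ./sample.env
set +a

echo "$SAMPLE_PASSWORD"   # → change me
```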
Sample yaml files for Kubernetes deployments with and without historian are included in the deploy directory of this repo. Ingress is not included.
- Web workload on port 80
- Minio workload on port 9000
- If historian enabled, Grafana container on port 3000
In addition to updating usernames and passwords as appropriate, if you are accessing your stack via ingress you will need the following updates to the environment variables for the web container:
- S3_URL should point to the configured host for minio.
- S3_URL_EXTERNAL can be removed.
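For the ingress case, the change to the web container amounts to an env edit like the following sketch. The hostname is a placeholder and the exact manifest layout depends on the yaml files in the deploy directory, so treat this as illustrative only:

```yaml
# Hypothetical excerpt of the web container spec for an ingress deployment.
env:
  - name: S3_URL
    value: "https://minio.example.com"  # the host your ingress routes to minio
  # S3_URL_EXTERNAL is removed entirely for ingress deployments
```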
- Getting Started with Model Measures Part 1: Creating Inputs and Outputs
- Getting Started with Model Measures Part 2: Creating Actuators
- Getting Started with EnergyPlus Measures Part 1: Creating Inputs and Outputs
- Getting Started with EnergyPlus Measures Part 2: Creating Actuators
- How to Configure an OpenStudio Model
- How to Configure Measures for Use with Alfalfa Ruby Gem
- How to Create Inputs and Outputs With Measures
- How to Run URBANopt Output Models in Alfalfa
- How to Migrate EnergyPlus Python Plugins
- How to Integrate Python based Electric Vehicle Models with OpenStudio Workflows
- How to Locally Test OpenStudio Models
- Required Structure of OpenStudio Workflow
- List of Automatically Generated EnergyPlus Points
- Alfalfa EnergyPlus Mixin Methods
- Getting Started with Uploading and Running a Model Using Python
- Getting Started with Uploading and Running a Model Using the UI
- How to Install Alfalfa Client
- How to Preprocess and Upload a Model
- How to Step Through a Simulation
- How to View Historical Data in Grafana
- How to Configure an Alias
- How to Troubleshoot Models