I originally started this project a while back with the goal of taking the 2016 NYC Benchmarking Law data about building energy usage and doing something interesting with it. After a few iterations I thought it might be interesting to see if I could predict buildings' greenhouse gas emissions from their age, water consumption, and other energy consumption metrics. In the end the point of this project was to build and deploy a model on the cloud using a real-world dataset with outliers and missing values, using state-of-the-art tools such as BigQuery, Scikit-Learn, MLflow, Docker, and Google App Engine.
In this first blog post I will cover the basics of data cleaning, including:
- Exploratory data analysis
- Identifying and removing outliers
In identifying outliers I will cover both visual inspection and a machine learning method called Isolation Forests, sketched below. Since I will be completing this project over multiple days and using Google Cloud, I will also go over the basics of using BigQuery for storing the datasets so I won't have to start all over again each time I work on it. At the end of this blog post I will summarize the findings and give some specific recommendations to reduce multifamily and office building energy usage.
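As a concrete illustration of the Isolation Forest approach, here is a minimal sketch (not the notebook's exact code) that flags and drops outlying rows; the column names and values are made up for illustration.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy stand-in for the benchmarking data; the last row is an obvious outlier.
df = pd.DataFrame({
    "energy_use_kbtu": [1_000, 1_200, 950, 1_100, 250_000],
    "water_use_kgal": [40, 45, 38, 42, 9_000],
})

iso = IsolationForest(contamination=0.2, random_state=42)
labels = iso.fit_predict(df)   # -1 marks outliers, 1 marks inliers
clean_df = df[labels == 1]
```

And a rough sketch of pushing the cleaned table to BigQuery and reading it back with pandas-gbq, assuming a hypothetical project ID and table name:

```python
import pandas_gbq

# Write the cleaned DataFrame to a BigQuery table (placeholder names).
pandas_gbq.to_gbq(clean_df, "benchmarking.buildings_2016",
                  project_id="my-gcp-project", if_exists="replace")

# Read it back in a later session so the cleaning doesn't have to be redone.
df = pandas_gbq.read_gbq("SELECT * FROM benchmarking.buildings_2016",
                         project_id="my-gcp-project")
```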
In this second post I cover imputation techniques for missing data using Scikit-Learn's impute module, using both point estimates (i.e. mean, median) via the SimpleImputer class and more complicated regression models (e.g. KNN) via the IterativeImputer class. The latter requires that the features in the model are correlated. This is indeed the case for our dataset, and in our particular case we also need to transform the features in order to discern a more meaningful and predictive relationship between them. As we will see, the transformation of the features also gives us much better results for imputing missing values.
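A minimal sketch of the two approaches, using a small toy array rather than the real features:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.neighbors import KNeighborsRegressor

# Toy feature matrix with one missing value in each column.
X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0], [np.nan, 8.0]])

# Point estimate: replace each missing value with the column median.
X_simple = SimpleImputer(strategy="median").fit_transform(X)

# Regression-based: model each feature from the others (here with KNN),
# which only pays off when the features are correlated.
knn_imputer = IterativeImputer(estimator=KNeighborsRegressor(n_neighbors=2),
                               random_state=0)
X_knn = knn_imputer.fit_transform(X)
```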
This last post will deal with model building and model deployment. Specifically, I will build a model of New York City building greenhouse gas emissions based on the building energy usage metrics. After I build a sufficiently accurate model I will convert it to a REST API for serving and then deploy the REST API to the cloud. The processes of model development and deployment are made a lot easier with the MLflow library. Specifically, I will cover using the MLflow Tracking framework to log all the different models I developed as well as their performance. MLflow Tracking is a great way to memorialize and document the development process. I will then use MLflow Models to convert the selected model into a REST API for model serving and show how to deploy the API to the cloud using Docker and Google App Engine.
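As an example of what the tracking code looks like, here is a minimal sketch with made-up data, parameters, and metric values, not the project's actual model:

```python
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Toy data standing in for the building energy features and the emissions target.
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=100)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)
    rmse = np.sqrt(mean_squared_error(y, model.predict(X)))

    # Log the hyperparameters, the metric, and the fitted model itself.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("rmse", rmse)
    mlflow.sklearn.log_model(model, "model")   # saved as an MLflow Model
```

The logged model can then be served locally as a REST endpoint with the MLflow CLI, e.g. `mlflow models serve -m runs:/<run_id>/model -p 1234`, where `<run_id>` is the ID of the run above; the post then deploys the same artifact to the cloud with Docker and Google App Engine.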
You can install the dependencies and access the first two notebooks (GreenBuildings1 & GreenBuildings2) using Docker by building the Docker image with the following:
docker build -t greenbuildings .
Followed by running the container with the command:
docker run -ip 8888:8888 -v `pwd`:/home/jovyan -t greenbuildings
See here for more info. Otherwise, without Docker, make sure to use Python 3.7 and install GeoPandas (0.3.0) using Conda, as well as the additional libraries listed in requirements.txt. These can be installed with the command:
pip install -r requirements.txt
The last notebook (GreenBuildings3) I ran locally on my machine with the dependencies in requirements.txt.
The NYC Benchmarking Law requires owners of large buildings to annually measure their energy and water consumption in a process called benchmarking. The law standardizes this process by requiring building owners to enter their annual energy and water use in the U.S. Environmental Protection Agency's (EPA) online tool, ENERGY STAR Portfolio Manager®, and to use the tool to submit data to the City. This data gives building owners information about their building's energy and water consumption compared to similar buildings, and tracks progress year over year to help in energy efficiency planning.
I used the 2016 Benchmarking data, which is disclosed publicly and can be found here.