Autoscaling

Container as a Service using Azure VM Scale Sets and Docker Swarm (mode)

A solution that scales at both the container and IaaS levels, providing true on-demand compute elasticity. Ideal for deploying workloads with a variable footprint. Comes with an example monitoring stack.

  • Realized using Azure VM Scale Sets.
  • The Linux Diagnostics extension collects guest VM metrics, which trigger scaling in/out based on CPU and memory use (an autoscale rule is sketched after this list).
  • CPU-bound load is generated with the stress tool, packaged as a Docker image.
  • Azure deployment JSON templates are created with acs-engine, choosing the DockerCE (Swarm Mode) orchestrator (see the deployment sketch under "Deploy and Visualize").
  • The Grafana dashboard JSON is in the /grafana directory.
  • Docker experimental mode is enabled before installing the monitoring stack.
  • All scripts are in the /scripts directory:
    • cputest (cputest.sh) is itself deployed as a swarm mode service in global mode (sketched below).
    • clean_swarm.sh is a utility for cleaning up "Down" nodes on the swarm master after the cluster scales in (sketched below).
    • An additional container allocation visualization tool can be deployed with the setup_visualizer.sh script. It must be run on the master against the "local" Docker daemon bound to the docker0 interface (sketched below).
    • deploy_monitoring.sh deploys the monitoring stack.
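The autoscale rules themselves live in the ARM templates generated by acs-engine. As a rough equivalent, a CPU-based rule pair can be sketched with the Azure CLI; the resource group and scale set names below are placeholders, and "Percentage CPU" is the host metric, shown only to illustrate the rule shape (the repo uses guest metrics from the Linux Diagnostics extension):

```bash
# Placeholder names: my-rg / my-vmss.
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-vmss \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name swarm-autoscale --min-count 1 --max-count 10 --count 3

# Scale out when average CPU exceeds 70% over 5 minutes...
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name swarm-autoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 1

# ...and back in when it drops below 30%.
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name swarm-autoscale \
  --condition "Percentage CPU < 30 avg 5m" \
  --scale in 1
```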
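For the load generator, a global-mode service places one cputest task on every node, so new nodes pick up load as soon as they join. The image name below is a placeholder standing in for whatever image wraps cputest.sh:

```bash
# Start the CPU load on every node (image name is a placeholder).
docker service create --name cputest --mode global <your-registry>/cputest

# Removing the service stops the load, letting the cluster scale back in.
docker service rm cputest
```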
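A minimal sketch of the kind of cleanup clean_swarm.sh performs, run on the master; it assumes scaled-in nodes remain listed with a "Down" status:

```bash
# Remove worker nodes that the swarm still lists as Down after scale-in.
for node in $(docker node ls --filter "role=worker" --format '{{.ID}} {{.Status}}' \
              | awk '$2 == "Down" {print $1}'); do
  docker node rm "$node"
done
```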
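setup_visualizer.sh wraps a run along these lines; the dockersamples/visualizer image is an assumption here, as any swarm visualizer image would do. It runs as a plain container on the master (not a swarm service) so it can read cluster state from the manager's Docker socket:

```bash
# Run on the master against the local daemon bound to docker0.
docker -H unix:///var/run/docker.sock run -d \
  --name visualizer \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dockersamples/visualizer
```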

The architecture diagram is as follows:

A sample dashboard looks like the following while cputest is running, demonstrating the swarm mode cluster automatically scaling out, and scaling back in when cputest is stopped:

Deploy and Visualize

Deploy to Azure
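The "Deploy to Azure" button drives the acs-engine-generated templates; doing the same by hand looks roughly like the following. The cluster definition filename and DNS prefix are placeholders, and the _output path follows acs-engine's layout:

```bash
# Generate ARM templates from the cluster definition (DockerCE / Swarm Mode).
acs-engine generate clusterdefinition.json

# Deploy the generated templates into a resource group.
az group create --name my-rg --location westus2
az group deployment create \
  --resource-group my-rg \
  --template-file _output/<dnsprefix>/azuredeploy.json \
  --parameters @_output/<dnsprefix>/azuredeploy.parameters.json
```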

Microsoft Open Source Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments. All credit goes to contributors of the individual components used in this project, where applicable.