Commit
documentation mash commit
consideRatio committed Aug 18, 2018
1 parent 080531c commit ff9ec96
Showing 25 changed files with 261 additions and 274 deletions.
25 changes: 12 additions & 13 deletions doc-notes.md
@@ -1,37 +1,36 @@
index

creating your kubernetes cluster:
-creating your cluster
-zero-gke
+creating your cluster OK
+zero-gke OK

creating your jupyterhub:
-getting started
-setting up helm
-setting up jupyterhub
-turning off jupyterhub and computational resources
+Getting Started OK
+Setting up Helm OK
+Setting up JupyterHub OK
+Tearing down everything OK

-customization guide
-extending your jh setup
-applying config changes
+Customization guide
+Customizing your deployment (extending) OK

-customizing user environment
+Customizing the User Environment
use an existing docker image
build a custom docker image with repo2docker
use jupyterlab by default
set env variables
pre-populating users home dir

-user resources
+Customizing User Resources
set user memory and cpu guarantees/limits
modifying user storage type and size
expanding and contracting the size of your cluster

-user storage in jupyterhub
+Customizing User Storage
how can this process break down
configuration
turn off per-user persistent storage

-user management
+Customizing User Management
culling user pods
admin users
authenticating users
2 changes: 1 addition & 1 deletion doc/source/advanced.md
@@ -270,7 +270,7 @@ is added to the cluster.
By enabling the **continuous pre-puller** (default state is disabled), the user
image will be pre-pulled when adding a new node. When enabled, the
**continuous pre-puller** runs as a [daemonset](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
-to force kubernetes to pull the user image on all nodes as soon as a node is
+to force Kubernetes to pull the user image on all nodes as soon as a node is
present. The continuous pre-puller uses minimal resources on all nodes and
greatly speeds up the user pod start time.
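For context, enabling this feature is a `config.yaml` change followed by an upgrade. A minimal sketch, assuming the chart exposes the toggle as `prePuller.continuous.enabled` (verify the key against the configuration reference for your chart version):

```bash
# Append the (assumed) continuous pre-puller toggle to config.yaml.
cat >> config.yaml <<'EOF'
prePuller:
  continuous:
    enabled: true
EOF

# Roll the change out to the running deployment.
helm upgrade <YOUR_RELEASE_NAME> jupyterhub/jupyterhub --version=0.7 --values config.yaml
```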

2 changes: 1 addition & 1 deletion doc/source/amazon/efs_storage.rst
@@ -7,7 +7,7 @@ ElasticFileSystem is distributed file system which speaks the NFS protocol. It

Drawbacks:

-* Setting permissions on persistent volumes is not nailed down in the kubernetes spec yet. This adds some complications we will discuss later.
+* Setting permissions on persistent volumes is not nailed down in the Kubernetes spec yet. This adds some complications we will discuss later.

* A crafty user may be able to contact the EFS server directly and read other users' files depending on how the system is set up.

14 changes: 8 additions & 6 deletions doc/source/create-k8s-cluster.rst
@@ -1,12 +1,14 @@
.. _create-k8s-cluster:

-Creating your Kubernetes Cluster
-=============================
+Setup a Kubernetes Cluster
+==========================

-Kubernetes' documentation describes the many `ways to set up a cluster`_.
-Here, we shall provide quick instructions for the most painless and
-popular ways of getting setup in various cloud providers and on other
-infrastructure. Choose one option and proceed.
+Kubernetes' documentation describes the many `ways to set up a cluster
+<https://kubernetes.io/docs/setup/pick-right-solution/>`_. We provide quick
+instructions for the most painless and popular ways of setting up a Kubernetes
+cluster on various cloud providers and on other infrastructure.
+
+Choose one option and proceed.

.. toctree::
:titlesonly:
40 changes: 22 additions & 18 deletions doc/source/extending-jupyterhub.rst
@@ -1,11 +1,11 @@
.. _extending-jupyterhub:

-Extending your JupyterHub setup
-===============================
+Customizing your Deployment
+===========================

-The helm chart used to install JupyterHub has a lot of options for you to tweak.
-For a semi-complete list of the changes you can apply via your helm-chart,
-see the :ref:`helm-chart-configuration-reference`.
+The Helm chart used to install your JupyterHub deployment has a lot of options
+for you to tweak. For a semi-complete reference list of the options, see the
+:ref:`helm-chart-configuration-reference`.

.. _apply-config-changes:

@@ -14,21 +14,25 @@ Applying configuration changes

The general method to modify your Kubernetes deployment is to:

-1. Make a change to your ``config.yaml``
+1. Make a change to your ``config.yaml``.

2. Run a ``helm upgrade``:

   .. code-block:: bash

-      helm upgrade <YOUR_RELEASE_NAME> jupyterhub/jupyterhub --version=0.7 --values config.yaml
+      helm upgrade <YOUR_RELEASE_NAME> jupyterhub/jupyterhub \
+        --version=0.7 \
+        --values config.yaml

   Where ``<YOUR_RELEASE_NAME>`` is the parameter you passed to ``--name`` when
   `installing jupyterhub <setup-jupyterhub.html#install-jupyterhub>`_ with
   ``helm install``. If you don't remember it, you can probably find it by doing
   ``helm list``.

-3. Wait for the upgrade to finish, and make sure that when you do
-   ``kubectl --namespace=<YOUR_NAMESPACE> get pod`` the hub and proxy pods are
-   in ``Ready`` state. Your configuration change has been applied!
+   Note that ``helm list`` should display ``<YOUR_RELEASE_NAME>`` if you forgot it.

-For information about the many things you can customize with changes to
-your helm chart, see :ref:`user-environment`, :ref:`user-resources`, and
-:ref:`helm-chart-configuration-reference`.
+3. Verify that the *hub* and *proxy* pods entered the ``Running`` state after
+   the upgrade completed.
+
+   .. code-block:: bash
+
+      kubectl --namespace=<YOUR_NAMESPACE> get pod
+
+For information about the many things you can customize with changes to your
+Helm chart through values provided to its templates via ``config.yaml``, see
+the :ref:`customization-guide`.
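Putting the three steps together, one upgrade cycle might look like the sketch below; the release and namespace names are placeholders, and `--watch` is an optional convenience:

```bash
# 1. Edit config.yaml with your change (not shown).

# 2. Apply the change with a helm upgrade.
helm upgrade <YOUR_RELEASE_NAME> jupyterhub/jupyterhub \
  --version=0.7 \
  --values config.yaml

# 3. Watch the hub and proxy pods until they reach the Running state
#    (Ctrl-C to stop watching).
kubectl --namespace=<YOUR_NAMESPACE> get pod --watch
```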
16 changes: 6 additions & 10 deletions doc/source/getting-started.rst
@@ -1,6 +1,6 @@
.. _getting-started:

-Getting started
+Getting Started
===============

**JupyterHub** lets you create custom computing environments that users can
@@ -21,20 +21,16 @@ And may end up gaining experience with:

.. note::

-   For a more extensive description of the tools and services that JupyterHub
+   For a more elaborate introduction to the tools and services that JupyterHub
   depends upon, see our :ref:`tools` page.


Verify JupyterHub dependencies
------------------------------

At this point, you should have completed *Step Zero* and have an operational
-Kubernetes cluster. You will already have a cloud provider/infrastructure and
-kubernetes.
+Kubernetes cluster made available through a cloud provider/infrastructure. If
+not, see :ref:`create-k8s-cluster`.

-If you need to create a Kubernetes cluster, see
-:ref:`create-k8s-cluster`.

-We also depend on Helm and the JupyterHub Helm chart for your JupyterHub
-deployment. We'll deploy them in this section. Let's begin by moving on to
-:ref:`setup-helm`.
+You will use Helm and the JupyterHub Helm chart for your JupyterHub deployment.
+Let's get started by moving on to :ref:`setup-helm`.
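Before moving on, a quick sanity check that `kubectl` can reach your cluster is worthwhile; both commands below are standard `kubectl`, shown here as a sketch:

```bash
# Show which cluster/context kubectl is currently pointing at.
kubectl config current-context

# List the cluster's nodes; they should report a Ready status.
kubectl get node
```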
23 changes: 12 additions & 11 deletions doc/source/google/step-zero-gcp.rst
@@ -43,10 +43,11 @@ your google cloud account.
b. **Use your own computer's terminal:**

   1. Download and install the `gcloud` command line tool at its `downloads
-      page <https://cloud.google.com/sdk/downloads>`_.
+      page <https://cloud.google.com/sdk/downloads>`_. It will help you
+      create and communicate with a Kubernetes cluster.

-   2. Install ``kubectl`` (read *kube control*), it is a tool for controlling
-      kubernetes. From your terminal, enter:
+   2. Install ``kubectl`` (read *kube control*), a tool for controlling
+      Kubernetes clusters in general. From your terminal, enter:

.. code-block:: bash
@@ -69,8 +70,8 @@ your google cloud account.

A single node from the default node pool created below will be responsible
for running the essential pods of the JupyterHub chart. We recommend choosing
-a cheap machine type like `n1-standard-1` initially and upgrade it at a later
-stage if it is found to be overburdened.
+a cheap machine type like `n1-standard-1` initially and upgrading it at a
+later stage if it is found to be overburdened.

See the `node pool documentation
<https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools>`_ for
@@ -85,8 +86,8 @@ your google cloud account.
--node-labels hub.jupyter.org/node-purpose=core
* ``--machine-type`` specifies the amount of CPU and RAM in each node within
-  this default node pool. There is a `variety of types <https://cloud.google.com/compute/docs/machine-types>`_
-  to choose from.
+  this default node pool. There is a `variety of types
+  <https://cloud.google.com/compute/docs/machine-types>`_ to choose from.

* ``--num-nodes`` specifies how many nodes to spin up.

@@ -97,14 +98,14 @@ your google cloud account.
means that the amount of nodes is automatically adjusted along with the
amount of users scheduled.

-The `n1-standard-2` machine type has 2CPUs and 7.5G of RAM each of which
-about 0.2 CPU will be requested by system pods. It is a suitable choice for
-a free account that has a limit on a total of 8 CPU cores.
+The `n1-standard-2` machine type has 2 CPUs and 7.5 GB of RAM, of which
+about 0.2 CPU will be requested by system pods. It is a suitable choice for a
+free account that has a limit on a total of 8 CPU cores.

Note that the node pool is *tainted*. Only user pods that are configured
with a *toleration* for this taint can schedule on the node pool's nodes.
This is done in order to ensure the autoscaler will be able to scale down
-when the users have left.
+when the user pods have stopped.

.. code-block:: bash
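The code block above is collapsed in this view; for illustration only, creating an autoscaling, tainted user node pool might look roughly like the sketch below. The flag spellings and the taint key are assumptions to verify against the GKE documentation (`--node-taints` was a beta feature at the time, and `_` is sometimes substituted for `/` in taint keys passed to `gcloud`):

```bash
# Hypothetical sketch: an autoscaling node pool that only user pods
# (with a matching toleration) can schedule onto.
gcloud beta container node-pools create user-pool \
  --machine-type n1-standard-2 \
  --num-nodes 0 \
  --enable-autoscaling --min-nodes 0 --max-nodes 3 \
  --node-labels hub.jupyter.org/node-purpose=user \
  --node-taints hub.jupyter.org_dedicated=user:NoSchedule
```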
12 changes: 6 additions & 6 deletions doc/source/index.rst
@@ -21,22 +21,22 @@ page`_.

.. _getting-to-zero:

-Creating your Kubernetes cluster
----------------------------------------------
+Setup a Kubernetes cluster
+--------------------------

This section describes a Kubernetes cluster and outlines how to complete *Step Zero: your Kubernetes cluster* for
different cloud providers and infrastructure.

.. toctree::
   :titlesonly:
-   :caption: Creating your Kubernetes cluster
+   :caption: Setup a Kubernetes cluster

create-k8s-cluster

.. _creating-your-jupyterhub:

-Creating your JupyterHub
-------------------------
+Setup JupyterHub
+----------------

This tutorial starts from *Step Zero: your Kubernetes cluster* and describes the
steps needed for you to create a complete initial JupyterHub deployment.
@@ -45,7 +45,7 @@ an initial deployment.

.. toctree::
   :maxdepth: 1
-   :caption: Creating your JupyterHub
+   :caption: Setup JupyterHub

getting-started
setup-helm
4 changes: 2 additions & 2 deletions doc/source/microsoft/step-zero-azure.rst
@@ -150,7 +150,7 @@ Step Zero: Kubernetes on Microsoft Azure Container Service (AKS)
* ``--name`` is the name you want to use to refer to your cluster
* ``--resource-group`` is the ResourceGroup you created in step 4
* ``--ssh-key-value`` is the ssh public key created in step 7
-* ``--node-count`` is the number of nodes you want in your kubernetes cluster
+* ``--node-count`` is the number of nodes you want in your Kubernetes cluster
* ``--node-vm-size`` is the size of the nodes you want to use, which varies based on
what you are using your cluster for and how much RAM/CPU each of your users need.
There is a `list of all possible node sizes <https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-sizes-specs>`_
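For illustration, the cluster-creation command these flags belong to might look roughly like this sketch; all names and sizes are placeholders, and `az aks create --help` has the authoritative flag list:

```bash
# Hypothetical sketch: create an AKS cluster using the flags described above.
az aks create \
  --name <YOUR_CLUSTER_NAME> \
  --resource-group <YOUR_RESOURCE_GROUP> \
  --ssh-key-value ~/.ssh/id_rsa.pub \
  --node-count 3 \
  --node-vm-size Standard_D2s_v3
```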
@@ -188,7 +188,7 @@ Step Zero: Kubernetes on Microsoft Azure Container Service (AKS)
kubectl get node
-The response should list three running nodes and their kubernetes versions!
+The response should list three running nodes and their Kubernetes versions!
Each node should have the status of ``Ready``; note that this may take a
few moments.

2 changes: 1 addition & 1 deletion doc/source/optimization.md
@@ -34,7 +34,7 @@ container image will be pre-pulled when a new node is added. New nodes can for
example be added manually or by a cluster autoscaler. The **continuous
pre-puller** uses a
[daemonset](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
-to force kubernetes to pull the user image on all nodes as soon as a node is
+to force Kubernetes to pull the user image on all nodes as soon as a node is
present. The continuous pre-puller uses minimal resources on all nodes and
greatly speeds up the user pod start time.
15 changes: 9 additions & 6 deletions doc/source/reference.txt
@@ -3,11 +3,14 @@ DO NOT EDIT THIS LINE. This file is used in `conf.py to generate schema.md`. Edi
.. _helm-chart-configuration-reference:
```

-# Helm Chart Configuration Reference
+# Configuration Reference

-The [JupyterHub helm chart](https://github.com/jupyterhub/zero-to-jupyterhub-k8s) is configurable so that you can customize your JupyterHub setup however you'd like. You can extend user resources, build off of different Docker images, manage security and authentication, and more.
+The [JupyterHub Helm chart](https://github.com/jupyterhub/zero-to-jupyterhub-k8s)
+is configurable by values in `config.yaml`. This means that you can customize
+your JupyterHub deployment in many ways. You can extend user resources, build
+off of different Docker images, manage security and authentication, and more.

-Below is a description of the fields that are exposed with the JupyterHub helm chart.
-For more guided information about some specific things you can do with
-modifications to the helm chart, see the [extending jupyterhub](extending-jupyterhub.html)
-and [user environment](user-environment.html) pages.
+Below is a description of the fields that are exposed with the JupyterHub helm
+chart. For more guided information about some specific things you can do with
+modifications to the helm chart, see the
+[extending jupyterhub](extending-jupyterhub.html) and
+[user environment](user-environment.html) pages.
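To give a flavor of such values, here is a minimal, hypothetical `config.yaml` fragment; the key names (`singleuser.image`, `singleuser.memory`) should be confirmed against the reference below before use:

```bash
# Write an example config.yaml fragment (keys assumed; verify in the reference).
cat >> config.yaml <<'EOF'
singleuser:
  image:
    name: jupyter/datascience-notebook
    tag: latest  # illustrative only; pin a real tag in practice
  memory:
    limit: 1G
    guarantee: 512M
EOF
```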
2 changes: 1 addition & 1 deletion doc/source/security.md
@@ -191,7 +191,7 @@ users to grant themselves more privileges, access other users' content without
permission, run (unprofitable) bitcoin mining operations & various other
not-legitimate activities. By default, we do not allow access to the [service
account credentials](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) needed
-to access the kubernetes API from user servers for this reason.
+to access the Kubernetes API from user servers for this reason.

If you want to (carefully!) give access to the Kubernetes API to your users, you
can do so with the following in your `config.yaml`:
26 changes: 12 additions & 14 deletions doc/source/setup-helm.rst
@@ -5,15 +5,14 @@ Setting up Helm

`Helm <https://helm.sh/>`_, the package manager for Kubernetes, is a useful tool
for: installing, upgrading and managing applications on a Kubernetes cluster.
-The Helm packages are called *charts*. We will be install and manage JupyterHub
-on our kubernetes cluster with a Helm chart.
+Helm packages are called *charts*. We will install and manage JupyterHub on
+our Kubernetes cluster with a Helm chart.

Helm has two parts: a client (`helm`) and a server (`tiller`). Tiller runs
-inside of your Kubernetes cluster as a pod in the kube-system namespace and
+inside of your Kubernetes cluster as a pod in the kube-system namespace. Tiller
manages *releases* (installations) and *revisions* (versions) of charts deployed
-on the kubernetes cluster. When you run `helm` commands, your local Helm client
-sends instructions to `tiller` in the cluster that in turn make the requested
-changes.
+on the cluster. When you run `helm` commands, your local Helm client sends
+instructions to `tiller` in the cluster, which in turn makes the requested changes.
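Once both parts are running (after the installation and initialization steps below), the client/server split is visible directly; as a quick sketch of the standard Helm 2 check:

```bash
# Prints the local client version and the in-cluster tiller (server) version.
helm version
```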

Installation
------------
@@ -25,8 +24,9 @@ terminal:
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
-`Alternative methods for helm installation <https://github.com/kubernetes/helm/blob/master/docs/install.md>`_
-exist if you prefer to install without using the script.
+`Alternative methods for helm installation
+<https://github.com/kubernetes/helm/blob/master/docs/install.md>`_ exist if you
+prefer or need to install without using the script.

.. _helm-rbac:

@@ -50,14 +50,13 @@ cluster:
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
-See the `RBAC documentation
-<security.html#use-role-based-access-control-rbac>`_ for more
-information.
+See `our RBAC documentation
+<security.html#use-role-based-access-control-rbac>`_ for more information.

.. note::

   While most clusters have RBAC enabled and you need this line, you **must**
-   skip this step if your kubernetes cluster does not have RBAC enabled.
+   skip this step if your Kubernetes cluster does not have RBAC enabled.

3. Initialize `helm` and `tiller`.
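   The exact command is collapsed in this view; under the standard Helm 2 flow it is typically the following sketch, which tells tiller to run under the service account created in the RBAC step above (an assumption to verify against the full document):

```bash
# Hypothetical sketch of the standard Helm 2 initialization.
helm init --service-account tiller
```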

@@ -120,5 +119,4 @@ Ensure that `tiller is secure <https://engineering.bitnami.com/articles/helm-sec
Next Step
---------

-Congratulations. Helm is now set up. The next step is to :ref:`install
-JupyterHub <setup-jupyterhub>`!
+Congratulations, Helm is now set up! Let's continue with :ref:`setup-jupyterhub`!