pman, which once stood for "process manager," is a Flask application providing an API for creating jobs on various schedulers, e.g. Kubernetes, Podman, Docker Swarm, and SLURM. It essentially translates its own JSON interface into requests for the supported backends.
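To give a feel for that JSON interface, here is a hypothetical job-creation request. The endpoint path, port, and field names are illustrative assumptions, not pman's authoritative schema; consult the source code for the real one.

```shell
# hypothetical request; the URL and JSON fields are assumptions for illustration only
curl -X POST http://localhost:5010/api/v1/ \
  -H 'Content-Type: application/json' \
  -d '{
        "jid": "example-job-1234",
        "image": "ghcr.io/fnndsc/pl-simpledsapp:latest",
        "args": ["--dummyInt", "3"],
        "cpu_limit": 1000,
        "memory_limit": 500
      }'
```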
pman is tightly coupled to pfcon. The two are typically deployed as a pair, together providing the pfcon service.
The easiest way to see it in action is to run miniChRIS-docker. The instructions that follow are for pman hackers and developers.
This section describes how to set up a local instance of pman for development.
These instructions run pman inside a container using Docker and Docker Swarm for scheduling jobs. Hot-reloading of changes to the code is enabled.
```shell
docker swarm init --advertise-addr 127.0.0.1
docker compose up -d
```
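As a quick sanity check, you can inspect the service logs and poke the API root. The service name and published port below are assumptions; confirm them against `docker-compose.yml`:

```shell
# assumes the compose service is named "pman" and listens on port 5010
docker compose logs pman
curl http://localhost:5010/api/v1/
```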
To run a full test using `docker stack deploy`, run the test harness `test_swarm.sh`:

```shell
./test_swarm.sh
```
pman must be able to schedule containers via Podman by communicating with the Podman socket.

```shell
systemctl --user start podman.service
export DOCKER_HOST="$(podman info --format '{{ .Host.RemoteSocket.Path }}')"
```
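A quick way to confirm the socket is actually there (a sketch, assuming rootless Podman under systemd):

```shell
# the path printed by `podman info` should be an existing unix socket
test -S "$(podman info --format '{{ .Host.RemoteSocket.Path }}')" && echo "socket is ready"
```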
```shell
python -m venv venv
source venv/bin/activate
pip install -r requirements/local.txt
pip install -e .
python -m pman
```
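When run outside the provided compose setup, pman still needs its configuration in the environment. A minimal sketch, assuming Podman and illustrative values (see `pman/config.py` for the full set):

```shell
# illustrative values; other variables (e.g. SECRET_KEY) may also be required
podman volume create storebase
export CONTAINER_ENV=docker             # Docker Engine API mode, Podman-compatible
export STORAGE_TYPE=docker_local_volume
export VOLUME_NAME=storebase
python -m pman
```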
Instructions for developing pman against Kubernetes: https://github.com/FNNDSC/pman/wiki/Development-Environment:-Kubernetes
pman is configured by environment variables. Refer to the source code in `pman/config.py` for exactly how it works.
pman relies on pfcon to manage data in a directory known as the "storeBase," a storage space visible to every node in your cluster.
For single-machine deployments using Docker or Podman, the best solution is to use a local volume mounted by pfcon at `/var/local/storeBase`. pman should be configured with `STORAGE_TYPE=docker_local_volume` and `VOLUME_NAME=...`.
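A sketch of the shared-volume arrangement, with illustrative names (pfcon's real deployment needs more configuration than shown here):

```shell
# create the shared volume, then bind it into pfcon at the expected path
docker volume create storebase
docker run -d --name pfcon \
  --label org.chrisproject.role=pfcon \
  -v storebase:/var/local/storeBase \
  fnndsc/pfcon
# pman is then configured with:
#   STORAGE_TYPE=docker_local_volume
#   VOLUME_NAME=storebase
```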
On Kubernetes, a single PersistentVolumeClaim should be used, mounted by pfcon at `/var/local/storeBase`. pman should be configured with `STORAGE_TYPE=kubernetes_pvc` and `VOLUME_NAME=...`.
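A minimal sketch of such a claim, assuming a storage class that supports ReadWriteMany and an illustrative name:

```shell
# hypothetical RWX PersistentVolumeClaim named "storebase"
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storebase
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 10Gi
EOF
# then configure pman with:
#   STORAGE_TYPE=kubernetes_pvc
#   VOLUME_NAME=storebase
```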
SLURM has no concept of volumes, though SLURM clusters typically use an NFS share mounted at the same path on every node. pman should be configured with `STORAGE_TYPE=host` and `STOREBASE=...`, where `STOREBASE` is the share's mount point.
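For instance (the mount point below is illustrative):

```shell
# every SLURM node mounts the same NFS share at this path
export STORAGE_TYPE=host
export STOREBASE=/mnt/nfs/storebase
```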
Originally, pman interfaced with the Docker Swarm API for the sake of supporting multi-node clusters. However, more often than not, pman runs on a single machine: that is the case for developer environments, for the "host" compute resources of our single-machine production deployments of CUBE, and for production deployments of CUBE on our Power9 supercomputers. Swarm mode is mostly an annoyance there, and its multi-node ability is poorly tested. Furthermore, multi-node functionality is better provided by `CONTAINER_ENV=kubernetes`.
In pman v4.1, `CONTAINER_ENV=docker` was introduced as a new feature and made the default configuration. In this mode, pman uses the Docker Engine API instead of the Swarm API, which is much more convenient for single-machine use cases. `CONTAINER_ENV=docker` is compatible with Podman; Podman versions 3 and 4 are known to work.
Configure the user to be able to set resource limits; see https://github.com/containers/podman/blob/main/troubleshooting.md#symptom-23
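Per the linked guide, this typically means enabling cgroup delegation for user sessions; a sketch (requires cgroup v2):

```shell
# allow rootless Podman to set cpu/cpuset/memory/pids limits
sudo mkdir -p /etc/systemd/system/user@.service.d
cat <<EOF | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=memory pids cpu cpuset
EOF
sudo systemctl daemon-reload
```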
| Environment Variable | Description |
|---|---|
| `SECRET_KEY` | Flask secret key |
| `CONTAINER_ENV` | one of: "swarm", "kubernetes", "cromwell", "docker" |
| `STORAGE_TYPE` | one of: "host", "docker_local_volume", "kubernetes_pvc" |
| `STOREBASE` | where job data is stored, valid when `STORAGE_TYPE=host`, conflicts with `VOLUME_NAME` |
| `VOLUME_NAME` | name of data volume, valid when `STORAGE_TYPE=docker_local_volume` or `STORAGE_TYPE=kubernetes_pvc` |
| `PFCON_SELECTOR` | label on the pfcon container, may be specified for pman to self-discover `VOLUME_NAME` (default: `org.chrisproject.role=pfcon`) |
| `CONTAINER_USER` | set job container user in the form `UID:GID`, may be a range for random values |
| `ENABLE_HOME_WORKAROUND` | if set to "yes", set the job environment variable `HOME=/tmp` |
| `JOB_LABELS` | CSV list of `key=value` pairs, labels to apply to container jobs |
| `JOB_LOGS_TAIL` | (int) maximum size of job logs |
| `IGNORE_LIMITS` | if set to "yes", do not set resource limits on container jobs (for making things work without effort) |
| `REMOVE_JOBS` | if set to "no", pman will not delete jobs (for debugging) |
When `STORAGE_TYPE=host`, specify `STOREBASE` as a mount point path on the host(s).
For single-machine instances, use a Docker/Podman local volume as the "storeBase." The volume should exist prior to the start of pman. It can be identified in one of two ways:

- Manually, by passing the volume name to the variable `VOLUME_NAME`
- Automatically: pman inspects a container with the label `org.chrisproject.role=pfcon` and selects the mountpoint of its bind to `/var/local/storeBase` (see the sketch below)
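The automatic discovery is roughly equivalent to this manual lookup (a sketch; pman's actual implementation lives in its source code):

```shell
# find the pfcon container by its label, then print the name of the volume
# whose mountpoint is /var/local/storeBase
pfcon_id="$(docker ps --filter label=org.chrisproject.role=pfcon --quiet)"
docker inspect "$pfcon_id" --format \
  '{{ range .Mounts }}{{ if eq .Destination "/var/local/storeBase" }}{{ .Name }}{{ end }}{{ end }}'
```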
When `STORAGE_TYPE=kubernetes_pvc`, `VOLUME_NAME` must be the name of a PersistentVolumeClaim configured as ReadWriteMany. In cases where the volume is only writable by a specific UNIX user, such as an NFS-backed volume, `CONTAINER_USER` can be used as a workaround.
Applicable when `CONTAINER_ENV=kubernetes`:

| Environment Variable | Description |
|---|---|
| `JOB_NAMESPACE` | Kubernetes namespace for created jobs |
| `NODE_SELECTOR` | Pod `nodeSelector` |
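For instance (the namespace is illustrative and must already exist; the exact value format `NODE_SELECTOR` expects is an assumption, so check `pman/config.py` before relying on it):

```shell
export CONTAINER_ENV=kubernetes
export JOB_NAMESPACE=chris-jobs   # illustrative; create it with `kubectl create namespace`
```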
Applicable when `CONTAINER_ENV=cromwell`:

| Environment Variable | Description |
|---|---|
| `CROMWELL_URL` | Cromwell URL |
| `TIMELIMIT_MINUTES` | SLURM job time limit |
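For example (the URL is illustrative, assuming Cromwell's default port):

```shell
export CONTAINER_ENV=cromwell
export CROMWELL_URL=http://localhost:8000
export TIMELIMIT_MINUTES=30
```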
For how it works, see https://github.com/FNNDSC/pman/wiki/Cromwell
Setting an arbitrary container user, e.g. with `CONTAINER_USER=123456:123456`, increases security but will cause (unsafely written) ChRIS plugins to fail. In some cases, `ENABLE_HOME_WORKAROUND=yes` can get such a plugin to work without having to change its code. It is possible to use a random container user with `CONTAINER_USER=1000000000-2147483647:1000000000-2147483647`; however, since pfcon's UID never changes, this will cause everything to break.
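A sketch of the hardened-but-working combination described above:

```shell
# arbitrary fixed user from the example above, plus the HOME workaround
export CONTAINER_USER=123456:123456
export ENABLE_HOME_WORKAROUND=yes   # sets HOME=/tmp inside job containers
```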
pman's configuration has gotten messy over the years because it attempts to provide an interface across vastly different systems. Some mixing-and-matching of options is unsupported:

- `IGNORE_LIMITS=yes` only works with `CONTAINER_ENV=docker` (or Podman).
- `JOB_LABELS=...` only works with `CONTAINER_ENV=docker` (or Podman) and `CONTAINER_ENV=kubernetes`.
- `CONTAINER_USER` does not work with `CONTAINER_ENV=cromwell`.
- `CONTAINER_ENV=cromwell` does not forward environment variables.
- `STORAGE_TYPE=host` is not supported for Kubernetes.
- A dev environment and testing for Kubernetes and SLURM are still missing.