Build spacktainers on codebuild #6

Merged · 129 commits · Dec 5, 2024

## Commits
- `3c30ead` First attempt at running something on codebuild (heerener, Sep 25, 2024)
- `ff6d0c2` Some more commands (heerener, Sep 25, 2024)
- `286758c` An extra echo (heerener, Sep 27, 2024)
- `972a9fa` Configure mirror (heerener, Sep 27, 2024)
- `be43b0f` Full path to spack command (heerener, Sep 27, 2024)
- `c9b5b17` Bugfix (heerener, Sep 27, 2024)
- `dc8d232` Remove helloworld job (heerener, Sep 27, 2024)
- `647c33c` Serets as input (heerener, Sep 27, 2024)
- `a34d619` Second step (heerener, Sep 30, 2024)
- `de1fa48` Try to build base containers (heerener, Oct 1, 2024)
- `c92005e` Install awscli (heerener, Oct 1, 2024)
- `35fbeee` Try running on a curated amazon image (heerener, Oct 1, 2024)
- `02165a8` Reduce, reduce, reduce (heerener, Oct 1, 2024)
- `b77a257` Now try ubuntu (heerener, Oct 1, 2024)
- `080fe48` restore (heerener, Oct 1, 2024)
- `afc61aa` Amazon Q has ... suggestions (heerener, Oct 1, 2024)
- `8552127` Let's try like this again (heerener, Oct 1, 2024)
- `4e432a5` Now add a tag (heerener, Oct 1, 2024)
- `68a13f2` More Amazon Q suggestions (heerener, Oct 1, 2024)
- `1d2d6df` Let's try to reverse it (heerener, Oct 1, 2024)
- `9ede4fc` Fine, like this then (heerener, Oct 1, 2024)
- `0d275c1` Let's do an experimenting workflow for now (heerener, Oct 2, 2024)
- `47f4144` ecr-container: without LINUX_IMAGE prefix? (heerener, Oct 2, 2024)
- `0b63c0b` Change default container (heerener, Oct 2, 2024)
- `5aeddc5` Retry (heerener, Oct 2, 2024)
- `6f75148` Disable test build again (heerener, Oct 2, 2024)
- `19b2bd6` Run base build on ubuntu, define job dependency (heerener, Oct 2, 2024)
- `ae095c9` Use the correct secret name (heerener, Oct 2, 2024)
- `ad813fd` More output (heerener, Oct 2, 2024)
- `ef753cc` BUILDAH_EXTRA_ARGS syntax change (heerener, Oct 2, 2024)
- `9fdc2fd` Actual spack branch, not commit sha (heerener, Oct 2, 2024)
- `668df28` No bluebrain files in spackspack (heerener, Oct 2, 2024)
- `e6cd3e4` Write public deployment key (heerener, Oct 3, 2024)
- `1c9477f` Build runtime like builder (heerener, Oct 3, 2024)
- `e2c8012` First attempt at composite action (heerener, Oct 3, 2024)
- `b08e9ac` Now actually try to call the composite action (heerener, Oct 3, 2024)
- `f3ddf96` Apparently github is picky about the filename here (heerener, Oct 3, 2024)
- `6554e00` With an input (heerener, Oct 3, 2024)
- `fdc6662` For real now: builder and runtime with a composite action (heerener, Oct 3, 2024)
- `2d25e45` Forgot an input (heerener, Oct 3, 2024)
- `0b74831` That was not an input (heerener, Oct 3, 2024)
- `af5644a` Docker hub authentication (heerener, Oct 3, 2024)
- `81e2270` Bump to Ubuntu 24.04, include BlueBrain repos under generic name. (matz-e, Oct 3, 2024)
- `a586327` Syntax fix (heerener, Oct 3, 2024)
- `dc18174` Attempt to parametrize runs-on (heerener, Oct 7, 2024)
- `d646bd0` Now try running on the runtime container (heerener, Oct 7, 2024)
- `84ae1ff` Properly define needs (heerener, Oct 7, 2024)
- `863471e` Hardcoded attempt to run on runtime image (heerener, Oct 7, 2024)
- `de5f4eb` Remove the LINUX_IMAGE prefix (heerener, Oct 7, 2024)
- `3f9570b` Move the comment out of the way (heerener, Oct 7, 2024)
- `2333a42` Re-enable base container jobs, go back to not hardcoded runs-on (heerener, Oct 7, 2024)
- `be949a2` builder/Dockerfile: formatting and some debug commands (heerener, Oct 7, 2024)
- `7576f77` Restore `needs` relationship (heerener, Oct 7, 2024)
- `97becfc` Remove debug info (heerener, Oct 7, 2024)
- `e1639bf` Latest changes to get things building / uploading smoother (matz-e, Oct 7, 2024)
- `f9f9642` First attempt to build touchdetector (heerener, Oct 7, 2024)
- `4b6dc98` Rename reusable action (heerener, Oct 7, 2024)
- `13539d4` Fix typo (heerener, Oct 7, 2024)
- `b0f98d1` No need to build on the builder image (heerener, Oct 7, 2024)
- `91bf4e9` Spack private deployment key (heerener, Oct 7, 2024)
- `2af5c16` Variables from the secrets context (heerener, Oct 7, 2024)
- `122f2d0` Debugging (heerener, Oct 7, 2024)
- `ac06f1b` Which one is it? (heerener, Oct 7, 2024)
- `be90c86` Too many brackets (heerener, Oct 7, 2024)
- `519a9c1` Correct touchdetector path (heerener, Oct 8, 2024)
- `897ca5e` Clone the repo first (heerener, Oct 8, 2024)
- `0146e9e` Formatting (heerener, Oct 8, 2024)
- `3c95214` No spaces in curly braces (heerener, Oct 8, 2024)
- `8c0f4b5` Don't clone again (heerener, Oct 8, 2024)
- `0617ed2` Split builder and runtime container jobs (heerener, Oct 8, 2024)
- `7eacb3d` Debug output (heerener, Oct 8, 2024)
- `f6ecd12` Auto-formatting doesn't split lines very well (heerener, Oct 8, 2024)
- `90fbcb7` Another attempt at quoting (heerener, Oct 8, 2024)
- `5616703` More quoting attempts (heerener, Oct 8, 2024)
- `b155d1b` Correct argument name (heerener, Oct 8, 2024)
- `82d06db` Tune builder Dockerfile to set view. (matz-e, Oct 8, 2024)
- `6c91d3d` Empty (heerener, Oct 8, 2024)
- `a80ba2c` Spack config and repos in ONBUILD instructions (heerener, Oct 8, 2024)
- `26590b5` No more need to copy packages.yaml (heerener, Oct 8, 2024)
- `acf7310` Debug info (heerener, Oct 8, 2024)
- `84134b3` ONBUILD ARG REPOS_BRANCH (heerener, Oct 8, 2024)
- `a0e0760` Try to build py-brain-indexer+mpi (heerener, Oct 9, 2024)
- `9e52f6c` Try multiple containers through the power of matrices! (heerener, Oct 9, 2024)
- `dee91fc` Use a large image for the actual spacktainer build (heerener, Oct 9, 2024)
- `a5a70ff` Don't fail the workflow for one failed container build (heerener, Oct 9, 2024)
- `b916a5f` Bigger! (heerener, Oct 14, 2024)
- `7ac6a33` Quick test (heerener, Oct 15, 2024)
- `6f7a99f` Restore the actual spacktainer workflows (heerener, Oct 15, 2024)
- `97cd377` BSD-458: remove image from runs-on (heerener, Oct 16, 2024)
- `3170013` BSD-458: update for tf codebuild project (heerener, Oct 17, 2024)
- `53db76c` Empty (heerener, Oct 17, 2024)
- `bed9240` BSD-458: apt-get update before apt-get install (heerener, Oct 17, 2024)
- `7273b94` Don't override image size (heerener, Oct 17, 2024)
- `74b94cc` py-brain-indexer: build numpy-quaternion without numba. (matz-e, Nov 5, 2024)
- `013a5fd` Bump Spack to include patched packages. (matz-e, Nov 17, 2024)
- `fc57959` Prefer MPICH. (matz-e, Nov 17, 2024)
- `2b3d1c9` Add neurodamus neocortex containre (matz-e, Nov 17, 2024)
- `dc55e26` Integrade Neurondamus containre, saner containre naming (matz-e, Nov 17, 2024)
- `2a4043b` Document local builds! (matz-e, Nov 17, 2024)
- `2cf576b` Tune the Spack (matz-e, Nov 20, 2024)
- `9bf40fe` builder: fix repos branch (matz-e, Nov 20, 2024)
- `3ffbf56` Typo (matz-e, Nov 20, 2024)
- `8ff5e9f` Simplify readme, add functionalizer. (matz-e, Nov 21, 2024)
- `e5bec0b` bump (matz-e, Nov 21, 2024)
- `182abc1` bump (matz-e, Nov 21, 2024)
- `c28ee5d` bump (matz-e, Nov 21, 2024)
- `241c503` Add more stuff. (matz-e, Nov 21, 2024)
- `24000cb` Add more build jobs (matz-e, Nov 21, 2024)
- `cc422a7` Don't build the brainz (matz-e, Nov 22, 2024)
- `28e31e4` Re-enable Bríon (matz-e, Nov 22, 2024)
- `bae0aa0` bump (matz-e, Nov 22, 2024)
- `8a44484` Add connectome-manipulator (matz-e, Nov 22, 2024)
- `2f216c7` Make builder workable stand-alone. (matz-e, Nov 22, 2024)
- `559fed5` Try to make connectome-manipulator work (matz-e, Nov 23, 2024)
- `1151301` More connectome-manipulator work (matz-e, Nov 23, 2024)
- `af2d9b7` More README updates. (matz-e, Nov 25, 2024)
- `61e0dde` Add multiscale-run containre (matz-e, Dec 2, 2024)
- `f3e110a` Forgot the action parameter, as usual 🤷 (matz-e, Dec 2, 2024)
- `1e1316e` Not hyper (matz-e, Dec 2, 2024)
- `9d3d248` Remove test workflow / action (heerener, Dec 3, 2024)
- `5c0186b` Add system-benchmarks container (heerener, Dec 3, 2024)
- `8505075` No me gusta el SuperLU (matz-e, Dec 3, 2024)
- `d486724` superlu-dist is superlu's secret identity (heerener, Dec 3, 2024)
- `6af9d76` Add system-benchmarks container (heerener, Dec 3, 2024)
- `38b7334` Have spack dump error logs. (matz-e, Dec 3, 2024)
- `e05f8af` Attempt to downgrade 💀✊☁️ petsc (matz-e, Dec 4, 2024)
- `fa12408` Bump neurodamus in the neocortex container. (matz-e, Dec 4, 2024)
- `98a633b` Attempt to push to GHCR (matz-e, Dec 4, 2024)
- `9dc7475` Make things more complicated for GHCR (matz-e, Dec 4, 2024)
70 changes: 70 additions & 0 deletions .github/actions/build_container/action.yaml
@@ -0,0 +1,70 @@
```yaml
---
name: Build Base Container
description: Build runtime / builder depending on variables
inputs:
  AWS_ECR_URL:
    description: Base URL for AWS ECR where our containers live
    required: true
  AWS_ECR_PATH:
    description: Path under the AWS ECR where this specific container lives
    required: true
  AWS_ACCESS_KEY_ID:
    description: Access key ID for AWS
    required: true
  AWS_SECRET_ACCESS_KEY:
    description: Secret access key for AWS
    required: true
  BUILD_PATH:
    description: Under which directory in this repo the Dockerfile lives
    required: true
  BUILDAH_EXTRA_ARGS:
    description: Extra args to pass to buildah
    required: false
    default: ''
  DOCKERHUB_USER:
    description: Username for Dockerhub authentication
    required: true
  DOCKERHUB_PASSWORD:
    description: Password for Dockerhub authentication
    required: true
  GHCR_USER:
    description: Username for GHCR authentication
    required: true
  GHCR_TOKEN:
    description: Token for GHCR authentication
    required: true
  GHCR_PATH:
    description: Path under GHCR where this specific container lives
    required: true
  SPACK_DEPLOYMENT_KEY_PUB:
    description: Public key for spack deployments
    required: true
  SPACK_DEPLOYMENT_KEY_PRIVATE:
    description: Private key for spack deployments
    required: true
runs:
  using: composite
  steps:
    - name: create builder
      shell: bash
      run: |-
        echo "Building container in ${{ inputs.BUILD_PATH }}"
        set -x
        apt-get update
        apt-get install -y awscli buildah podman
        export STORAGE_DRIVER=vfs     # allows building inside containers without additional mounts
        export BUILDAH_FORMAT=docker  # enables ONBUILD instructions, which are not OCI compatible
        export REGISTRY_IMAGE_TAG=latest  # for now
        echo "${{ inputs.SPACK_DEPLOYMENT_KEY_PUB }}" > ${{ inputs.BUILD_PATH }}/key.pub
        echo "${{ inputs.SPACK_DEPLOYMENT_KEY_PRIVATE }}" > ${{ inputs.BUILD_PATH }}/key
        aws ecr get-login-password --region us-east-1 | buildah login --username AWS --password-stdin ${{ inputs.AWS_ECR_URL }}
        buildah login --username ${{ inputs.DOCKERHUB_USER }} --password ${{ inputs.DOCKERHUB_PASSWORD }} docker.io
        buildah login --username ${{ inputs.GHCR_USER }} --password ${{ inputs.GHCR_TOKEN }} ghcr.io
        # This is written like this in case $BUILDAH_EXTRA_ARGS has args that require spaces,
        # which is tricky with shell variable expansion. Similar to Kaniko, see also:
        # https://github.com/GoogleContainerTools/kaniko/issues/1803
        export IFS=''
        COMMAND="buildah bud --iidfile image_id ${{ inputs.BUILDAH_EXTRA_ARGS }} ${{ inputs.BUILD_PATH }}"
        echo "${COMMAND}"
        eval "${COMMAND}"
        # Sometimes buildah push fails on the first attempt, so retry once after a pause.
        # The braces matter: without them, `|| sleep 10; push` would always run the second push.
        buildah push $(<image_id) "docker://${{ inputs.AWS_ECR_URL }}${{ inputs.AWS_ECR_PATH }}:${REGISTRY_IMAGE_TAG}" || { sleep 10; buildah push $(<image_id) "docker://${{ inputs.AWS_ECR_URL }}${{ inputs.AWS_ECR_PATH }}:${REGISTRY_IMAGE_TAG}"; }
        # Also push to GHCR
        buildah push $(<image_id) "docker://ghcr.io${{ inputs.GHCR_PATH }}:${REGISTRY_IMAGE_TAG}" || { sleep 10; buildah push $(<image_id) "docker://ghcr.io${{ inputs.GHCR_PATH }}:${REGISTRY_IMAGE_TAG}"; }
```
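The push commands retry once because `buildah push` occasionally fails on the first attempt. The same idea can be factored into a small helper; the following is an illustrative sketch only (the `retry` function is not part of the action, and the pause is shortened from the action's 10 seconds):

```shell
#!/bin/sh
# Illustrative retry helper: run a command; on failure, pause and
# try again, up to the given number of attempts.
retry() {
    attempts=$1
    shift
    n=1
    until "$@"; do
        if [ "$n" -ge "$attempts" ]; then
            return 1
        fi
        n=$((n + 1))
        sleep 1  # the action waits 10 seconds between push attempts
    done
}

# In the action this would wrap each registry push, e.g.:
#   retry 2 buildah push "$(< image_id)" "docker://..."
retry 2 echo "pushed"  # succeeds on the first attempt and prints "pushed"
```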
121 changes: 121 additions & 0 deletions .github/workflows/spacktainer.yaml
@@ -0,0 +1,121 @@
```yaml
---
name: Build Spacktainers
on: [push]
jobs:
  builder-container-job:
    runs-on:
      - codebuild-spacktainers-tf-${{ github.run_id }}-${{ github.run_attempt }}
      - instance-size:small
    steps:
      - name: clone repo
        uses: actions/checkout@v4
      - name: create builder
        uses: ./.github/actions/build_container
        with:
          AWS_ECR_URL: ${{ secrets.AWS_ECR_URL }}
          AWS_ECR_PATH: /spacktainers/builder
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ECR_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_ECR_SECRET_ACCESS_KEY }}
          GHCR_USER: ${{ secrets.GHCR_USER }}
          GHCR_TOKEN: ${{ secrets.GHCR_TOKEN }}
          GHCR_PATH: /bluebrain/spack-builder
          BUILD_PATH: builder
          BUILDAH_EXTRA_ARGS: >-
            --label org.opencontainers.image.revision="$GITHUB_SHA"
            --label org.opencontainers.image.authors="$GITHUB_TRIGGERING_ACTOR"
            --label org.opencontainers.image.url="https://github.com/${GITHUB_REPOSITORY}"
            --label org.opencontainers.image.source="https://github.com/${GITHUB_REPOSITORY}"
            --label ch.epfl.bbpgitlab.ci-pipeline-url="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID"
            --label ch.epfl.bbpgitlab.ci-commit-branch="$GITHUB_REF_NAME"
            --build-arg SPACK_BRANCH=develop
          # ' --label org.opencontainers.image.created="$CI_JOB_STARTED_AT"'
          DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}
          SPACK_DEPLOYMENT_KEY_PUB: ${{ secrets.SPACK_DEPLOYMENT_KEY_PUB }}
          SPACK_DEPLOYMENT_KEY_PRIVATE: ${{ secrets.SPACK_DEPLOYMENT_KEY_PRIVATE }}
  runtime-container-job:
    runs-on:
      - codebuild-spacktainers-tf-${{ github.run_id }}-${{ github.run_attempt }}
      - instance-size:small
    steps:
      - name: clone repo
        uses: actions/checkout@v4
      - name: create runtime
        uses: ./.github/actions/build_container
        with:
          AWS_ECR_URL: ${{ secrets.AWS_ECR_URL }}
          AWS_ECR_PATH: /spacktainers/runtime
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ECR_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_ECR_SECRET_ACCESS_KEY }}
          GHCR_USER: ${{ secrets.GHCR_USER }}
          GHCR_TOKEN: ${{ secrets.GHCR_TOKEN }}
          GHCR_PATH: /bluebrain/spack-runtime
          BUILD_PATH: runtime
          BUILDAH_EXTRA_ARGS: >-
            --label org.opencontainers.image.revision="$GITHUB_SHA"
            --label org.opencontainers.image.authors="$GITHUB_TRIGGERING_ACTOR"
            --label org.opencontainers.image.url="https://github.com/${GITHUB_REPOSITORY}"
            --label org.opencontainers.image.source="https://github.com/${GITHUB_REPOSITORY}"
            --label ch.epfl.bbpgitlab.ci-pipeline-url="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID"
            --label ch.epfl.bbpgitlab.ci-commit-branch="$GITHUB_REF_NAME"
            --build-arg SPACK_BRANCH=develop
          # ' --label org.opencontainers.image.created="$CI_JOB_STARTED_AT"'
          DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}
          SPACK_DEPLOYMENT_KEY_PUB: ${{ secrets.SPACK_DEPLOYMENT_KEY_PUB }}
          SPACK_DEPLOYMENT_KEY_PRIVATE: ${{ secrets.SPACK_DEPLOYMENT_KEY_PRIVATE }}
  spacktainer-build-job:
    strategy:
      matrix:
        spacktainer:
          - appositionizer
          - brain-indexer
          - brayns
          - connectome-manipulator
          - functionalizer
          - multiscale-run
          - neurodamus-hippocampus
          - neurodamus-neocortex
          - system-benchmarks
    runs-on:
      - codebuild-spacktainers-tf-${{ github.run_id }}-${{ github.run_attempt }}
    continue-on-error: true
    needs: [builder-container-job, runtime-container-job]
    steps:
      - name: clone repo
        uses: actions/checkout@v4
      - name: prepare to build container
        env:
          AWS_ECR_URL: ${{ secrets.AWS_ECR_URL }}
        run: |-
          cd container_definitions/amd64/${{ matrix.spacktainer }}
          cat << EOF > Dockerfile
          FROM ${AWS_ECR_URL}/spacktainers/builder:latest AS builder
          FROM ${AWS_ECR_URL}/spacktainers/runtime:latest

          # Triggers building the 'builder' image, otherwise it is optimized away
          COPY --from=builder /etc/debian_version /etc/debian_version
          EOF
      - name: build ${{ matrix.spacktainer }}
        uses: ./.github/actions/build_container
        with:
          AWS_ECR_URL: ${{ secrets.AWS_ECR_URL }}
          AWS_ECR_PATH: /spacktainers/${{ matrix.spacktainer }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ECR_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_ECR_SECRET_ACCESS_KEY }}
          GHCR_USER: ${{ secrets.GHCR_USER }}
          GHCR_TOKEN: ${{ secrets.GHCR_TOKEN }}
          GHCR_PATH: /bluebrain/spack-${{ matrix.spacktainer }}
          BUILD_PATH: container_definitions/amd64/${{ matrix.spacktainer }}
          BUILDAH_EXTRA_ARGS: >-
            --label org.opencontainers.image.revision="$GITHUB_SHA"
            --label org.opencontainers.image.authors="$GITHUB_TRIGGERING_ACTOR"
            --label org.opencontainers.image.url="https://github.com/${GITHUB_REPOSITORY}"
            --label org.opencontainers.image.source="https://github.com/${GITHUB_REPOSITORY}"
            --label ch.epfl.bbpgitlab.ci-pipeline-url="$GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID"
            --label ch.epfl.bbpgitlab.ci-commit-branch="$GITHUB_REF_NAME"
            --build-arg SPACK_BRANCH=develop
            --build-arg CACHE_BUCKET=${{ secrets.AWS_CACHE_BUCKET }}
            --build-arg MIRROR_AUTH_ARG="\"--s3-access-key-id='${{ secrets.AWS_CACHE_ACCESS_KEY_ID }}
            --s3-access-key-secret=${{ secrets.AWS_CACHE_SECRET_ACCESS_KEY }}'\""
          # ' --label org.opencontainers.image.created="$CI_JOB_STARTED_AT"'
          DOCKERHUB_USER: ${{ secrets.DOCKERHUB_USER }}
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}
          SPACK_DEPLOYMENT_KEY_PUB: ${{ secrets.SPACK_DEPLOYMENT_KEY_PUB }}
          SPACK_DEPLOYMENT_KEY_PRIVATE: ${{ secrets.SPACK_DEPLOYMENT_KEY_PRIVATE }}
```
143 changes: 111 additions & 32 deletions README.md
@@ -9,7 +9,7 @@ This repository aims to be the one-stop shop for all of our container needs.

## Defining containers

The only files you should have to edit as an end-user are located in the `container_definitions` folder. There's a subfolder per architecture (currently supported: `amd64` and `arm64`) under which both `yaml` (in subdirectories) and `def` files can live.
* A YAML file defines a Spack container - in it you can define the Spack specs as you would in a Spack environment. If you have specific requirements for dependencies, you can add `spack: packages: ...` keys to define those, again, as in a Spack environment.
* A `def` file defines a Singularity container that will be built from an existing container on Docker Hub; `nexus-storage` is already defined for `amd64` as an example.
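For illustration, a hypothetical YAML definition at `container_definitions/amd64/my-container/spack.yaml` could look like the following sketch; the spec and the MPI requirement are made-up placeholders (see the existing definitions in this repository for real examples):

```yaml
spack:
  specs:
    - zlib            # placeholder spec; list the packages the container needs
  packages:
    mpi:
      require: mpich  # example dependency requirement, as in a Spack environment
```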

@@ -45,38 +45,15 @@ spacktainer:

# Developer documentation

## Components

* Spacktainerizer: the base image which contains our spack fork
* Singularitah: arm64 container with singularity and s3cmd installation for sif manipulation on arm nodes
* Spack-cacher: builds spack packages and puts them in a build cache
* Spackitor: cleans the build cache: anything that is too old or no longer used gets removed
* Spackah: builds the actual containers

## Build Order

1. base containers
* Build runtime / builder
* Build singularitah
2. packages
* Build cache
2. application containers
* Build containers
* Every package build will be pushed to the cache directly after build
* Publish containers

## Pipeline logic

While the pipeline is organised in stages, jobs jump the queue wherever they can to optimise build times. As such, we'll ignore the stages here and look at the actual execution order:
* `generate base pipeline`: the "entrypoint" that will generate the necessary jobs to:
* build the builder, runtime and singularitah containers if necessary. These containers will be built only for the architectures needed for the final containers. These jobs will be generated only for the containers that need to be built.
* run `spack ci generate` and process its output. This is needed because Gitlab imposes a fairly tight restriction on how large a YAML file can be and Spack can easily surpass that. To work around this, we take the output YAML and split it into multiple pipelines along the generated stages.
* Clean the build cache buckets
* `base containers and pipeline generation`: will run the pipeline that was generated in the first step
* `gather child artifacts`: will collect the yaml generated in the `base containers and pipeline generation` child pipeline. This is needed because Gitlab doesn't allow triggering artifacts from a child pipeline
* `populate buildcache for amd64`: run the jobs that `spack ci generate` produced in order to populate the buildcache
* `build spacktainers for amd64`: this workflow was also generated in the `base containers and pipeline generation` child pipeline and will build the actual containers, if necessary.


## CI/CD Variables

* `AWS_CACHE_ACCESS_KEY_ID` / `AWS_CACHE_SECRET_ACCESS_KEY`: AWS keypair for accessing the cache bucket hosted by Amazon
@@ -87,12 +64,6 @@ While the pipeline is organised in stages, jobs jump the queue wherever they can
* `DOCKERHUB_USER` / `DOCKERHUB_PASSWORD`: credentials for docker hub
* `GITLAB_API_TOKEN`: private (!) gitlab token with API_READ access (CI_JOB_TOKEN does not have enough permissions). Change this once I'm gone

## Base containers

* [Singularitah](bbpgitlab.epfl.ch:5050/hpc/spacktainers/singularitah)
* [Builder](bbpgitlab.epfl.ch:5050/hpc/spacktainers/builder)
* [Runner](bbpgitlab.epfl.ch:5050/hpc/spacktainers/runtime)

## Repository layout

There are a few python projects in this repository:
@@ -129,6 +100,114 @@ The main entrypoints can be found, unsurprisingly, in the `__main__.py` file.

`utils.py` contains utility functions for reading/writing yaml, getting the multiarch job for a container, ...

## Pulling images with Sarus or Podman

Make sure you have your AWS credentials set up. Then identify the image you want to run.
In the following, `spacktainers/neurodamus-neocortex` is going to be used. Identify the
URL of the registry:
```
❯ aws ecr describe-repositories --repository-names spacktainers/neurodamus-neocortex
{
"repositories": [
{
"repositoryArn": "arn:aws:ecr:us-east-1:130659266700:repository/spacktainers/neurodamus-neocortex",
"registryId": "130659266700",
"repositoryName": "spacktainers/neurodamus-neocortex",
"repositoryUri": "130659266700.dkr.ecr.us-east-1.amazonaws.com/spacktainers/neurodamus-neocortex",
"createdAt": "2024-11-20T17:32:11.169000+01:00",
"imageTagMutability": "MUTABLE",
"imageScanningConfiguration": {
"scanOnPush": false
},
"encryptionConfiguration": {
"encryptionType": "AES256"
}
}
]
}

```
Note the `repositoryUri` key. This will be used to log in with either Podman or Sarus.
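The login steps below need the registry host on its own, while pulls use the full URI. Splitting the `repositoryUri` is plain shell; the variable names here are just for illustration, with the value taken from the output above:

```shell
#!/bin/sh
# Split an ECR repositoryUri into the registry host (used for login)
# and the repository path within that registry (used when pulling).
REPO_URI="130659266700.dkr.ecr.us-east-1.amazonaws.com/spacktainers/neurodamus-neocortex"
REGISTRY="${REPO_URI%%/*}"   # everything before the first slash
REPOSITORY="${REPO_URI#*/}"  # everything after it
echo "$REGISTRY"             # 130659266700.dkr.ecr.us-east-1.amazonaws.com
echo "$REPOSITORY"           # spacktainers/neurodamus-neocortex
```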

Get a login token from AWS:
```
❯ aws ecr get-login-password
[secret]
```

### Pulling with Podman

Log into the registry, using `AWS` as the username:
```
❯ aws ecr get-login-password|podman login -u AWS --password-stdin 130659266700.dkr.ecr.us-east-1.amazonaws.com
```
Then pull the full `repositoryUri`:
```
❯ podman pull 130659266700.dkr.ecr.us-east-1.amazonaws.com/spacktainers/neurodamus-neocortex
```

### Pulling with Sarus

Everything in Sarus goes into one command:
```
❯ sarus pull --login -u AWS 130659266700.dkr.ecr.us-east-1.amazonaws.com/spacktainers/neurodamus-neocortex
```

## Reproducing GitHub Action builds containerized

See the instructions above under [pulling containers](#user-content-pulling-with-podman) to
log in and pull the `spacktainers/builder` container.
Then launch the container and install something, e.g.:
```
❯ podman run -it 130659266700.dkr.ecr.us-east-1.amazonaws.com/spacktainers/builder
root@43dec0527c62:/# (cd /opt/spack-repos/ && git pull)
Already up to date.
root@43dec0527c62:/# spack install zlib
[...]
```
Environments may be recreated from the specifications under
[`container_definitions/`](./container_definitions).

## Reproducing GitHub Action builds locally

Prerequisites needed to try the container building locally:

1. The upstream Spack commit we are using in the
[`builder/Dockerfile`](builder/Dockerfile), in the argument `SPACK_BRANCH` (may be
overwritten by the CI). Referred to as `${SPACK_BRANCH}` here.
2. Access to the S3 bucket that holds the binary cache, denoted by the `CACHE_BUCKET`
argument in the same file. Referred to as `${CACHE_BUCKET}` here.

Set up upstream Spack, and source it:
```
❯ gh repo clone spack/spack
❯ cd spack
❯ git fetch --depth=1 origin ${SPACK_BRANCH}
❯ git reset --hard FETCH_HEAD
❯ . ./share/spack/setup-env.sh
❯ cd ..
```
Then clone our own Spack fork and add the repositories:
```
❯ gh repo clone BlueBrain/spack spack-blue
❯ spack repo add --scope=site spack-blue/bluebrain/repo-patches
❯ spack repo add --scope=site spack-blue/bluebrain/repo-bluebrain
```
Configure the mirror and set the generic architecture:
```
❯ spack mirror add --scope=site build_s3 ${CACHE_BUCKET}
❯ spack config --scope=site add packages:all:require:target=x86_64_v3
```
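For reference, the `spack config add` command above should leave the site scope with a configuration roughly equivalent to this fragment (a sketch; the exact file layout may differ between Spack versions):

```yaml
packages:
  all:
    require: target=x86_64_v3
```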
Now the basic Spack installation should be ready to use and pull from the build cache.

Then one may pick a container specification and create environments from it, e.g.:
```
❯ spack env create brindex spacktainers/container_definitions/amd64/py-brain-indexer/spack.yaml
❯ spack env activate brindex
❯ spack concretize -f
❯ spack install
```

# Acknowledgment

The development of this software was supported by funding to the Blue Brain Project,