Redesign CI #9
Using GitHub's GPU beta runners the cost per minute is $0.07 and our e2e tests used to take 40 minutes, so that is $2.80 per job, or $4.20 per hour. These machines are also based on Tesla T4s, which are quite old now, although that is perhaps a good thing. Other CI providers have similar or greater costs. For comparison, an A16 machine on Vultr is $0.50 per hour, although we may have to manage creating and destroying instances ourselves. Note too that the 40-minute figure is for CPU-only runs; once we bring all the GPU dependencies into the mix, a full run could take longer.
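If we go the per-hour provider route, managing instance creation and destruction from CI could look roughly like the sketch below. This assumes the Vultr v2 REST API (`POST /v2/instances` to create, `DELETE /v2/instances/{id}` to destroy); the `plan`, `region`, and `os_id` values are placeholders, not verified GPU plan identifiers.

```python
import os
import requests

# Sketch only: Vultr v2 API assumed; plan/region/os_id below are placeholders.
API = "https://api.vultr.com/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['VULTR_API_KEY']}"}

def create_runner() -> str:
    """Create a throwaway instance for one e2e run and return its id."""
    resp = requests.post(f"{API}/instances", headers=HEADERS, json={
        "label": "e2e-ci-runner",
        "region": "ewr",              # placeholder region
        "plan": "vcg-a16-placeholder",  # placeholder GPU plan id
        "os_id": 1743,                # placeholder OS image id
    })
    resp.raise_for_status()
    return resp.json()["instance"]["id"]

def destroy_runner(instance_id: str) -> None:
    """Tear the instance down again in an always-run cleanup step."""
    requests.delete(f"{API}/instances/{instance_id}", headers=HEADERS).raise_for_status()
```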
A V100 on DataCrunch (on demand) is $0.88 per hour.
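As a back-of-the-envelope comparison of the figures above at the current 40-minute job length (a lower bound once GPU dependencies are added):

```python
# Cost per e2e job from the per-hour rates quoted above.
JOB_MINUTES = 40

hourly_rates = {
    "GitHub GPU beta runner": 0.07 * 60,       # $0.07/min = $4.20/hr
    "Vultr A16": 0.50,
    "DataCrunch V100 (on demand)": 0.88,
}

for provider, rate in hourly_rates.items():
    per_job = rate * JOB_MINUTES / 60
    print(f"{provider}: ${rate:.2f}/hr, ${per_job:.2f} per {JOB_MINUTES}-min job")
```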
One option would be to take a QEMU VM snapshot of a running k3s cluster with the NVIDIA GPU operator installed, then load that snapshot at the start of each test run and do the install etc. from there (a rough sketch of the restore step follows below). Problems:
Pros:
It seems there is only limited crossover between GPUs that support virtualization/passthrough and GPUs supported by the operator; it is pretty much limited to data-center GPUs with passive cooling.
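A minimal sketch of the snapshot-restore idea, assuming a qcow2 image that already holds a running-state snapshot (taken with the QEMU monitor command `savevm` after k3s and the GPU operator were installed); the file name, snapshot tag, and resource sizes are placeholders, and live-state snapshots generally do not cover passed-through physical devices.

```python
import subprocess

# Assumed setup: "k3s-ci.qcow2" contains a saved running state named
# "k3s-ready", created with `savevm k3s-ready` in the QEMU monitor once
# k3s and the NVIDIA GPU operator were installed. All names are placeholders.
QEMU = "qemu-system-x86_64"
DISK = "k3s-ci.qcow2"
SNAPSHOT = "k3s-ready"

def boot_from_snapshot() -> subprocess.Popen:
    """Start the VM directly in the saved state so each test run skips the install."""
    return subprocess.Popen([
        QEMU,
        "-machine", "q35,accel=kvm",
        "-cpu", "host",
        "-m", "8G",
        "-smp", "4",
        "-drive", f"file={DISK},format=qcow2,if=virtio",
        "-loadvm", SNAPSHOT,   # restore the saved VM state at startup
        "-display", "none",
        "-serial", "mon:stdio",
    ])

if __name__ == "__main__":
    vm = boot_from_snapshot()
    try:
        pass  # run the e2e suite against the cluster inside the VM here
    finally:
        vm.terminate()
```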
It's not workable at the moment, see #9.
There are a number of issues to break out here.
They may or may not all share the same solution; this needs to be investigated.