Jenkins now supports, and gently recommends, storing the CI pipeline definitions in the repository being built (aka a `Jenkinsfile`). The advantages I see in this approach are: 1) the pipeline definition contains calls to build scripts and targets, so it is easier to coordinate changes when a single commit can change the build definition and the way it is invoked in the pipeline, and 2) changes to the pipeline definition can be treated with the same processes as other code changes (version controlled, code reviewed).
I see three challenges in migrating to this approach.
Security
Currently, we limit which jobs can be run on particular build agents by statically tagging the job definitions with the "Restrict where this job can be run" configuration. But pipeline definitions can apparently bypass this. Even though changes to the in-repo `Jenkinsfile` are code reviewed before merge, our PR validation might run an untrusted `Jenkinsfile` from a pull request, and we need to make sure this can only run on our `public` nodes. The Job Restrictions plugin might come to the rescue here.
Evaluate, select, and implement the right measure(s) for restricting untrusted builds to running on the public nodes.
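To make the threat concrete, here is a minimal sketch of a `Jenkinsfile` as a PR author could write it. The label, stage name, and `./build.sh` entry point are hypothetical; the point is that the agent label is chosen by whoever edits the file, so it cannot be trusted on its own — a PR author could just as easily request an internal label, which is why enforcement has to live on the Jenkins side (e.g. a node-side restriction such as the Job Restrictions plugin provides), not in this file:

```groovy
// Hypothetical Jenkinsfile arriving in a pull request.
pipeline {
    // The label below is merely a *request* made by the file's author;
    // nothing in the file itself stops them from asking for an internal node.
    agent { label 'public' }
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // hypothetical build entry point
            }
        }
    }
}
```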
Backwards compatibility
As we migrate to using a `Jenkinsfile`, we ideally don't want to break all in-flight pull requests, or the ability to re-run builds on historical commits. I think it is possible to have a pipeline defined in the traditional way (as Jenkins config) that checks for the existence of a `Jenkinsfile`, dynamically loads it if it exists, and otherwise uses the static config.
Test out this fallback facility.
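A sketch of what that fallback could look like, as a scripted pipeline pasted into the job's static Jenkins config. One caveat worth verifying during testing: `load` evaluates the target as a plain Groovy script, so this works naturally when the in-repo definition is a scripted pipeline; a declarative `Jenkinsfile` may need a different loading mechanism. The script path and `./build.sh` fallback are assumptions for illustration:

```groovy
// Statically-configured scripted pipeline (lives in the Jenkins job config,
// not in the repo). Prefers an in-repo pipeline script when one exists.
node {
    checkout scm
    if (fileExists('ci/pipeline.groovy')) {   // hypothetical in-repo script path
        load 'ci/pipeline.groovy'             // run the in-repo definition
    } else {
        // Legacy behaviour for historical commits and in-flight PRs.
        sh './build.sh'                       // hypothetical build entry point
    }
}
```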
Externalizing particulars of our build cluster
We don't want to hard-code too many details of our build cluster, especially ones that might change, into the scripts. Use a Pipeline Shared Library to abstract over such details.
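As a sketch of the idea, a shared-library custom step can hide the concrete agent labels behind an abstract "kind" of node. The library name `ci-lib`, the step name `buildNode`, and the label values are all hypothetical:

```groovy
// vars/buildNode.groovy in a hypothetical shared library named 'ci-lib'.
// Cluster-specific details (agent labels here) live in one place instead of
// being repeated in every Jenkinsfile.
def call(String kind, Closure body) {
    // Illustrative mapping only; the real labels would differ.
    def labels = [public: 'public-linux', trusted: 'internal-linux']
    node(labels[kind]) {
        body()
    }
}
```

A `Jenkinsfile` would then only name the abstract kind:

```groovy
@Library('ci-lib') _
buildNode('public') {
    sh './build.sh'   // hypothetical build entry point
}
```

If the cluster's labels change, only the library needs updating, not every repository's pipeline.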