This document defines `Pipelines` and their capabilities.

To define a configuration file for a `Pipeline` resource, you can specify the
following fields:

- Required:
  - `apiVersion` - Specifies the API version, for example `tekton.dev/v1alpha1`
  - `kind` - Specifies the `Pipeline` resource object
  - `metadata` - Specifies data to uniquely identify the `Pipeline` resource
    object, for example a `name`
  - `spec` - Specifies the configuration information for your `Pipeline`
    resource object. In order for a `Pipeline` to do anything, the spec must
    include:
    - `tasks` - Specifies which `Tasks` to run and how to run them
- Optional:
  - `resources` - Specifies which `PipelineResources` of which types the
    `Pipeline` will be using in its Tasks
  - `tasks`
    - `resources.inputs` / `resources.outputs`
      - `from` - Used when the content of the `PipelineResource` should come
        from the output of a previous Pipeline Task
    - `runAfter` - Used when the Pipeline Task should be executed after another
      Pipeline Task, but there is no output linking required
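For instance, here is a minimal sketch that combines the required fields; the
`Pipeline` name and the referenced `Task` name are placeholders used purely for
illustration:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: my-pipeline            # placeholder name for this Pipeline
spec:
  tasks:
    - name: my-pipeline-task   # a Pipeline Task
      taskRef:
        name: my-task          # references an existing Task by name
```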
In order for a `Pipeline` to interact with the outside world, it will probably
need `PipelineResources`, which will be given to `Tasks` as inputs and outputs.

Your `Pipeline` must declare the `PipelineResources` it needs in a `resources`
section in the `spec`, giving each a name which will be used to refer to these
`PipelineResources` in the `Tasks`.
For example:

```yaml
spec:
  resources:
    - name: my-repo
      type: git
    - name: my-image
      type: image
```
`Pipeline`s can declare input parameters that must be supplied to the
`Pipeline` during a `PipelineRun`. Pipeline parameters can be used to replace
template values in `PipelineTask` parameters' values.

Parameter names are limited to alphanumeric characters, `-` and `_`, and can
only start with alphabetic characters and `_`. For example, `fooIs-Bar_` is a
valid parameter name, but `barIsBa$` and `0banana` are not.
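As a quick sketch of these naming rules (the parameter names below are the
illustrative ones from above, not required names):

```yaml
spec:
  params:
    - name: fooIs-Bar_     # valid: starts with an alpha character, uses - and _
      description: An illustrative parameter
    # - name: barIsBa$     # invalid: $ is not an allowed character
    # - name: 0banana      # invalid: names cannot start with a digit
```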
The following example shows how `Pipeline`s can be parameterized, and these
parameters can be passed to the `Pipeline` from a `PipelineRun`.

Input parameters in the form of `${params.foo}` are replaced inside of the
`PipelineTask` parameters' values (see also templating).

The following `Pipeline` declares an input parameter called 'context', and uses
it in the `PipelineTask`'s parameter. The `description` and `default` fields
for a parameter are optional, and if the `default` field is specified and this
`Pipeline` is used by a `PipelineRun` without specifying a value for 'context',
the `default` value will be used.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      description: Path to context
      default: /some/where/or/other
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: "${params.context}"
```
The following `PipelineRun` supplies a value for `context`:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: pipelinerun-with-parameters
spec:
  pipelineRef:
    name: pipeline-with-parameters
  params:
    - name: "context"
      value: "/workspace/examples/microservices/leeroy-web"
```
A `Pipeline` will execute a graph of `Tasks` (see ordering for how to express
this graph). At a minimum, this declaration must include a reference to the
`Task`:
```yaml
tasks:
  - name: build-the-image
    taskRef:
      name: build-push
```
Declared `PipelineResources` can be given to `Task`s in the `Pipeline` as
inputs and outputs, for example:
```yaml
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      resources:
        inputs:
          - name: workspace
            resource: my-repo
        outputs:
          - name: image
            resource: my-image
```
Parameters can also be provided:
```yaml
spec:
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: /workspace/examples/microservices/leeroy-web
```
Sometimes you will have Pipeline Tasks that need to take as input the output
of a previous `Task`, for example, an image built by a previous `Task`.

Express this dependency by adding `from` on `PipelineResources` that your
`Tasks` need.
- The (optional) `from` key on an input source defines a set of previous
  `PipelineTasks` (i.e. the named instance of a `Task`) in the `Pipeline`
- When the `from` key is specified on an input source, the version of the
  resource that is from the defined list of tasks is used
- `from` can support fan in and fan out
- The `from` clause expresses ordering, i.e. the Pipeline Task which provides
  the `PipelineResource` must run before the Pipeline Task which needs that
  `PipelineResource` as an input
  - The name of the `PipelineResource` must correspond to a `PipelineResource`
    from the `Task` that the referenced `PipelineTask` gives as an output
For example, see this `Pipeline` spec:
```yaml
- name: build-app
  taskRef:
    name: build-push
  resources:
    outputs:
      - name: image
        resource: my-image
- name: deploy-app
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-image
        from:
          - build-app
```
The resource `my-image` is expected to be given to the `deploy-app` `Task` from
the `build-app` `Task`. This means that the `PipelineResource` `my-image` must
also be declared as an output of `build-app`.

This also means that the `build-app` Pipeline Task will run before
`deploy-app`, regardless of the order they appear in the spec.
Sometimes you will have Pipeline Tasks that need to run in a certain order, but
they do not have an explicit output-to-input dependency (which is expressed via
`from`). In this case you can use `runAfter` to indicate that a Pipeline Task
should be run after one or more previous Pipeline Tasks.
For example, see this `Pipeline` spec:
```yaml
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: my-repo
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
```
In this `Pipeline`, we want to test the code before we build from it, but there
is no output from `test-app`, so `build-app` uses `runAfter` to indicate that
`test-app` should run before it, regardless of the order they appear in the
spec.
The Pipeline Tasks in a `Pipeline` can be connected and run in a graph,
specifically a Directed Acyclic Graph or DAG. Each of the Pipeline Tasks is a
node, which can be connected (i.e. a Graph) such that one will run before
another (i.e. Directed), and the execution will eventually complete (i.e.
Acyclic, it will not get caught in infinite loops).

This is done using:
- `from` clauses on the `PipelineResources` needed by a `Task`
- `runAfter` clauses on the Pipeline Tasks
For example, see this `Pipeline` spec:
```yaml
- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        from:
          - build-app
      - name: my-frontend-image
        from:
          - build-frontend
```
This will result in the following execution graph:

```
        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
   \          /
    v        v
    deploy-all
```
- The `lint-repo` and `test-app` Pipeline Tasks will begin executing
  simultaneously. (They have no `from` or `runAfter` clauses.)
- Once `test-app` completes, both `build-app` and `build-frontend` will begin
  executing simultaneously (both `runAfter` `test-app`).
- When both `build-app` and `build-frontend` have completed, `deploy-all` will
  execute (it requires `PipelineResources` from both Pipeline Tasks).
- The entire `Pipeline` will be finished executing after `lint-repo` and
  `deploy-all` have completed.
For complete examples, see the examples folder.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.