Pipelines

This document defines Pipelines and their capabilities.


Syntax

To define a configuration file for a Pipeline resource, you can specify the following fields:

  • Required:
    • apiVersion - Specifies the API version, for example tekton.dev/v1alpha1.
    • kind - Specifies that this is a Pipeline resource object.
    • metadata - Specifies data to uniquely identify the Pipeline resource object, for example a name.
    • spec - Specifies the configuration information for your Pipeline resource object. In order for a Pipeline to do anything, the spec must include:
      • tasks - Specifies which Tasks to run and how to run them
  • Optional:
    • resources - Specifies the PipelineResources the Pipeline's Tasks will use as inputs and outputs (see Declared resources)
    • params - Specifies the input parameters the Pipeline accepts, which can be supplied by a PipelineRun (see Parameters)
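Putting these fields together, a minimal Pipeline that runs a single Task could look like the sketch below (my-pipeline is a placeholder name; build-push is the example Task used throughout this document):

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: my-pipeline
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push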

Declared resources

In order for a Pipeline to interact with the outside world, it will probably need PipelineResources which will be given to Tasks as inputs and outputs.

Your Pipeline must declare the PipelineResources it needs in a resources section in the spec, giving each a name which will be used to refer to these PipelineResources in the Tasks.

For example:

spec:
  resources:
    - name: my-repo
      type: git
    - name: my-image
      type: image
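The actual PipelineResources these names refer to are supplied when the Pipeline is run. A minimal sketch of the corresponding binding in a PipelineRun (assuming PipelineResource objects named skaffold-git and skaffold-image already exist in the cluster; these names, like my-pipeline, are placeholders) might look like:

spec:
  pipelineRef:
    name: my-pipeline
  resources:
    - name: my-repo
      resourceRef:
        name: skaffold-git
    - name: my-image
      resourceRef:
        name: skaffold-image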

Parameters

Pipelines can declare input parameters that must be supplied to the Pipeline during a PipelineRun. Pipeline parameters can be used to replace template values in PipelineTask parameters' values.

Parameter names are limited to alphanumeric characters, - and _, and can only start with alphabetic characters and _. For example, fooIs-Bar_ is a valid parameter name, while barIsBa$ or 0banana are not.

Usage

The following example shows how a Pipeline can be parameterized and how those parameters can be passed to the Pipeline from a PipelineRun.

Input parameters in the form of ${params.foo} are replaced inside of the PipelineTask parameters' values (see also templating).

The following Pipeline declares an input parameter called 'context' and uses it in a PipelineTask's parameter. The description and default fields for a parameter are optional; if default is specified and this Pipeline is used by a PipelineRun that does not supply a value for 'context', the default value will be used.

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      description: Path to context
      default: /some/where/or/other
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: "${params.context}"

The following PipelineRun supplies a value for context:

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: pipelinerun-with-parameters
spec:
  pipelineRef:
    name: pipeline-with-parameters
  params:
    - name: "context"
      value: "/workspace/examples/microservices/leeroy-web"

Pipeline Tasks

A Pipeline will execute a graph of Tasks (see ordering for how to express this graph). At a minimum, this declaration must include a reference to the Task:

tasks:
  - name: build-the-image
    taskRef:
      name: build-push

Declared PipelineResources can be given to Tasks in the Pipeline as inputs and outputs, for example:

spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      resources:
        inputs:
          - name: workspace
            resource: my-repo
        outputs:
          - name: image
            resource: my-image

Parameters can also be provided:

spec:
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: /workspace/examples/microservices/leeroy-web

from

Sometimes you will have Pipeline Tasks that need to take as input the output of a previous Task, for example, an image built by a previous Task.

Express this dependency by adding from on PipelineResources that your Tasks need.

  • The (optional) from key on an input source defines a set of previous PipelineTasks (i.e. the named instances of Tasks) in the Pipeline
  • When the from key is specified on an input source, the version of the resource that comes from the defined list of tasks is used
  • from supports both fan-in and fan-out
  • The from clause expresses ordering, i.e. the Pipeline Task which provides the PipelineResource must run before the Pipeline Task which needs that PipelineResource as an input
    • The name of the PipelineResource must correspond to a PipelineResource that the referenced PipelineTask provides as an output

For example see this Pipeline spec:

- name: build-app
  taskRef:
    name: build-push
  resources:
    outputs:
      - name: image
        resource: my-image
- name: deploy-app
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-image
        from:
          - build-app

The resource my-image is expected to be given to the deploy-app Task from the build-app Task. This means that the PipelineResource my-image must also be declared as an output of build-app.

This also means that the build-app Pipeline Task will run before deploy-app, regardless of the order they appear in the spec.

runAfter

Sometimes you will have Pipeline Tasks that need to run in a certain order but do not have an explicit output-to-input dependency (which would be expressed via from). In this case you can use runAfter to indicate that a Pipeline Task should run after one or more previous Pipeline Tasks.

For example see this Pipeline spec:

- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: my-repo
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo

In this Pipeline, we want to test the code before we build from it, but there is no output from test-app, so build-app uses runAfter to indicate that test-app should run before it, regardless of the order they appear in the spec.

Ordering

The Pipeline Tasks in a Pipeline can be connected and run in a graph, specifically a Directed Acyclic Graph or DAG. Each of the Pipeline Tasks is a node, which can be connected (i.e. a Graph) such that one will run before another (i.e. Directed), and the execution will eventually complete (i.e. Acyclic, it will not get caught in infinite loops).

This is done using:

  • from clauses on the PipelineResources needed by a Pipeline Task
  • runAfter clauses on the Pipeline Tasks

For example see this Pipeline spec:

- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
  resources:
    inputs:
      - name: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        from:
          - build-app
      - name: my-frontend-image
        from:
          - build-frontend

This will result in the following execution graph:

        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
   \          /
    v        v
    deploy-all

  1. The lint-repo and test-app Pipeline Tasks will begin executing simultaneously. (They have no from or runAfter clauses.)
  2. Once test-app completes, both build-app and build-frontend will begin executing simultaneously (both runAfter test-app).
  3. When both build-app and build-frontend have completed, deploy-all will execute (it requires PipelineResources from both Pipeline Tasks).
  4. The entire Pipeline will be finished executing after lint-repo and deploy-all have completed.

Examples

For complete examples, see the examples folder.

